text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
A Hybrid Mutual Coupling Reduction Technique in a Dual-Band MIMO Textile Antenna for WBAN and 5G Applications
This paper presents a hybrid mutual coupling reduction technique applied onto a dual-band textile MIMO antenna for wireless body area network and 5G applications. The MIMO antenna consists of two hexagonal patch antennas, each integrated with a split-ring (SR) and a bar slot to operate in dual-band mode at 2.45 GHz and 3.5 GHz. Each patch is dimensioned at <inline-formula> <tex-math notation="LaTeX">$47.2 \times 31$ </tex-math></inline-formula> mm<sup>2</sup>. This hybrid technique results in a simple structure, while enabling significant reduction of mutual coupling (MC) between the closely spaced patches (up to <inline-formula> <tex-math notation="LaTeX">$0.1\lambda$ </tex-math></inline-formula>). This technique combines a line patch and a patch rotation technique, explained as follows. First, a line patch is introduced at an optimized distance to enable operation with a broad impedance bandwidth at both target frequencies. One of the patches is then rotated by 90° at an optimized distance, resulting in a significant MC suppression while maintaining the dual and broad impedance bandwidth. The proposed MIMO antenna is further evaluated under several bending configurations to assess its robustness. A satisfactory agreement between simulated and measured results is observed in both planar and bending conditions. Results show that the MIMO antenna achieves an impedance bandwidth of 4.3 % and 6.79 % in the 2.45 GHz and 3.5 GHz band, respectively. Moreover, very low MC (<inline-formula> <tex-math notation="LaTeX">$S_{21} < -30$ </tex-math></inline-formula> dB) is achieved, with a low (< 0.002) envelop correlation coefficient, and about 10 dB of diversity gain at both desired frequencies using this technique. Even when bent at an angle of 50° at the <inline-formula> <tex-math notation="LaTeX">$x$ </tex-math></inline-formula>- and <inline-formula> <tex-math notation="LaTeX">$y$ </tex-math></inline-formula>-axes, the antenna bent maintained a realized gain of 1.878 dBi and 4.027 dBi in the lower and upper band, respectively. A robust performance is offered by the antenna against the lossy effects of the human body with good agreements between simulated and measured results.
I. INTRODUCTION
Antennas are increasingly being developed in a more compact format and on flexible materials. This is especially The associate editor coordinating the review of this manuscript and approving it for publication was Chan Hwang See.
attractive in wearable applications, as these antennas are used in many applications such as health monitoring, tracking and etc. [1]. These applications also demands that these antennas can operate across a wide range of frequencies, within the framework of the Internet of Things (IoT). Besides that, the combined IoT connectivity to high speed fifth generation (5G) systems will be advantageous in demanding systems such as healthcare using a single set of antenna. Moreover, flexible antennas can be worn while the users move freely, without affecting their morphologies requires these antennas to be wide in bandwidth. Moreover, realtime information transmission with minimal losses needed in these systems when applied for managing critical illnesses and procedures also adds to the challenge in flexible antenna design.
Considerable published work has reported on the effects of mutual coupling in dual band antenna arrays. However, the alleviating the effects of coupling (S 21 ) for dual-band MIMO textile antennas, specifically operating at 2.45 and 3.5 GHz is hardly discussed. This research proposes a unique hybrid technique to suppress the mutual coupling of the 2×1 MIMO antenna. This is aimed at achieving an S 21 of less than -30 dB with the closest possible placement of elements (an edge-toedge spacing of 0.1λ). This antenna is benchmarked against the performance of the same antenna with and without applying the proposed hybrid technique. Finally, the robustness of the proposed MIMO antenna against bending deformation is evaluated thoroughly. This paper is organized as follows. Section II outlines the characterization of the antenna element, after which a study on the mutual coupling effects antenna is presented. The three main parameters are analyzed and presented in three parts: distance analysis, mutual coupling reduction and gain, and envelope correlation coefficient. The measurement results of the prototype are presented in Section IV prior to the conclusions in Section V.
II. ANTENNA CHARACTERIZATION
The proposed antenna is designed to operate in dual band mode, centered at 2.45 for the wireless body are network (WBAN) lower band, and at 3.5 GHz for 5G as the upper band. Felt textile is used as the substrate and is sandwiched between top radiator and a full ground plane. It has a relative permittivity (ε r ) of 1.44, a loss tangent (tan δ) of 0.044, and a thickness (H ) of 3 mm. The conductive elements are formed using ShieldIt Super electro-textile from LessEMF Inc., which is 0.17 mm thick and features an estimated conductivity (σ ) of 1.18 × 10 5 Sm −1 . As an initial step, a rectangular-shaped patch radiator is designed to operate in the lower band, centered at 2.45 GHz, as shown in Figure 1.
Then, its design is modified with a slotted ring based on [26] to produce another resonant frequency at 3.5 GHz. The SR shaped slot in the middle of the rectangular-shaped patch antenna enables the bandwidth broadening of the upper 3.5 GHz band. The feeding points are placed on the bottom left edge for first element, labelled as Port 1, and bottom right edge for second element, labelled as Port 2, as illustrated in Figure 1. A detailed design procedure is presented in [27] which can be summarized in four steps, as follows: • First, the dimensions of the patch without SR-shaped slot are calculated based on the upper band resonance.
• Second, the probe feed structure is optimized to obtain a suitable matching in the upper band.
• Third, the SR shaped and the bar slot are added to broaden the bandwidth, and to provide operation in the respective bands.
• Fourth, the dimensions of the SR-shaped slot are tuned to provide operation in the respective lower band as shown in Figure 1. All simulations and optimizations are performed using CST Microwave Studio software.
III. MUTUAL COUPLING
In this section, the distance between and orientation of the patch elements are studied to optimize the proposed antenna in terms of reflection coefficient and mutual coupling. Details of this study are discussed in the next subsections.
A. DISTANCE ANALYSIS
The distance between the patch elements affects the antenna performance in terms of reflection coefficient (S 11 ) and mutual coupling (S 21 ). The distance between antennas is varied from 0.5λ to 0.1λ, as illustrated in Figure 2. The results shown in Figure 3 indicated that the S 11 and bandwidth of the antenna is preserved in both bands with the variation of antenna gap. However, as shown in Figure 3(b), the S 21 values increased with decreasing distance, indicating higher coupling between the patches. The coupling increases from
B. MUTUAL COUPLING REDUCTION TECHNIQUE
A hybrid technique involving a line patch and rotation of antenna elements is then applied in this subsection. In the first step, a line patch of WL x LL is introduced at the optimized guided distance (D) of 0.1λ, as shown in the Figure 3. Next, the width (WL) and the length (LL) of the line patch are studied. They are varied as follows: WL = 0.73, 2, 4, 6, 8, and 10 mm, and LL = 40, 50, and 60 mm. Aimed at obtaining an optimized length and width of the line patch, the results are shown in Figure 4. While the size of the line patch did not affect the S 11 , the S 21 of the antenna varied with the variation of WL and LL, particularly at higher frequencies. From the analysis, the final optimized WL and LL value is 0.73 mm and 60 mm, respectively. The next step in the proposed hybrid technique is by rotating the antenna elements to arrive at the final MIMO antenna design. The different rotations are illustrated in Table 1 and the optimized MIMO antenna is produced by patch improved S 21 significantly, up to 60 % and 33 % at 2.45 GHz and 3.5 GHz, respectively.
On the other hand, Figure 6 illustrates the surface current distribution when one of the ports is excited. As seen in this figure, a single technique of either only adding the line patch in between the antenna elements or rotating the patch element reduced the current interaction with the other patch element. However, in both cases, part of the current still overflows to the adjacent patch. Combination of both techniques significantly reduced the coupling between the antenna elements.
C. BENDING
A comprehensive analysis on the effects of the bending on the proposed MIMO antenna is presented in this section. Simulations of the bending evaluation curvatures are performed at different angles (α) of 10 • , 20 • , 30 • , 40 • , and 50 • , which translates to 24.38, 30.48, 40.6, 69.8, and 121.9 mm radii, respectively, based on [28]. These bending values are selected to emulate the curvature of proposed MIMO antenna when wrapped around the arm in a regular body. Bending is investigated at two conditions, when bent at x-and y-axes for five different bending angles, as illustrated in Figure 7. The extreme condition is identified when the antenna is bent at y-axis with smallest angle/shortest radius, α = 10 • @ 24.28 mm. Measurements are then performed to observe The results obtained from the bent antennas are compared with simulations in flat condition, as illustrated in Figure 8. Decreasing the bending degree from 50 • to 10 • lowers the resonance in both bands, with a more significant change in the upper band. In contrast, different mutual coupling behavior can be observed when bent at the x-and y-axes. When bent at the x-axis at 2.45 GHz, lower S 21 is seen with increasing bending degrees. This behavior is contrary at 3.5 GHz. On the other hand, when varying the bending degree at y-axis, the S 21 fluctuates in the lower band, but is almost consistent in the upper band. As expected, bending at an angle of 50 • resulted in high mutual coupling at both frequency bands. Hence, it can be concluded that bending of the antenna at different degrees affected particularly the performance at the higher frequencies.
D. GAIN, RADIATION EFFICIENCY AND CORRELATION ANALYSIS
The proposed MIMO antenna is evaluated in terms of envelope correlation coefficient (ECC), diversity gain (DG), channel loss capacity (CCL) and total active reflection coefficient (TARC). The correlation between antenna elements is described by the ECC (ρ e ) and the diversity gain. They are used to evaluate the correlation levels of the channels [29], and is calculated using equation (1), as follows: A low ECC value indicates minimal correlation between antenna elements. Similarly, diversity gain (DG) is dependent on the spatial correlation coefficient between the patch elements. A low ECC (< 0.5) leads to high diversity gain, and both are related by equation (2), as follows: The simulated and measured ECC within the frequency of interest is presented in Figure 9. ECC at all resonant frequencies are below 0.05 in both flat and bent conditions, and satisfies the minimum (< 0.5) diversity criteria [21]. A low ECC leads to high diversity gain, which is demonstrated by the plot in Figure 10. For an ECC value of less than 0.1, the diversity gain is almost 10 dB.
On the other hand, CCL is the estimated maximum message transmission which can take place without any loss in the communication channel. The acceptable rate should be less than 0.4 bits/s/Hz. Calculated using equations (3) to (5), the CCL result is presented in Figure 11.
It shows that the proposed MIMO exhibits acceptable CCL for all bending conditions with varying operating frequency. and Another evaluated parameter for this antenna is TARC, defined as the ratio of reflected and incident power for a MIMO antenna system. For a two-port MIMO antenna, TARC is calculated using equation (6) and must be below −0 dB. For the proposed MIMO antenna at both operating VOLUME 9, 2021 frequencies, the TARC are below −5 dB as seen in Figure 12.
Radiation pattern of the proposed MIMO antenna is shown in Figure 13. It is observed that the main lobe of the radiation pattern is maintained while there are increasing in back lobe when the bending degree is decrease from 50 • to 10 • in both the lower and upper operating bands of the antenna. On the other hand, significant variation in the radiation patterns of the antenna is seen when bent at the y-axis. As the bending degree is reduced, the main lobe direction tilted to the left, with slightly higher back lobes pattern.
F. SPECIFIC ABSORPTION RATE (SAR) ANALYSIS
The SAR values for the proposed antenna are calculated using CST MWS by mounting the proposed antennas in proximity of a truncated Hugo human body model (on the upper arm). The proposed antenna is placed 1 mm away from these models, as seen in Figure 14. The SAR distributions averaged over 10 g of tissue are then calculated at 2.4 GHz and 3.5 GHz for this antenna with an input power 1 W when placed on the left upper arm. SAR levels for this antenna in planar condition presented in Figure 10 indicate that the maximum 10 g SARs are observed to be 0.0283 W/kg and 0.0162 W/kg at 2.45 GHz and 3.5 GHz, respectively. These simulated SAR results are verified against the measured SAR of the antennas in [30], which used the same textile materials and full ground plane as the proposed MIMO antenna. The maximum 10 g measured SARs in [30] are 0.1 W/kg and 0.5 W/kg at 2.45 GHz and 5.2 GHz, respectively. A satisfactory agreement between the simulated and measured SAR is observed. Due to the use of the full ground plane, the SAR values for antenna in this antenna did not exceed 0.1 W/kg in both bands.
IV. EXPERIMENTAL EVALUATION RESULTS
The proposed MIMO antenna is then fabricated and experimentally assessed in the planar condition and when bent at both axes, as shown in Figure 15 (with 20 • of bending radius). The measurements are performed using Keysight Technologies E5071C E-series Vector Network Analyzer (VNA). A 50-coaxial cable has been used to connect SMA to the VNA for measurements. Their S 11 and S 21 results are presented in Figure 16, with the solid lines representing the simulated performance, whereas measurement are represented by the dashed lines. The simulated S 11 for the proposed antenna in planar form are observed to be consistent with measurements in free space, as illustrated in Figure 16(a), except for a slight upwards shift in the lower band. Satisfactory agreements are also seen between measured S 11 for all bending configurations at y-axis, including their bandwidths. However, when bent at the most extreme condition (α = 10 • at y-axis), the proposed MIMO antenna showed a downwards shift in the lower band. In planar condition, its measured S 21 is about −35 dB in both bands, with improvements of 6 dB and 10 dB at 2.45 GHz and 3.5 GHz, respectively. On the other hand, t is observed that the measured S 21 is less than −30 dB when the antenna is bent at the y-axis for all bending conditions in the lower and upper bands. This indicates that the MC is reduced significantly even in the extreme bending condition. This validates the design's robustness against any y-axis bending and maintained its dual band characteristic. Table 2 summarizes the performance of the proposed MIMO antenna in terms of S 11 , S 21 , impedance bandwidth, realized gain, radiation efficiency, and directivity when operating in flat condition in free space at 2.45 GHz and 3.5 GHz. As evident from these results, satisfactory performance for all parameters are observed in free space. A small difference exists between simulated and measured results due to the potential fabrication inaccuracies, the inhomogeneous thickness of the textile layers and inhomogeneous dielectric properties. The simulated and measured 2-D radiation patterns for the proposed antenna at 2.45 GHz and 3.5 GHz presented in Figure 17 indicate directional patterns with small back lobes. A good agreement between the simulated and measured radiation patterns have been observed.
Besides simulations, the prototype is measured in proximity of the human body on the chest and upper arm, as shown in Figure 18(c) and Figure 18(d), respectively. Comparison between simulated and measured S 11 and S 21 on the chest and upper arm are summarized in Figure 19. The impact of the human body on the antenna is minimal due to the shielding against coupling provided by the full ground plane. Measured impedance bandwidths of 6.95 % and 7.11 % are achieved in the lower and upper bands, respectively, when measured on the chest.
Meanwhile, when placed on the upper arm, the measured bandwidth is 7.78 % and 9.15 % in the lower and upper bands, respectively. Measured S 21 are consistently less than −30 dB when the antenna is mounted on the chest and upper arm in both lower and upper bands. A good agreement between the on-body simulated and measured S 11 and S 21 is seen, with small marginal shift observed due to nonidealities in the experimental environment. The low MC exhibited by the proposed antenna makes it suitable for off-body MIMO in WBAN and 5G applications.
In summary, Table 3 compares the performance of the proposed MIMO antenna with previous 1×2 MIMO antennas in terms frequency, flexibility, antenna size, technique, S 21 , and gap between elements. One of the most similar work in [8] presented a multiband wearable MIMO antenna with a comparable 0.1λ 0 inter-element gap with the proposed design. However, the metamaterial technique applied to the structure result in a more complex structure. It is also worth noting that the proposed work is the first work proposed on wearable MIMO antenna operating at 2.45 GHz and 3.5 GHz designed using a hybrid method to result in a relatively simple and compact structure. Besides the extensive validation on antenna deformation, the proposed hybrid technique also resulted in less than 30 dB of S 21 and a very small inter-element gap (0.1λ 0 ). Such method can potentially be applied to design MIMO antennas in space-constrained mobile devices.
V. CONCLUSION
This study proposes a hybrid method of mutual coupling reduction applied in designing a textile MIMO antenna for on body applications. This antenna is designed by combining two octagonal structures each integrated with a SR and bar slot. Mutual coupling of the MIMO antenna is significantly reduced by rotating the patch element and adding a line patch between the antenna elements. Most importantly, the resulting optimized structure is simple and can be implemented as a textile antenna. Due to this, the agreement between simulations and measurements is satisfactory. Moreover, evaluation of this antenna under different degrees of bending and bending axes indicated robust performance, with minimal changes in terms of reflection coefficient, mutual coupling, and radiation characteristics. Further assessments of this antenna in terms of MIMO parameters such as ECC, DG, CCL and TARC also validated that this antenna can be potentially applied in the next generation of 5G wearable devices. Dr . Soh is a member of the IET and URSI. He volunteers in the IEEE MTT-S Education Committee. He is also a Chartered Engineer Registered with the U.K. Engineering Council. He was a recipient of the IEEE AP-S Doctoral Research Award, in 2012, the IEEE MTT-S Graduate Fellowship for Medical Applications, in 2013, and the URSI Young Scientist Award, in 2015. He was also the Second Place Winner of the IEEE Presidents' Change the World Competition, in 2013. Three of his (co)authored journals were awarded the IEEE AP/MTT/EMC Malaysia Joint Chapter's Publication Award, in 2018, 2019, and 2020, and another two journals were also awarded the CST University Publication Award, in 2011 and 2012. | 4,631.2 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Automated Kernel Independent Component Analysis Based Two Variable Weighted Multi-view Clustering for Complete and Incomplete Dataset
In recent years, data are collected to a greater extent from several sources or represented by multiple views, in which different views express different point of views of the data. Even though each view might be individually exploited for discovering patterns by clustering, the clustering performance could be further perfect by exploring the valuable information among multiple views. On the other hand, several applications offer only a partial mapping among the two levels of variables such as the view weights and the variables weights views, developing a complication for current approaches, since incomplete view of the data are not supported by these approaches. In order to overcome this complication, proposed a Kernel-based Independent Component Analysis (KICA) based on steepest descent subspace two variables weighted clustering in this study and it is named as KICASDSTWC that can execute with an incomplete mapping. Independent Component Analysis (ICA) which exploit distinguish operations depending on canonical correlations in a reproducing kernel Hilbert space. Centroid values of the subspace clustering approaches are optimized depending on steepest descent algorithm and Artificial Fish Swarm Optimization (AFSO) algorithm for the purpose of weight calculation to recognize the compactness of the view and a variable weight. This framework permits the integration of complete and incomplete views of data. Experimental observations on three real-life data sets and the outcome have revealed that the proposed KICASDSTWC considerably outperforms all the competing approaches in terms of Precision, Recall, F Measure, Average Cluster Entropy (ACE) and Accuracy for both complete and incomplete view of the data with respect to the true clusters in the data.
INTRODUCTION
In several real world data mining complications, the identical instance possibly will exist in several datasets with dissimilar representations.Various datasets might highlight different features of instances.An example is clustering the users in a user-oriented recommendation system.For this process, related datasets can be: Learning with this kind of data is generally referred as multi-view learning (Bickel and Scheffer, 2004).Even though there are some earlier researches on multiple datasets, the entire presume the completeness of the different datasets.Multi-view learning is particularly appropriate for applications that concurrently gather data from several modalities, with each unique modality presenting one or more views of the data.
In the past decade, multi-view data has raised interests in the so-called multi-view clustering (Tzortzis and Likas, 2010;Long et al., 2008;Greene and Cunningham, 2009).Different from the traditional clustering methods which take multiple views as a flat set of variables and ignore the differences among different views, multi-view clustering exploits the information from multiple views and take the differences among different views into consideration in order to produce a more accurate and robust partitioning of the data.
Variable weighting clustering has been main research subject in the field of cluster analysis (Deng et al., 2010;Cheng et al., 2008).It automatically works out a weight for each variable and recognizes significant variables and irrelevant variables through variable weights.The multi-view data may perhaps be regarded as have two levels of variables.In case of clustering the multi-view data, the divergence of views and the significance of individual variables in each view should be considered.The conventional variable weighting clustering techniques only calculate weights for individual variables and pay no attention to the differences in views in the multi-view data.As a result, they are not appropriate for multi-view data.On the other hand, in the real world applications, there are several circumstances in which complete datasets are not available.
Existing multi-view algorithms characteristically presume that there is a complete bipartite mapping among instances in the different views to characterize these correspondences, symbolizing that each object is represented in all views.Subsequently, the mapping among instances in the different views is not complete.Even in certain cases where the connections among views are recorded, sensor availability and scheduling possibly will result in several isolated instances in the various views.Even though it is practical to recognize a partial mapping between the views, the lack of an absolute bipartite mapping presents a complication to most existing multi-view learning approaches.Without a complete mapping, these approaches will be incapable to transmit any information concerning an isolated instance to the other views.
The most important motivation of the proposed approach is to resolve the setback of weight value computation and centroid selection in multi-view data with incomplete data point of view, because the entire existing multi-view clustering data suitable only for clustering complete multi-view data.In the proposed KICASDSTWC approach for clustering both complete and incomplete view data in multi-view data.Incomplete view of data is carried out by proposing Kernel-based Independent Component Analysis (KICA), that differentiate the complete and incomplete view of multi view data, in addition differentiate the impacts of different views and different variables in clustering, the weights of views and individual variables are automatically computed depending on the AFSA.As a result, the view weights replicate the significance of the views in the complete data, at the same time the variable weights in a view only replicate the significance of variables in the view.In Steepest Descent Algorithm is proposed to select or optimize the fuzzy centroid values, Singular Value Decomposition (SVD) to lessen the complexity of clustering.Augmented Lagrangian Cauchy Step computation (ALCS) to score the objects in subspaces where they are homogeneous and have elevated correlated utilities.The proposed KICASDSTWC is extended to support both incomplete and complete view data; it becomes efficient in clustering large high dimensional multiview data.
LITERATURE REVIEW
In recent times, numerous multi-view clustering algorithms have been developed (Chaudhuri et al., 2009).These multi-view clustering approaches have been shown to provide enhanced performance in comparison to single-view approaches.On the other hand, the drawbacks of certain approaches are clear.For example, few approaches presume that the dimensions of the characteristics in multiple views are similar, restricting their applicability to the homogeneous circumstances.Few other approaches simply focus on the clustering of two-view data in order that it might be tough to broaden them to more than a two-view circumstance.Also, a suitable weighting approach is missing for these multiple views, even though coordinating different information is also one critical step in acquiring better clustering outcomes (Tang et al., 2009).An integrated framework that can incorporate several categories of multi-view data is lacking (Tang et al., 2010).
Conventionally, tensor-based approaches have been exploited to model multi-view data (Kolda and Bader, 2009).Tensors are higher-order generalizations of matrices and certain tensor approaches are incredibly great to analyze the latent pattern unknown in the multiview data.Tensor decompositions (Kolda and Bader, 2009) obtain multi-linear structures in higher-order data-sets, in which the data have over two modes.Tensor decompositions and multi-way investigation permit naturally obtaining hidden components and examining complex association among them.Sun et al. (2006) commenced a Dynamic Tensor Analysis (DTA) approach and its variants and implement them to anomaly detection and multi-way latent semantic indexing.It seems their clustering approach is intended for dynamic stream data.Dunlavy et al. (2006) execute Parallel Factor Analysis (PARAFAC) decomposition for examining scientific publication data with multiple linkages.The last two concepts that incorporate multiview data as a tensor resemble to this approach.However this approach is based on Tucker-type tensor decomposition.Chaudhuri et al. (2009) developed a clustering approach which carries out clustering on lower dimensional subspace of the multiple views of the data, planned by means of canonical correlation investigation.Two approaches for mixtures of Gaussians and mixtures of log concave distributions were provided.Long et al. (2008) developed an allpurpose scheme for multi-view clustering in a distributed framework.This scheme commences the idea of mapping function to enable the several patterns from several pattern spaces comparable and therefore a best possible pattern can be learned from the multiple patterns of multiple views.Greene and Cunningham (2009) developed a clustering approach for multi-view data with the help of a late integration strategy.In this approach, a matrix that includes the partitioning of each individual view is generated and then segmented to two matrices with the help of matrix factorization approach: the one represents the contribution of those partitioning to the concluding multi-view clusters, called metaclusters and the rest represent instances to the metaclusters Cohn et al. (2009) provided an interactive scheme in which a user continuously offers feedback to enhance the quality of a proposed clustering.In both of these situations, the user feedback is integrated in the form of constraints.This interactive scheme is a constructive extension that possibly will allow user knowledge to be brought into a multi-view clustering approach.
On the other hand, with all of the above approaches for multi-view clustering of the complete view, these limitations may be between instances that do not have equivalences in the other views and weight value calculation depends on the view also not supported by these approaches, thus facilitate a challenge to multiview learning, in particular when the mapping is very limited.
METHODOLOGY
Based For the purpose of multi-view clustering with both complete and incomplete view of the data, a novel fast Kernel-based Independent Component Analysis and Steepest Descent Subspace Two variable Weighted Clustering (KICASDSTWC) with incomplete view methods has been proposed in this study.The proposed KICASDSTWC with incomplete view methods in which incomplete data are transformed into complete data by proposing the KICA in which the subspaces are created in accordance with a set of centroids for total dataset results from KICA based is calculated based Gradient descent method along with user's domain knowledge of utility function.The proposed method distinguishes the impacts of several views and several variables by introducing the weights of views and individual variables to the distance function.The view weights are calculated from the complete variables, at the same time the variable weights in a view are calculated from the subset of the data which comprises only the variables in the view.
As a result, the significance of the views in the complete data is reflected by the view weights while the significance of variables in the view is reflected by the variable weights in a view.The automatic calculations of the centroid value for the specific data through the proposed gradient descent method differentiate it from other existing clustering approaches.At the beginning, the input data results from KIC values are transformed into the fuzzy centroid values followed by which the fuzzy centroid values are optimized using the gradient descent method.With the assistance of the algorithm, the view and variable weights of the KICASDSTWC values in the objective function are optimized by employing Artificial Fish Swarm Optimization (AFSO).In order to carry out multi-view clustering for both complete and incomplete dataset, at first the incomplete dataset is transformed into complete dataset by proposing Kernel-based Independent Component Analysis (KICA).For ease of understanding, consider X and Y represent the two number complete and incomplete dataset respectively.Generalization to over two types of complete and incomplete dataset with one complete and remaining incomplete dataset can be performed in a similar way.Assume that complete multi-view data is indicated as I while incomplete multi-view dataset is indicated as I i.e., the variables values of the multi-view data are available for only a subset of the entire examples.To formalize and discover incomplete data, KICA is proposed to find the values of the variables.KICA learning approaches exploit the following concept: by means of a nonlinear mapping for both complete and incomplete view of the multi-view data samples: The data in the input space˲ # , ˲ $ , … , ˲ ∈ ℜ H is plotted to a potentially much elevated dimensional complete multi-view data with variable space ˢ deal, kernel value Φ{˲ # {, Φ{˲ $ {, … , Φ{˲ {. ˳ # , ˳ $ , … , ˳ ∈ ℜ H is plotted to a potentially much elevated dimensional incomplete multi-view data with variable space ˢ deal with kernel value Φ{˳ # {, Φ{˳ $ {, . . ., Φ{˳ {.In case, if the learning of incomplete and complete multi-view data can be expressed based on inner products with correct nonlinear mapping Φ.In eigen-decomposition of a positive function (the kernel) is exploited to describe the following inner product for the transformation space: where, ., .represents an inner product, the Φ E s indicates the eigen-functions of the kernel and λ E denotes the related eigen values.Kernel ICA presumes a Reproducing Kernel Hilbert Space (RKHS) of the random variable V with kernel H{˲ − ˳{ and feature map {˲{ = {⋅, ˲{.Subsequently, the F-correlation among incomplete and complete variable multi-view data is given as the maximal correlation among the two random variables ˦ # {˲ # ) and ˦ $ {˳ # ) in which ˦ # and ˦ $ range over random variable V in F correlation as below: Noticeably, in case if the random variables ˲ # and ˳ # are independent, at that time the F-correlation for mutliview data among complete and incomplete data happens to be zero.Furthermore, the reverse is also true provided that the set F is large enough.This indicates that = 0 implies ˲ # and ˳ # are independent.With the intention of obtaining a computationally tractable implementation of F-correlation, the reproducing property of RKHS is exploited to estimate the Fcorrelation: Consider ˟ # and ˟ $ represent the linear spaces spanned by the Φ-images of the data samples, then ˦ # and ˦ $ can be fragmented into two parts, i.e.: where, ˦ # , and ˦ $ , are orthogonal to ˟ # and ˟ $ correspondingly.With the help of the empirical complete and incomplete view of mutliview data to approximate the population value, the F correlation can be given as: where, H # and H $ represent the gram matrices linked with the complete and incomplete view of multi-view datasets {˲ # { and {˳ # { given as: The above kernel based system determines the resemblance value between the incomplete and complete view for multi view data.Once multi-view dataset incomplete and complete view of the data are discovered, subsequently carry out centroid value computation with the assistance of Steepest Descent Algorithm (SDA) for that purpose the complete and incomplete view results from KICA is given as HˢD = {z # , … .Z {, its dimensions are described by set of J objects ˭J represented by the set ˓ of ˭ variables and view weights VW.Consider the value of object ˭J on attribute I and in time weight values is indicated by muv K .Also consider IJˤ represent an object chosen as the centroid from SDA.In addition, ℎ {˭˯˰ { = ˟˭˯ is indicated as a homogeneous function to determine the homogeneity among object ˭J and centroid IJˤ, on attribute I in a multi-view weight value.The users are permitted to define the homogeneous function, however the homogeneous values must be normalized to (0, 1) in order that Smu KJK = 1 points out that the value muv H is "perfectly" homogeneous of the centroid muv = KJK , or else which indicated by value ˟˭˯ = 0.
The distribution centroid: The fuzzy centroid as developed (Kim et al., 2004) simulated the concept of distribution centroid for a improved representation of categorical variables.The cluster centers for the categorical variable part will be better represented by a fuzzy scenario.For ˖J˭ ˢ = {˰ # , ˰ $ , ˰ % , … ˰ {, the distribution centroid of a cluster ˭˰I is specified as and given as below: where, In the above equation: where, μ{z CD { = I ?; At this point, the value of 1 is assigned to u C J= , if the data object ˲ belongs to cluster ˭˰I otherwise which 0 is allocated, if the data object ˲ do not belong to cluster ˭˰I.Based on the above mentioned equations 10, 11, 12 and 13, it is obvious that the number of repetitions of each categorical value is been considered by the cluster computation of distribution centroid.As a result, the distribution characteristics of categorical variables are considered to indicate the center of a cluster.In the proposed approach, optimisation of fuzzy membership centroid values is done with the assistance of SDA.SDA becomes an iterative and computations depend on the computation of the objective function ˭˰˦I˦and ˭˰˦I˦ at each iteration are commonly concerned.The SDA approach of choosing the best centroid values by maintaining least amount of cluster multi-view datapoints for each cluster, subsequently recur the step until maximum number of points in the cluster is attains, or else go to step 3 and negative direction that is remaining points in the multi-view data are elected to choose optimized centroid value.The optimized centroid values are reduced through calculation of step size in step 4 of the algorithm then revise the chosen optimized fuzzy centroid value results in step 5 and move to step 2. The fundamental form of the algorithm for optimizing the centroid values is given below.
where, $ indicates a parameter which maintains the width of the Gaussian function centered at centroid IJˤ.At this point, the similarity function is not symmetric, i.e., ℎ {˰ { ≠ ℎ {˰ {, as the calculation depends on the distribution of objects centered at the former object.The evaluation of width of the Gaussian function is completed with the help of k-nearest neighbor's heuristic (Nocedal and Wright, 2006) and is given as: where, ˚˥˩˧ℎ represents the set of k-nearest neighbors of object ˭J on feature {I, ˰˱˱{ and ˫ = É˭JÉ with an supposition that is the neighborhood parameter described by users.In accordance with the distribution of the objects projected in the data space of attribute, the width of the Gaussian function is being implemented by the k-nearest neighbors heuristic, as a result showing that is more strong than keeping a constant value.Calculating and pruning the homogeneous tensor using SVD for optimized centroid , a homogeneous tensor ˟ ∈ [0, 1] É É×É É×É É is characterized containing the homogeneity values Hˡ˟ with respect to centroid IJˤ.
Algorithm 2: SVD pruning: Input ÉmoÉ × ÉaÉ × ÉvwwÉ homogenous tensor S Output: Pruned homogenous S 1. M = unfold{S{ 2. Add dummy row and column to M 3.While true do 4. N ← zero mean normalization {M{ 5. U V ← N //SVD decomposition on N 6. u ← principalcomponent{U{ 7. v ← principalcomponent{V { 8. Calculate threshold τ I τ J 9. Prune row i of M if Éu{i{É < τ I , 1 ≤ i ≤ r 10.Prune column i of M if Év{j{É < τ J , 1 ≤ j ≤ n 11.If there is no pruning then break 12. Remove dummy row and column from M 13. S = fold{M{ Initially, zero mean normalization is carried out on matrix H to get hold of the zero mean normalized matrix ˚ (Line 4), which will later be exploited to compute the covariance matrices.Zero mean normalization is carried out by computing the mean I˰˧ of the matrix H that ∀˪ ∈ {1, … I{ of each column: Subsequently, from each entry of H, its equivalent column mean ∀˪ ∈ {1, … I{ is been subtracted: N{i, j{ = M{i, j{ − avg D At some point in the performance of the clustering process for the above returned centroid values, the homogeneous tensor ˟ together with the utilities of the objects were exploited to compute the probability of each value ˭˯˰ of the data to be clustered with the centroid IJˤ.Subsequently, the covariance matrices of the homogeneous values in the object space and feature space called ˚˚ and ˚˚ respectively were calculated (˚ is the conjugate transpose of matrix ˚): where, ˡ represents a J × J orthonormal matrix (its columns are the eigenvectors of ˚˚ ), $ represents a J × I diagonal matrix with the eigen values on the diagonal and ˢ is a I × I orthonormal matrix (its columns are the eigenvectors of ˚˚).If the magnitude of the pruned objects in their related elements of their principal components is little (Line 9 and 10), a heuristic however parameter-free approach can be proposed to find out the threshold for pruning objects.For pruned rows (objects) and columns (features) of matrix H, the homogeneous values are fixed to "0".The process of computing SVD and pruning the matrix H is replicated until there is no more pruning.The clustering process for computing the probability value is carried out in which p KJK ∈ ℝ represent the probability of object ˭J to be clustered with centroid IJˤon attribute I.The view weight ˰, variable weight ˱ for multi-view data is computed with the help of Artificial Fish Swarm Algorithm (AFSA).Consider ˜ ∈ ℝ É É×É É×É É be the probability tensor, such that J is an element of it provided with the respective indices É˭JÉ × ÉIÉ × É˰˱˱É.The following objective function is then maximised to calculate the probabilities: To perform the clustering process, the objective functions are defined as: The Optimization of ˦{J{ under constraint ˧{J{ is a linear programming problem, as ˦{J{ and ˧{J{ is linear functions of the design variable ˜.Augmented Lagrangian multiplier technique is then exploited to maximize the objective function ˦{J{ for clustering multi-view data in the subspace clustering technique.As a result, the modified objective function is defined as: The optimization of ˘{J{ (Algorithm 3) depends on Augmented Lagrangian Cauchy Step computation (ALCS) methods, ˦{˜{ and ˧{˜{ would be employed by ALCS with the intention that the constrained optimization problem are been replaced with iterations of unconstrained optimization sub problems, Hence, the iterations continue until the solution converges.For algorithm 3, ALCS necessitates three parameters such as , ⊝ , ∈ to calculate the optimized probability value for clustering process.In the majority of situations, the results are insensitive to these parameters and therefore can be fixed to their default values.The closeness cluster results for multi-view data results is constantly indicated by parameter .As a result, δ provides the standard tradeoff between accuracy and efficiency, i.e., smaller δ indicates longer computation time however better result.Parameter ⊝ maintains the level of clustering to the constraint ˧{˜{.Parameter ∈ auxiliary nonnegative scalar quantities on ˘{˜{ when the constraint is breached.
From the outcomes of the optimized probability values for multi-view data both view and variable weights values are calculated using the Artificial Fish Swarm Optimization (AFSO) algorithm.Artificial Fish (AF) is a fictitious entity of true fish, which is exploited to carry on the analysis and explanation of the problem and can be recognized by exploiting an animal ecology conception.The each one of the variable and view weight values of multi-view data to carry out KICASDSTWC results the weight values by external perception by its vision.where, ˞IJˤ {{ generates random numbers between 0 and 1, Step represents the step length to carry out weight value calculation, J represents the number of multi-view data samples for clustering, δindicates the crowd factor of AFSO algorithm to optimize the Variable and View Weight values is found depending on the input parameters and to manage the distribution of the two types of weights ˢˣ and ˣ.It can be simply validated whether the objective function ( 20) can get minimized with regard to ˢˣ and ˣ if ≥ 0 and ξ ≥ 0. In addition, it is carried out as given below.
> 0, based on (25), ˰˱ is inversely proportional to ˗.The smaller ˗ and the larger ˰ shows that the equivalent variable is more significant.η > 0, based on ( 25), = 0 will generate a clustering result with only one significant variable in a view which possibly will not be desirable for high dimensional data.The attributes are presumed to be segmented into ˠ views {˙ { (# : > 0, based on (27), = 0 will generate a clustering result with only one significant view.It possibly will not be desirable for multi-view data: The functions multi-view clustering data samples that comprise the behaviors of the ˓˘: ˓˘_˜J˥˳, ˓˘_˟˱IJ˭, ˓˘_˘JˬˬJ˱, ˓˘_HJ˰˥.Every fish typically resides in the place with a best objective function (20).The fundamental behaviors of AF are defined (Jiang and Yuan, 2005;Wang et al., 2005) as given below for maximum.
AF_Prey: This is a fundamental biological behavior that tends to the each variable (˱{ and view weights (˰˱{ values is assigned to best variable (˱{ and view weights (˰˱{ food; commonly the fish give attention to perceives the best variable (˱{ and view weights (˰˱{ values in water to decide the movement by vision: If ZW C < ZW D in the higher clustering accuracy it goes to an additional mutliview data samples; if not, choose a state Iˣ arbitrarily yet again to weight calculation and find whether it satisfies the forward condition.If it is not possible to satisfy higher clustering accuracy after maximum number of iterations completed by fish, it travels a step randomly to choose another variable (˱{ and view weights (˰˱{ values.In case of the maximum number of iterations is small in ˓˘_˜J˥˳, the AF can work like a swim approach randomly, which makes it best variable (˱{ and view weights (˰˱{ values results: indicates that the companion center has additional clustering accuracy and is not very crowded, it goes forward a step to the companion center: If not, implements the preying behavior.The crowd factor restricts the length of the weight calculation searching space and more AF only cluster at the best possible area, which guarantees that AF move to optimum in a broad field.
AF_Follow:
In the moving progression of the weight calculation from one position to many positions then discover best clustering accuracy by comparing the neighbourhood partners will trail and accomplish best clustering accuracy results rapidly: AF_Move: Fish swim arbitrarily in the water; indeed, they are looking for best weight calculation results food in larger ranges: AF_Leap: When the objective functions {˰˱ { − {˰˱ { < ˥JJ carried out by fish.It selects fish, certain variable view weights (˰˱{ values arbitrarily in the complete fish swarm and fix parameters arbitrarily to the chosen AF, ˥JJ is a smaller constant (Wang et al., 2005) for view weights (˰˱{ values: The above mentioned process is also similar for variable weights.The step direction is restructured for next iteration of weight calculation: where, represents parameter to the maximum number of iterations and ˚ indicates the number of clustering data samples, ˮ represents current fishes and ˮ + 1 next fish (variable and view weights data samples) position.This process is continued until all, the multi-view clustering data samples is all concluded.
EXPERIMENTAL RESULTS AND DISCUSSION
In order to investigate the performance of the KICASDSTWC with incomplete view in classifying real-life data, selected three data sets from UCI Machine Learning Repository (Frank and Asuncion, 2010): They are: With these data, evaluated the performance of the proposed KICASDSTWC with incomplete view existing, against Quasi Newton's Subspace Two variable Weighted Clustering (QNSTWC), TW-kmeans with four individual variable weighting clustering algorithms, i.e., EWKM (Tzortzis and Likas, 2010) and a weighted multi-view clustering algorithm WCMM (Jing et al., 2007).
Characteristics of three real-life data sets:
The Multiple Features (MF) data set includes 2,000 patterns of handwritten numerals that were obtained from a collection of Dutch utility maps.These patterns were segmented into 10 classes ("0"-"9"), each comprise 200 patterns.Each pattern was described by 649 characteristics that were segmented again into the following six views: The Internet Advertisement (IA) data set includes a collection of 3,279 images from various web pages that are classifiedd either as advertisements or non advertisements (i.e., two classes).The instances are expressed in six sets of 1,558 characteristics, which are the geometry of the images (width, height and aspect ratio), the phrases in the url of the pages includes the images (base url), the phrases of the images url (image url), the phrases in the url of the pages the images are directing at (target url), the anchor text and the text of the images alt (alternative) html tags (alt text).The entire views have binary characteristics, apart from the geometry view whose characteristics are uninterrupted.
The Image Segmentation (IS) data set includes 2,310 objects drawn arbitrarily from a database of seven outdoor images.The data set includes 19 characteristics which can be naturally segmented into two views: In order to evaluate the clustering accuracy, this study uses Precision, Recall, F-measure, accuracy and average cluster entropy to evaluate the results.
Precision: Precision is computed as the fraction of accurate objects among those that the algorithm considers belonging to the relevant cluster.
Recall: Recall is the fraction of authentic objects that were recognized.F-measure: F-measure is the harmonic mean of precision and recall and accuracy is the proportion of accurately clustered objects.
The results of the different clustering approaches with the above mentioned parameter results are shown in the Table 1.The performance comparison results of the proposed KICASDSTWC shows higher Precision, Recall, F measure and average accuracy, since the weight and centroid values are automatically calculated rather than using fixed values.
Average Cluster Entropy (ACE): depends on the contamination of a cluster given the true classes in the data.If J represents the fraction of class ˪ in obtained cluster ˩, ˚ represents the size of cluster ˩ and ˚ indicates the total number of examples, subsequently the average cluster entropy is given as: where, H represents the number of clusters.
CONCLUSION
In this study, proposed novel robust KICASDSTWC methods for complete and incomplete view of the data.In order to carry out multi-view data for incomplete data, new approach to ICA depending on kernel methods is given in this study.At the same time, most current ICA algorithms are depending on employing a single nonlinear function, this approach is a more flexible one in which candidate nonlinear functions of incomplete and complete view data are selected adaptively from a reproducing kernel Hilbert space.The proposed clustering is dissimilar to other existing approaches, because incomplete and complete view data are primarily learned then fuzzy centroid values are optimized using SDA.Given multiple-view data, calculate weights for views and individual variables concurrently with the help of AFSO.With the aim of reducing the complexity in the subspace clustering approach Singular Value Decomposition is proposed with ALCS methods for probability distribution optimization.The proposed system have been evaluated using three datasets namely Multiple Features (MF), Internet Advertisement (IA) and Image Segmentation (IS) based on the precision, recall, f measure and accuracy to analysis the properties of two types of weights.It shows that proposed KICASDSTWC achieves better clustering results than the existing clustering methods.Future works must concentrate on the ability to use isolated instances that do not have a corresponding multi-view representation to enhance learning and facilitate multi-view learning to be employed for a wider variety of applications.
Fig. 1 :
Fig. 1: Flowchart representation of proposed methodology distance matrix ˖˩Jˮ × in which ˤ˩Jˮ I , I indicates distance from I to I 2. Make an initial guess ˴ at the minimum; keep ˫ = 0. Choose convergence parameter > 0, is calculated from distance matrix 3. Calculate the gradient steepest descent of the centroid objective function ˭˰˦I˦{˴{ at a point I to all other points and centroid is indicated as I { { = ∇mvfcf z {E{ 4. Compute centroid value as É˕É = ˕ ˕, when ÉIÉ < and ˯ˮ˩ˬ˩ˮ˳{˯ { > ˯ {˭˩J{ > 0.5 then terminate the iteration process I * = I { { is minimum number of cluster multi-view data cluster datapoints.Otherwise go to step 3 5.Consider the search direction at the current point ˴ { { as ˤ { { = −I { { 6. Compute a step size { { to reduce fuzzy centroid value {˴ { { + { { ˤ { { { 7. One dimensional search is exploited to determine { { 8. Revise the chosen fuzzy centroid values as ˴ { #{ = ˴ { { + { { ˤ { { 9. Keep ˫ = ˫ + 1 and move to step 2 Gaussian function which employed above is the homogeneous function as similarity among data object ˭J and centroid IJJ is been normalized on feature {a, w{ to [0; 1].The homogeneous function is specified as below: based on (27), ˱ is inversely proportional to ˖.The smaller ˖ , the larger ˱ , the more compact the corresponding view.
•
Shape view: Includes nine characteristics regarding the shape information of the seven images.• RGB view: Includes 10 characteristics regarding the RGB values of the seven images.The graphical representations of the clustering results for variable and view weights with different variables and the methods results are shown in Fig. 2. It shows the variation in variable weights for varying = 8 values and view weights ˱ = 1, ˱ = 32 for TW-K means, QNSTWC and proposed KICASDSTWC with Incomplete View (ICV) ˱ = 1, ˱ = 32 results are shown in Fig. 2. It shows that proposed KICASDSTWC with Incomplete View (ICV) results shows have higher clustering accuracy with less view weights values are automatically calculated using AFSA, the proposed system also supports incomplete view of multi-view dataset, Centroid values are optimized using SDA.The graphical representations of the clustering results for variable and view weights with different variables and the methods results are shown in Fig. 3 for Internet Advertisement (IA) dataset.It shows the variation in variable weights for varying = 8 values and view weights ˱ = 1, ˱ = 32 for TW-K means and
Fig. 2 :
Fig. 2: Comparison of the total variable weights and view weights for methods in Multiple Features (MF) data set Fig. 4 for Image Segmentation (IS) dataset.It shows the variation in variable weights for varying = 8 values and view weights ˱ = 1, ˱ = 32 for TW-K means, QNSTWC and KICASDSTWC with Incomplete View (ICV) results are shown in Fig. 4. It shows that proposed KICASDSTWC with Incomplete View (ICV) results shows have higher clustering accuracy with less view weights are taken for incomplete and complete view results.
The number of variable (˱{ and view weights (˰˱{ values of multi-view data is signified as Iˣ is the position on ˓˘, ˢ˩J˯Iˬ indicates the visual distance and Iˣ represents the visual position of the current multi-view data weight values at visual position is superior than the earlier weight values state, it goes forward to the next weight value calculation direction and arrives the Iˣ state; or else, current variable (˱{ and view weights (˰˱{ values maintains an inspecting travel around in the vision until it attains maximum clustering accuracy.Consider Iˣ represents the current state of the variable (˱{ and view weights (vw{ values and it is indicated as Iˣ = {˴˱ # , … , ˴˱ { and Iˣ = {˴˱ # , … , ˴˱ { subsequently process can be expressed as given below: ˴˱ = ˴˱ + ˢ˩J˯Iˬ.˞IJˤ{{, ˩ ∈ {0, J] (23) The fish will bring together in groups of variable(˱{ and view weights (˰˱{ valuesthat are obviously assign variable (˱{ and view weights (˰˱{ values to multi-view data point clustering in the moving procedure, which is a type of living habits to satisfy clustering accuracy and eradicates unnecessary variable (˱{ and view weights (˰˱{ values.Consider Iˣ represent the ˓˘ current state of variable (˱{ and view weights (˰˱{ values, Iˣ represent the center location of variable (˱{ and view weights (˰˱{ values and J represent the number of its companions in the current neighborhood {ˤ < ˢ˩J˯Iˬ{, J represents total number of variable (˱{ and view weights (˰˱{ values data samples.When Iˣ > Iˣ and > , which 30)AF_Swarm:
Table 1 :
Summary of clustering results on three real-life data sets by five clustering algorithms | 8,226.4 | 2015-04-25T00:00:00.000 | [
"Computer Science"
] |
Euphilomedes biacutidens (Ostracoda, Myodocopida, Philomedidae), a new species from China Sea
Ostracods are one of the major groups of marine benthos, inhabiting virtually all oceanic environments worldwide, and a total of 31 species have been recorded in genus Euphilomedes Kornicker, 1967. In the present study, we describe a new species Euphilomedes biacutidens collected from the Taiwan Strait and South China Sea. E. biacutidens sp. nov. differs from the related species of the genus Euphilomedes in having a unique combination of the characteristics of spines on carapace, the filaments on sensory seta, the arrangement of setae on tip of the first antenna, the numbers of setae on appendages, the claws on fifth limb, the teeth on the comb of the seventh limb and furcal claws. It is particularly obvious that it has a bifurcated and pointed ventral corner of the rostrum, two spines on the posterior margin of right valve, a row of teeth along the inner margin of article 3 of the endopod of the second antenna, and some long claws instead of setae on the fifth limb.
INTRODUCTION
Ostracoda is a class of the phylum Arthropoda (Martin & Davis, 2001). The ostracods are small bivalved aquatic crustaceans and can be benthos or plankton. Ostracods are one of the major groups of marine meiobenthos and also macrobenthos which inhabit virtually all oceanic environments worldwide with various feeding habits and high taxonomic diversity (Karanovic, 2010). Studies on ostracods from China began in the 1950's on fossil species (Chang, 1955). About thirty years later, we initiated investigations of the taxonomy and ecology of living marine ostracods in China (Chen, 1982;Chen, 1984). So far, 237 species of recent marine ostracods have been recorded from the China Sea (Chen, 2012;Chen et al., 2015a;Chen et al., 2015b;Xiang et al., 2017).
MATERIALS AND METHODS
Samples were obtained from two cruises of the South China Sea and Taiwan Strait in 1984-1985and 1994-1995 (Fig. 1). There are no specific permissions required for the sampling activities in the research areas.
All samples were collected using a sampling net with mouth diameter of 80 cm and a mesh aperture of 0.505 mm by vertical dragging from 200 m (or bottom) to surface water. Samples were fixed with 5% buffered formaldehyde for preservation.
Specimens were dissected under a zoom-stereomicroscope (Zeiss Discovery V2.0) and mounted in permanent slides with CMC-9AF mounting medium (Masters Company Inc., Wood Dale, IL, USA). Observations and photomicrographs were obtained with a transmitted-light binocular microscope combined with a differential interference contrast system and AxioVision Image-Pro software (Axio Imager Z2; Carl Zeiss Inc., Oberkochen, Germany). Line drawings were made from photomicrographs and observations of preserved specimens and dissected appendages in slides by Adobe Photoshop CS6 software (Adobe Inc., San Jose, CA, USA).
The type specimens were deposited in the Marine Biological Sample Museum of the Chinese Offshore Investigation and Assessment, the Third Institute of Oceanography, State Oceanic Administration, China (Xiamen, China), under the collection numbers TIO-OMPEu 326-TIO-OMPEu 329 for the new species.
Nomenclatural acts
The electronic version of this article in Portable Document Format (PDF) will represent a published work according to the International Commission on Zoological Nomenclature (ICZN), and hence the new names contained in the electronic version are effectively published under that Code from the electronic edition alone. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix http://zoobank.org/. The LSID for this publication is: urn:lsid:zoobank.org:pub:557FB253-93C6-473E-9E07-227D9D9C1A60. The online version of this work is archived and available from the following digital repositories: PeerJ, PubMed Central and CLOCKSS.
Diagnosis. Height about 60% of length. Carapace oval, external surface with tiny circular pits and small setae ( Figs Frontal organ: Frontal organ extremely long and thin with two articles, article 2 longer with a sharp tip (Fig. 2D).
First antenna: First antenna uniramous with eight articles (Fig. 2E). Articles 1 and 2 long. Article 2 with one disto-dorsal and one disto-ventral plumose setae. Article 3 short with two spinose setae on disto-dorsal margin. Article 4 with one long and one short plumose setae on disto-dorsal margin, three long setae on mid-ventral margin, and one grand sensory seta with about thirty-two very long soft ventral filaments on disto-ventral margin. Article 5 bare. Article 6 very short with one short and bent plumose seta on disto-dorsal margin. Article 7 and 8 fused, very small with seven setae on tip: a-seta very short, spinose b-and g-setae very grand and long, c-seta with one mid filament, d-seta long with bifurcated tip, e-and f-setae long and bare.
Second antenna: Second antenna biramous. Endopod with three articles (Figs. 2F-2G). Article 1 short with three short ventral dorsal setae and one disto-ventral seta; article 2 long and slightly bent with corpulent ventral part, and two grand setae on ventro-distal margin; article 3 thin and bent, approximately equilong to article 2, with one bent proximo-dorsal seta, two small distal setae, twelve small ventral acute teeth, rugged dorsal margin, uneven inner side, and ten small tines on tip. Exopod with nine articles (Fig. 2H). Articles 1-8 with a line of fine spines on medio-distal margin; articles 2-8 with one disto-vental long plumose swimming seta, respectively; articles 3-8 with one spine on disto-dorsal edge; article 1 very long, article 2-9 more and more shorter; article 9 very short with four long plumose setae on tip.
Mandible: Limb biramous (Fig. 2I). Coxale grand, endite with bifurcated tip and cluster of spines (Figs. 2J, 3H). Basale grand, dorsal margin slightly humped with one mid-dorsal seta and two distal plumose setae; ventral margin with a group of proximal short setae, seven plumose setae and one short medio-ventral seta. Exopod tiny with two equilong plumose setae on tip (Figs. 2L, 3G). Endopod with three articles. Article 1 with a group of five setae on disto-ventral margin (two long plumose and three short). Article 2 longer than one; dorsal margin with a group of two long proximo-dorsal setae (one bare and one plumose), a group of four mid-dorsal long setae; ventral margin with a group of two mid-ventral setae (one short and one long plumose), a group of three bare disto-ventral setae (inner one short, outer two equilong). Terminal article very small with two claws and four setae on tip (Figs. 2K, 3I): disto-dorsal claw biggest with numerous spines on distal half ventral margin, short seta, big claw with numerous spines on distal half ventral margin, longest seta, long seta and shortest disto-ventral seta.
Maxilla (Figs. 2M, 3J): Coxale with one plumose seta on disto-dorsal edge. Basale with two disto-ventral long plumose setae. Exopod small with three long distal plumose setae. Endopod with two articles. Article 1 long with one long and two short dorsal setae, and three disto-ventral setae. Article 2 very short, with one very small seta, two plumose setae, three claws and four plumose setae on tip. Maxilla with three endites. Endite I with seven plumose and one serrated setae. Endite II with two plumose and one serrated setae. Endite III with nine plumose and one serrated setae.
Fifth limb (Figs. 4A, 5A-5B): Coxale with three endites. Endite I with four plumose setae. Endite II with two plumose setae and one claw. Endite III with three plumose setae and five claws, inner claw very strong. Exopod with five articles. Article 1 with one plumose and one bare setae on mid-distal margin, main tooth comprising two slices of constituent teeth, medial teeth smooth, lateral teeth jagged. Article 2 with one long bare and one small plumose setae, and two long claws on posterior side. Article 3 with two plumose and one long bare setae on inner lobe and two short slender plumose setae on outer lobe. Articles 4 and five fused, with nine distal plumose setae. All claws of this limb with numerous disto-half ventral spines.
Seventh limb: Limb with about fifty-two articles (Figs. 4C, 5D). All articles very short. Article 40 with one disto-ventral seta with two bells. Article 41 with one disto-dorsal seta with two bells and one disto-ventral seta with three bells. Article 42 with one dorsal seta with three bells. Article 44 with one ventral seta with three bells. Article 45 with one disto-dorsal seta with three bells. Article 52 with two long setae with five and three bells respectively. Terminal article with one long and two short dorsal setae with five, four and three bells, respectively. Comb with six teeth, side opposite comb with two bare bent small pegs. Comb teeth from outside to inside from long to short sequence (Figs. 4D, 5E).
DISCUSSION
According to Chen's key of family Philomedidae Müller, 1906(Chen & Lin, 1995, the current specimens separated from the other philomedids with the following characteristics defining the genus Euphilomedes: (1) the carapace is elongate oval in lateral view with pits and setae, the posterior margin is evenly rounded; (2) the rostrum is broad anteriorly, and the incisure is shallow (compared with other philomedids); (3) article 4 of the first antenna has one to four ventral setae; (4) the endopodal article 2 of the second antenna has two ventral setae; (5) the anterior triangular protuberance of the main tooth of the fifth limb has denticulate margin, the inner lobe of article 3 has three setae, and the outer lobe has two setae; (6) the seventh limb has six to nineteen cleaning setae, the comb has less than fifteen teeth; (7) the furcal lamella is not fused with the main claws, the secondary claws are alternating with the main claws, the edge between furcal lamella and claws has long cilia. With this new species, the genus Euphilomedes contains 32 recent species thus far (Brandão et al., 2017).
Like E. africanus (Klie, 1940), E. bradyi Poulsen, 1962 andE. walfordi Poulsen, 1962, the new species has a row of teeth along the inner margin of article 3 of the endopod of the second antenna. However, E. biacutidens sp. nov. differs from these three closely related species (Table 1) of setae on the tip of the first antenna between these species (detailed differences are given in Table 1); (7) the endopodal article 3 of the second antenna has about twelve small ventral acute teeth and an uneven inner margin (Figs. 2F-2G); (8) the numbers of setae on the endopod of the mandible, endopod and endites of the maxilla, endopod and endites of the sixth limb have significant differences (detailed numbers are given in Table 1); (9) some setae on the fifth limb have developed into long claws (Figs. 4A, 5B); (10) the comb of the seventh limb has six teeth and the side opposite comb has two bare bend pegs (Figs. 4C-4D, 5D-5E); (11) the furcal lamella has twelve claws, the first claw has dorsal and ventral sawteeth (Figs. 4E-4F, 5F-5G). The obvious characteristics of E. biacutidens sp. nov. are the postero-dorsal and posteroventral spines on the right valve; E. sinister Kornicker, 1974 (including two subspecies: E. sinister sinister Kornicker, 1974 andE. sinister pentathrix Kornicker &Caraion, 1977) also shows posterior spines, which is known only in the adult female. However, both species can be easily distinguished from each other by the following remarkable differences ( Table 2): (1) they have different carapace ornamentation; (2) E. biacutidens sp. nov. has the postero-dorsal and postero-ventral spines on the right valve (Fig. 2C, Fig. 3F), but in E. sinister the spines are on the left valve; (3) there are about 32 very long filaments on the sensory seta of E. biacutidens sp. nov., and only five short filaments and three long bifurcated filaments on the sensory seta of E. sinister, and there are significant differences of the setae on the tip of the first antenna between these species (detailed differences are given in Table 2); (4) E. biacutidens sp. nov. has two mandibular claws, E. sinister has three; (5) they have significant differences in the numbers of setae on the endites of the maxillae and the fifth limbs (except endite III of maxilla, with detailed numbers given in Table 2); (6) E. biacutidens sp. nov.has more cleaning setae on the seventh limb, but fewer teeth on the comb (Figs. 4C-4D, 5D-5E).
Additionally, E. biacutidens sp. nov. shows some long claws instead of setae on the fifth limb (Figs. 4A, 5B); this is a diagnostic characteristic of the species and is an unusual characteristic in the genus. The rostrum has a pointed bifurcated ventral corner (Figs. 2B, 3E), which is also a distinctive characteristic not previously observed in the genus.
Finally, the distance between the sampling localities of the holotype and paratypes indicates that the new species may be widely distributed southeast off China (Fig. 1). | 2,966.6 | 2017-06-22T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
On continuous variable quantum algorithms for oracle identification problems
We establish a framework for oracle identification problems in the continuous variable setting, where the stated problem necessarily is the same as in the discrete variable case, and continuous variables are manifested through a continuous representation in an infinite-dimensional Hilbert space. We apply this formalism to the Deutsch-Jozsa problem and show that, due to an uncertainty relation between the continuous representation and its Fourier-transform dual representation, the corresponding Deutsch-Jozsa algorithm is probabilistic hence forbids an exponential speed-up, contrary to a previous claim in the literature.
I. INTRODUCTION
Quantum information protocols have been demonstrated experimentally in both the discrete-variable (DV) and so-called continuous-variable (CV) settings. DV quantum information protocols employ qubits [1] and qudits [2], and CV quantum information protocols regard continuously parameterized canonical position states as the logical elements analogous to qubits for the DV case [3]. CV quantum information is experimentally appealing because sophisticated squeezed light experiments have led to claims of successful quantum information protocols such as teleportation [4], key distribution [5], and memory [6,7], but the theoretical status of CV quantum information is challenged by unresolved issues concerning quantum error correction [8], non-distillability [9], no-go theorems for quantum computation [10,11], and the absence of full security proofs for key distribution.
CV information processing has also been studied for classical models, including the now named Blum-Shub-Smale machine [12] and continuous Turing machines [13]. These models are of background relevance to the research into CV quantum information and are referenced here for contextual purposes.
In this paper, we establish a sound theoretical framework for studying quantum algorithms and apply this framework to study the CV analogue of the early DV quantum algorithm, known as the Deutsch-Jozsa (DJ) algorithm [14,15,16]. The problem solved by DJ algorithm is the following. The best classical algorithm requires 2 n−1 + 1 evaluations in the worst case. If error is tolerated, for any integer m ≥ 2, to achieve an error of at most 2 −m , any probabilistic algorithm requires a number of evaluations that is at least of order m [17]. If the function is accessible on a quantum computer as a quantum oracle, then the DJ algorithm is exact and requires just one evaluation to solve the problem.
Our focus here is on the CV analogue of the DJ algorithm, and we are inspired by the Braunstein and Pati formulation [18] of the CV DJ algorithm; however, our work differs from theirs in that ours relies only on logical states that are elements in the Hilbert Space and thus provides a strict CV version of the DJ problem. We introduce a particular model for the computation of the DJ problem in a CV setting. Within the constraints of this model, our analysis shows that the CV DJ algorithm is necessarily probabilistic and its performance must therefore be compared to the classical case where bounded error is tolerated and not to the classical deterministic case.
We choose the DJ algorithm for the following reasons. Two types of quantum algorithms dominate the field, those that implement a version of the hidden subgroup problem and those that use a version of Grover's search algorithm [1,19]. An early example of the former is the Deutsch-Jozsa (DJ) algorithm [14], which is amongst the oracle class of problems [20] that have been important in demonstrations of quantum speed-ups. Finally the CV DJ algorithm has a head start in the work of Braunstein and Pati so our analysis can build on their concepts [18].
Our paper is presented as follows. In Sec. II, we review the DJ algorithm. Although this algorithm is well known, our review serves as a foundation for careful construction of the CV version of this algorithm. Furthermore we compare the DJ algorithm's performance against both deterministic and probabilistic strategies, especially because the CV case can only be properly compared against probabilistic strategies because the CV DJ algorithm can never be deterministic. Our description of the DV DJ algorithm comprises three steps so that these steps can be discussed separately during the construction of the CV analogue.
Our approach emphasizes a recasting of the DV DJ algorithm in that we do not need the target qubit. This approach leads to an easier adaptation to the CV case. In Sec. II, we review the formalism of rigged Hilbert spaces (RHS) [21] since our CV algorithm, as well as any other CV quantum algorithms, must work in a RHS. This will have implications when we discuss the limitations of error inherent in our CV DJ algorithm in Sec. III and in Sec. IV.
In Sec. III, we adapt the DJ problem to the CV case and develop the CV DJ algorithm through the same three fundamental steps of the algorithm. We pay particular attention to the challenge of encoding a finite N -bit string into functions over the real numbers. Overcoming this challenge enables us to recognize that perfect encoding results in the inability to determine if the encoding is of a constant string or balanced in a single execution of the algorithm. We show that this probabilistic nature of the algorithm is the result of an uncertainty relation between the continuous representation and its Fourier-transform dual representation.
In Sec. IV, we determine an upper bound on the query complexity of the CV DJ algorithm. We note that because the CV DJ algorithm is shown to be probabilistic, its performance can only logically be compared to the classical probabilistic algorithm and not to the classical deterministic algorithm. We conclude that the formalism presented herein is applicable to a wide range of oracle identification problems in a CV setting.
II. BACKGROUND
We cast the DJ problem into the class of 'oracle identification problems' in Subsec. II A. We then review deterministic algorithms in Subsec. II B and probabilistic algorithms in Subsec. II C. In Subsec. II D, we analyze an alternative representation of the quantum DJ algorithm that uses n qubits instead of the traditional n + 1 qubits. In Subsec. II E, we present a primer on the rigged Hilbert space and close with a discussion of the concepts required to transition from discrete variables to continuous variables.
A. The Oracle Identification Problem
The DJ problem is an identification problem in which we are given a function from some candidate set S = {f 1 , f 2 , . . . , f M } of functions. The candidate set S = S 0 ∪S 1 is the disjoint union of two collections of functions, and our task is to determine which of the two collections the function f is drawn from. with the promise that either f ∈ S 0 or f ∈ S 1 , determine the index b such that f ∈ S b .
For N = 2 n , we impose lexicographic order on the Nbit strings of {0, 1} n . We can then specify any function f z by writing all its N function values in a list z ∈ {0, 1} N of length N . The i th bit z i in the list is 1 if f takes the value 1 on the i th bit-string of {0, 1} n . There are 2 N functions from n bits to one bit, and thus our candidate set has cardinality upper bounded by M ≤ 2 N . In the following, we often write f z to denote the function that corresponds to the N -bit string z.
We are interested in finding an efficient strategy to identify the property of whether f belongs to set S 0 or to set S 1 without necessarily determining f itself. In the DJ case, the property we are interested in is whether f is balanced or constant [14,15,16]. The cost of the algorithm is the number of queries made to the oracle.
With the promise of balanced or constant functions, there are far fewer than 2 N functions. The number of balanced and the number of constant functions is readily ascertained from the binomial theorem applied to power sets. The strings z of length N that correspond to the constant functions are the string consisting only of 1s and the string consisting only of 0s. There are thus just two constant functions. The strings z of length N that correspond to balanced functions are the strings in which exactly half of the bits are 0 and half are 1. There are thus precisely N N/2 balanced functions.
B. The Classical Deterministic Approach
On a classical Turing machine, Problem 1 can be solved deterministically. A deterministic algorithm corresponds to submitting queries in the form of n-bit inputs and obtain the one-bit output for each query. There are N unique input strings, but the promise of balanced versus constant functions implies that only N/2 + 1 are required to determine whether the given function is balanced or constant, with certainty.
The reason that fewer than N/2 + 1 queries is insufficient is that only N/2 queries may reveal all output bits being the same, suggesting a constant function, whereas the remaining N/2 outputs could all be the opposite of the first N/2 queries.
C. The Classical Probabilistic Approach
In Subsec. II B, we saw that fewer than N/2+1 queries is insufficient for a deterministic algorithm, but that case seems highly unlikely. More formally, fewer than N/2 + 1 queries will identify most of the balanced functions as non-constant in much fewer than N/2 + 1 queries. Here we ask the question about how many queries are required if we are prepared to tolerate a small number of errors. The upper line represents the n-qubit "control" state, and the lower line represents the 1-qubit "target" state.
In fact a probabilistic algorithm achieves an exponentially small error of 2 −m with a number of queries that is only linear in m [17]. To understand how a probabilistic algorithm can help, consider that, although a single query with a random input provides no information, two queries with two random inputs can be highly informative. If the output from the second query differs from the first output, then the function is proved not to be constant and therefore must be balanced. If, on the other hand, the second output is the same as the first, then the outcome is not certain, but the more times the outputs are the same, the more confident one can be about the function being constant.
We calculate the probability of successfully determining whether the given function f z is balanced or constant. A lower bound on the success probability Pr for m queries can be achieved by examining a samplingwithout-replacement strategy, which is expressed as Here the equality is calculated assuming sampling without replacement and shows dependency on N , whereas the inequality in Eq. (2.2) is based on sampling with replacement and is independent of N . The failure probability 1 − Pr declines exponentially in m, the number of queries. In Subsec. II D, we study the quantum DJ algorithm next where we show that the problem can be solved with a single query independent of N . Although this exponential speed-up is impressive when compared to the classical deterministic approach, it is much less so when compared to the classical probabilistic approach.
D. The Quantum DJ Algorithm
The quantum DJ algorithm has been shown to solve Problem 1 in a single query [14]. The quantum DJ algo- rithm is usually studied via its corresponding quantum circuit. We present a standard circuit version [16] in Fig. 1. The state represented by the lower line in Fig. 1 is referred to as the target qubit. In order for easier adaption of this circuit to the CV setting, we choose an alternative, and equivalent, circuit formulation -one without the target state. We take this approach to avoid some of the difficulties the target state introduces in [18]. The unitary operator associated with the oracle function changes slightly in this alternative circuit. We discuss these differences before proceeding with analysis of the circuit.
This simpler algorithm without the target qubit is given in Fig. 2. Oracle application is the critical part of the algorithm. The oracle construct originally proposed by DJ is expressed, for x ∈ {0, 1} n and y ∈ {0, 1}, as This construction yields a matrix representation for the U f as a permutation matrix, hence always unitary [1]. With respect to the ordered basis B = {|0 · · · 0 |0 , |0 · · · 0 |1 , . . . , |1 · · · 1 |0 , |1 · · · 1 |1 } , the unitary matrix U f can be expressed in the following insightful form with X the 2 × 2 NOT operator in this case. Here U f is a 2 (n+1) × 2 (n+1) matrix, which results from there being 2 n strings (the arguments of f ) and an additional target qubit.
The operator U f can also be expressed in the alternative ordered basis B ′ = {|0 · · · 0 |− , |0 · · · 1 |− , . . . , |1 · · · 0 |+ , |1 · · · 1 |+ } , with 1 1 the 2 n × 2 n identity operator. Furthermore the operatorÛ f is expressed as the 2 n × 2 n matrix (2.5) and thus provides a reduced representation for U f . It is apparent that the operatorÛ f acts on a 2 n × 2 n subspace of U f since We make the assumption that if we have the oracle U f , we we also have the oracleÛ f . We thus conclude that the construction employing both control and target qubits is not strictly necessary. That is, one could construct this algorithm employing the n-qubit control state only. Apparently the choice of representation simply depends on the nature of the actual physical implementation.
We now present a step-by-step analysis of the alternative circuit presented in Fig. 2. We shall analyze the CV circuit in the same steps for cross reference and comparison.
State preparation
We use the hat notation |Ψ in order to emphasize that this analysis is of the algorithm presented in Fig. 2, which employs n-qubit states and not of that presented in Fig. 1, which employs (n + 1)-qubit states . The n-qubit input state of the circuit in Fig. 2 is a string of qubits prepared in |0 · · · 0 . The next step in state preparation is to place the state |Ψ 0 into an equal superposition of all computational basis states for H the single qubit Hadamard operator.
Oracle application
Given the definition of the reduced operatorÛ f defined in Eq. 2.5, its effect on the equal superposition of basis states expressed in the state |Ψ 1 is to effectively encode the N -bit string z unitarily into the state |Ψ 2 .
which is a convenient representation. We shall show that this representation naturally extends to the CV setting.
Measurement
Measurement proceeds by first undoing the superposition created during the state preparation step. This is achieved through the application of the operatorÛ 3 = H ⊗n , which modifies the state after oracle application The resultant state is We rewrite Eq. (2.8) with the operator H ⊗n expressed in terms of a recursive definition as follows the combination of Eq. (2.9) and Eq. (2.10) allows us to see that all of the rows (and columns) of the operator H ⊗n have an equal number of positive and negative ones except for the first row, which consists entirely of plus ones. It is this feature that permits the constant and balanced functions to be distinguished in a single measurement. For the two constant cases, Eq. (2.9) may be expressed as as only the first row does not result in amplitude cancellation of the 2 n constant amplitude components of the state |Ψ 2 . Each of the balanced functions result in the amplitudes of the state |Ψ 2 having an equal number of positive and negative ones. This feature coupled with action of the operator H ⊗n results in the first component of the state |Ψ 3 having zero amplitude for all the balanced functions. We express this result as where we use the symbol x to represent that the nonzero value(s) will land on the other N − 1 components depending on which of the N N/2 balanced functions the oracle is set to. It is interesting to note that the number of rows in the state |Ψ 3B potentially having a non-zero value is N −1 whereas the number of balanced functions is exponential in N . This means that many of the balanced states can be expressed as real-valued mixtures of the computational basis states with the condition that the amplitude of the first component is always zero.
For the final measurement step, we employ the projection operator [1] defined for m ∈ {0, 1} N as follows We are only concerned with the first component as discussed above, so for the constant cases we have and for all balanced cases we have as required.
We have completed the study of the quantum DJ algorithm in a form that allows us to adapt readily to the CV setting. Our strategy will be to construct a CV algorithm analogous to that shown in Fig. 2 and whose operator representation is given by This approach is simpler, and we can worry about whether or not an implementation will require target states when a particular implementation is considered. Before delving into the CV algorithm, we present some background CV information.
E. CV Background
The transition from DV to CV quantum information requires an extension of Hilbert spaces to rigged Hilbert spaces [21], which allows the use of position states |x with x ∈ R but restricts dual states to so-called 'test functions'. An inner product between position states and test functions is meaningful, but the inner product between two position states leads to the Dirac relation x ′ |x = δ(x− x ′ ), which must be treated carefully. For n the size of Problem 1, the target-less quantum DJ algorithm requires n qubits, which requires a Hilbert space of size N = 2 n [22,23]. The Hilbert space for CV problems seems quite generous in this respect as it is infinitedimensional.
In fact the CV Hilbert space is congruent to the space of square-integrable complex functions over the real field L 2 (R) [24]. (2.17) The inner product of two functions f, f ′ is with positive definite norm and distance metric defined by respectively. Typically in CV quantum information discourse, the position states |x are introduced as a basis set of the Hilbert space with each |x an eigenstate of a position operatorx, with x ∈ R. Unfortunately the state |x does not exist in the Hilbert space; this problem is evident in the standard inner product (2.20) As δ is not a proper function, position states are not proper states. Fortunately the position states are correct as a representation; for example f (x) = x|f is the position representation of test function f within the context of the rigged Hilbert space. Also Eq. (2.20) is meaningful in the context of distribution theory. A rigged Hilbert space is a pair (H, Φ) such that H is a Hilbert space and Φ is a vector space that is included by a continuous mapping into H: Φ ⊆ H. Elements of Φ are referred to as 'test functions', and the dual to Φ is Φ * ⊇ H * , for H * dual to H and Φ * comprising generalized functions, or 'distributions'. The inner product f ′ |f is in [0, 1] for any f ′ ∈ Φ * and for any f ∈ Φ [21].
Note that the adaptation of the DV DJ algorithm to the CV regime needs to be done in the context of a computational problem. Here the relevant problem is still Problem 1, and the notion of the oracle remains unchanged. Thus, in the CV case, our task is still to determine whether the function f z belongs to the set of constant functions or to the set of balanced functions.
III. CV REPRESENTATION OF THE DJ PROBLEM
We begin by giving a strategic overview in order to convey the key concepts of our approach to developing a Illustration of the concept for encoding an N -bit string in a region of momentum extending from −P to +P using the N = 4, z = 0101 example. Note that each of the bits zj are uniquely represented.
CV computation model. We follow this with a subsection giving some preliminary definitions allowing us to set the stage for detailed analysis. We then proceed with a stepby-step analysis of our CV DJ algorithm.
A. Strategy Overview
Although we are now working with CV, instead of DV, quantum information, the computational problem to be solved remains Problem 1. In other words, we want to learn whether the function f z is constant or balanced with as few oracle queries as possible. Another way to think of this is that we wish to determine the index b ∈ {0, 1} such that f z ∈ S b . We now give a conceptual overview of our model for CV quantum computation of the DJ problem, which we follow later with a rigourous treatment.
In our model of CV quantum computation, we will use the continuous position and momentum variables of a particle. For x, p ∈ R, we use the particle's position wave function, φ(x), to describe where the particle is concentrated and the particle's momentum wave function,φ(p), to describe its momentum distribution. The position and momentum wave functions are Fourier transform pairs, and the relationship between the particle's position and its momentum is governed by Heisenberg's uncertainty principle.
There are many position and momentum wave function pairs on which we could base our computational model. We select our particular pair as follows. First, we wish to encode the unknown N -bit string, z, in the momentum domain. We do so because encoding in the momentum domain is the continuous analogue of the discrete case, where encoding is performed on an equal superposition of computational basis states. Second in order to fix one of the degrees of freedom of the problem, we want each of the bits comprising the string z to be unambiguously represented in the momentum space. By unambiguous The N -bit string z = 0 · · · 01 · · · 1 modulates this momentum "substrate". (d) The inverse Fourier transform of the encoded "square wave" produces a "generalized" sinc function whose infinite position extent necessitates an optimal measurement "window" parameterized by ±δ.
we mean that each of the bits are represented by equalsized, non-overlapping, contiguous regions in the momentum space.
Since we want each of the N -bits comprising the string to be represented unambiguously, we naturally think of each bit as being manifested by a finite-width square pulse whose position in momentum space represents the bit position in the string z and whose magnitude represents the bit value. Continuing along this line of reasoning to the representation of the entire string z, we can imagine we have a region of momentum extending from −P to +P . All the contiguous momentum pulses within this region thus have "width" δ p = 2P/N , and for j ∈ {0, N − 1}, the j th momentum pulse is centred at position −P + (j + 1/2)δ p and takes on value (−1) zj . We illustrate this concept in Fig. 3 for a particular N = 4 case.
The picture that thus emerges is that each of the 2 N possible strings may be represented by a uniquely shaped "square wave" having extent ±P comprising N pulses each of width 2P/N and having magnitude ±1. With the encoding concept clear, we conceptually illustrate the four key stages of the algorithm in Fig. 4. We begin with a position wave function centered at x = x 0 and illustrated in Fig. 4 (a). Note that this position wave function is a sinc function since sinc/pulse functions are Fourier transform pairs. In Fig. 4 (b), we present the momentum wave function, a pulse function, which acts as the "substrate" into which the N -bit strings are encoded. In Fig. 4 (c), the pulse function is encoded with the particular N -bit string z = 0 · · · 01 · · · 1. Finally, the inverse Fourier transform of this "square wave" is presented in Fig. 4 (d). Since the inverse Fourier transforms of finite pulses in the momentum domain have infinite extent in the position domain, we need to limit the extent of our measurement to ±δ.
In summary, we see that our algorithm will need the parameters N , P and δ. We note that as N gets large, the individual pulse width associated with a single bit becomes small appearing to pose a limit on the maximum value of N . We will return to this issue once we have determined the relationship between P and δ.
There are many potential models for quantum computation in a CV setting. We have chosen to study one where we unambiguously encode an N-bit string into the continuous momentum variable of a particle. Within the constraints of this model, we will show that the CV DJ is necessarily probabilistic and prove an upper bound on the query complexity of the CV DJ problem.
We speculate that we can't do better than this. For example if the momentum/position pair are described by Gaussian/Gaussian functions, as would be the case for the physically meaningful states of quantum optics, imperfect encoding of the N -bit string in the momentum domain will result in increased position error. Whether or not this will in turn impact the "big Oh" representation of the query complexity requires further research as does a general proof of a lower bound. The challenge will be to show that another strategy can do better than the model described herein.
B. Algorithm Preliminaries
We now proceed to formalize some of the concepts presented in the previous subsection. Here we describe a 'natural' way of encoding a finite-dimension, N -bit string in a continuous domain. We define the following function, along with its Fourier dual, to help us achieve this end.
For P > 0, the 'top hat' function ⊓(p; P, P 0 ) = p| ⊓ (P, P 0 ) will be especially useful in bridging the gap between DV and CV quantum information because so the state |⊓ is, in some sense, a momentum eigenstate |p = P 0 in the limit P → 0. The inverse Fourier transform of the function e ıx0 ⊓ (p; P, P 0 ) is where x 0 defines the position of the sinc function. The limit of φ(x) as P goes to ∞ yields δ(x−x 0 ). The position eigenstate |x = x 0 is likewise formed in the limit P → ∞. Now imagine we want to sum a contiguous string of "pulses" described by the top hat function (3.1) with all pulses having width δ p and the j th pulse having complex amplitude ψ j . This results in the composite function ψ(p) = j ψ j ⊓ (p; −P + jδ p , −P + (j + 1)δ p ) .
This function can also be used as a basis [24] of CV kets in Dirac notation as |ψ = ∞ −∞ dp p|ψ |p , thus allowing us to encode quantum information in the CV domain. Note that ψ(p) is the complex amplitude for real-valued p. This affords a consistent way of encoding a discrete wave function over a continuous domain.
Before proceeding with a formal analysis of the algorithm, we give an overview of our proof strategy. The oracle is either set to one of two constant strings or to one of N N/2 balanced strings. A string and its complement have indistinguishable probability distributions, so there are a total of one constant probability distribution plus 1 2 N N/2 balanced probability distributions representing the possible oracle settings. In order to simplify the analysis, we wish to replace this exponential number of balanced probability distributions with a single "worstcase" balanced probability distribution. Thus we seek a particular balanced string (and its complement) whose probability distribution is most likely to "fool" us into concluding it is a constant string.
Intuitively, the balanced strings that have the fewest number of changes between adjacent bits in the interval [−P, P ] will be the most "constant like" of the balanced strings. There are no balanced strings with zero changesthis is the key feature that separates the constant strings from the balanced strings. There is however, a single pair of balanced strings having only one change. These strings exhibit the feature that the first N/2 bits are constant and the second N/2 bits are the complement of the first. We call these strings the anti-symmetric balanced (ASB) strings. One of these two strings is illustrated in Fig. 4 (c). Note that all other balanced strings have more than one change.
Our proof strategy begins by making the assumption that the ASB case is the "worst case" of all balanced cases. We use this assumption to determine the optimum value of δ, which is the extent of our measurement in the position domain and is illustrated conceptually in Fig. 4 (d). Given this optimum value of δ, we then prove by induction that the worse balanced case is indeed the ASB case.
FIG. 5: Quantum circuit implementing the CV DJ algorithm without the use of the target state.
C. The CV Quantum DJ Algorithm
Our strategy is to create a CV analogue of the alternative formulation of the discrete DJ algorithm presented in Fig. 2. The CV extension of this is presented in Fig. 5. Our construction of a CV DJ algorithm employs some of Braunstein and Pati's techniques [18] and avoids the pitfalls. In particular, we employ position states as a logical representation (states in Φ * ) analogous to the discrete computational basis states. Encoding is not, however, into the position states but rather into test functions f z ∈ Φ with z ∈ {0, 1} N . Furthermore we employ a Fourier transform to operate as a CV version of the DV Hadamard transform (extending the Hadamard transformation to the CV case is not unique [2,25]).
For x the canonical position and p the canonical momentum, the Fourier transform maps a function φ(x) to its dualφ(p) according to [26] F : φ(x) →φ(p), (3.4) such thatφ Note that we make use of the momentum variable p as the Fourier dual of the position variable x. The function φ can be a test function in Φ and φ(x) is the inner product of φ with the position in Φ * : φ(x) = x|φ . The momentum state |p is the Fourier transform of |x and φ(p) = p|φ . With these concepts in order, we now proceed through the CV DJ algorithm analogous to the three steps in the DV DJ algorithm. The function notation φ(x) andφ(p) is more convenient here rather than the Dirac notation in the previous section.
State preparation
We have argued previously that we need the Fourier transform of the input state to be the top hat function defined in Eq. (3.1). We add several conditions that do not take away from the generality of the solution. First, we want the top hat to have zero phase, which gives x 0 = 0 and to be centred at P 0 = 0. Second, we want the pulse to have extent ±P . This gives us the simplest form of the sinc function for the initial state (3.5) We note that the limit of φ 0 (x) as P → ∞ gives a δ(x).
Thus we can think of the quantity P as playing the role of the standard deviation in a Gaussian distribution. The final step in state preparation is to perform the Fourier transform, which yields the top hat function with extent ±Pφ This function forms the raw substrate, which will be 'modulated' by the individual N -bit strings z.
Oracle application
We perform encoding by partitioning the real numbers representing momentum into non-overlapping, contiguous and equal-sized bins. In this digital-to-analogue strategy, the width of each p-bin is 2P/N , and The oracle encodes the index z into the function f z as follows: (3.8) where the factor (−1) zi serves to modulate the phase of the top hat function according to the bit value.
Example 1. Consider the case n = 2; hence N = 2 2 = 4. As one case, the function corresponding to the four-bit string 0011 is The only two four-bit strings yielding constant functions would be 0000, for which the function is identically unity over the whole domain [−P, P ], and 1111, for which the function is identically −1 over [−P, P ]. Four cases are presented in Fig. 6. We refer to the function f 0011 (p) as the "lowest-order" antisymmetric balanced wave as it has just one zero crossing in [−P, P ]. In the limit that N → ∞ with P fixed, ⊓ i (p) → δ(p − p i ) for p i the midpoint of the i th bin. The limit N → ∞ thus gives a prescription for approaching a continuous variable representation where the z index seems to approach a continuum; however this limit yields a countable, rather than uncountable, set {z}, and the finite domain [−P, P ] has important ramifications on the nature of the functions corresponding to Fourier transforms of ⊓ i (p). We express the state after encoding as where we observe the "modulating" effect of the encoded string f z on the momentum "substrate"φ(p).
In the context of the digital-to-analogue strategy, the constant functions are analogous to direct current (DC) signals and the balanced functions to alternating current (AC) signals. The number of zero-crossings corresponds to frequency information, and the question of whether the output is balanced or constant is essentially a problem of querying whether there is a non-zero frequency component of the output signal. As noted previously, the ASB function has the lowest frequency component. We now proceed to analyze the measurement stage.
Measurement
We have the strings z ∈ {0, 1} N encoded into the momentum state (3.10). The next step prior to the final measurement is to take the inverse Fourier transform of this pulse train. For z j the j th bit of z, this is expressed as The expression given in Eq. (3.11) can be simplified to yield where we have defined We see that the magnitude of an individual generalized sinc function, φ (N ) z (x), is determined by a vector sum of N phasors, which is modulated by a particular N -bit string z.
Note that the phasors, e ıϕj (x) , are equiangular divisions of the angular interval and they exhibit the pairwise complex conjugate property ϕ j (x) = −ϕ N +1−j (x). In Fig. 7, we present the phasors for N = 8 with x = π/2 and x = π/4 to illustrate these features. Note that the phasors are added constructively or de-constructively depending on the phase of the angles, which results from the term (−1) zj . This effect defines the magnitude of the resulting sinc function.
We note that the only functions with φ z (x) defined by Eq. (3.12) implies the strategy for measurement that will distinguish between the constant and balanced cases.
In order to refine this strategy, we focus on two cases. The first of these cases is for the two constant functions for which Eq. (3.12) gives the probability distribution where we have The second case deals with the two balanced functions having the lowest 'frequency' content, which occurs when the first N/2 bits and the last N/2 bits have opposite values.
We think of these two balanced functions as having the lowest frequency content since Eq. (3.8) has a single zero crossing in the interval [−P, P ] for these two balanced strings only. All other balanced strings have more than one zero crossing and thus higher frequency content. For this pair of balanced functions, which we call the antisymmetric balanced (ASB) functions, Eq. (3.12) gives the probability distribution where we have Note that of the N N/2 balanced functions, there are many that are also antisymmetric about the midpoint. However, we reserve the term ASB for these two lowest-order antisymmetric balanced functions.
We will use these cases to bound the success probability of distinguishing between the constant and all balanced cases. We first illustrate the concept of frequency in the following example.
Example 2. Again consider the case P = 1, n = 2; hence N = 2 2 = 4. As one case, the function corresponding to the four-bit string 0011 is This function corresponds to the N = 4 ASB function. The probability distributions for the four distinct N = 4 cases are presented in Fig. 8. We clearly see that of the three balanced cases, the N = 4 ASB function has probability peaks closest to x 0 = 0. Our measurement strategy is to measure the probability distribution in a small band around the position x 0 = 0 parameterized by ±δ. The CV analog of the projection operator given in Eq.(2.13)is defined as Due to the symmetry of the sinc functions about x 0 , we set a = −δ and b = +δ. We now need to determine the optimal value of δ that will maximize our ability to distinguish between the constant and balanced cases. We will determine the optimum value of δ by first assuming that the probability distribution P ASB (x) given by Eq. (3.15) dominates all other balanced probability distributions in the region [−δ, δ]. After using this assumption to determine a value for the optimal delta, we will state and prove a theorem justifying our assumption. As an illustration that our assumption is true for the N = 4 case, we plot the four distinct cases in Fig. 9. The ability to effectively distinguish between two random events is proportional to the separation of the individual probabilities of occurrence. Thus we need to select δ such that we get as much separation between the constant distribution and the ASB distribution as possible. Given this concept we can think that when we make a measurement we are distinguishing between two events, the probabilities for which we define as follows
20) and
Pr ASB (δ) = Pr φ (N ) FIG. 9: For P = 1, the optimal value of δ = π 2 . This graph shows that only the Constant and the Antisymmetric Balanced Functions significantly contribute to probability between ±δ.
We can determine the optimum value of δ by maximizing the expression |Pr Const (δ) − Pr ASB (δ)|. It suffices to find the value of δ for which d dδ |Pr Const (δ)−Pr ASB (δ)| = 0, which may be expressed as This occurs where cos(P δ) = cos(P δ) 2 for δ = 0, which gives a global maximum at δ = π 2P . It is interesting to think of this result as an uncertainty relationship (3.23) We shall return to this concept in our discussion in the conclusion. We have determined the optimum value for δ based on our assumption that for −δ ≤ x ≤ δ the balanced probability distribution P ASB (x) dominates all other balanced probability distributions. We now proceed to prove this assumption.
In order to proceed with the proof, we define a set Φ of m pairwise conjugate angles with 2m = N . Note that N is not restricted to being equal to 2 n for the purpose of this proof. Also for the purpose of this proof, we set P = 1 and incorporate x into the definition of ϕ j = N −(2j−1) N x for −π/2 ≤ x ≤ π/2. We let Φ = {ϕ 1 , ϕ 2 , . . . , ϕ m , ϕ m+1 , . . . , ϕ 2m } with j = 1, . . . , m and note the pairwise conjugate property ϕ j = −ϕ 2m+1−j . Now consider S = 2m j=0 g(j)e ıϕj where g : [2m] → ±1 subject to the balanced condition j g(j) = 0, then Theorem 1. Max |S| occurs under the specific balanced conditions and which we refer to as the asymmetric balanced functions (ASB).
Proof. Proof is done by induction on m. We begin with the base case m = 1, N = 2. This case is trivial since the only balanced cases are the two ASB cases represented by the strings {01, 10}. We proceed with the base case for m = 2, N = 4. This case is a little more involved. We begin by labelling the angles and phasors as shown in Fig. 10. There are 4 2 = 6 balanced cases, and we have to consider the strings {0011, 0101, 0110, 1100, 1010, 1001}. Since the latter three are complements of the first three, we have to consider only three vector sums.
With reference to Fig. 10, we have S {0011} = e ıϕ1 + e ıϕ2 −e −ıϕ1 −e −ıϕ2 . We simplify and express the resultant along with the three other cases as Clearly |S 1 | > |S 3 |. We use the trigonometric identities, 26) to establish the relationship between |S 1 | and |S 2 |. We note that max ϕ1−ϕ2 2 = 1 2 − 1 N x and min ϕ1−ϕ2 2 = 1 N x for all N and the specified range of x. Since cos x > sin x for 0 ≤ x < 1/2, we conclude that and thus |S 1 | > |S 2 | for 0 ≤ x ≤ π/2. This proves that the theorem is true for the m = 2, N = 4 base case. We are now ready to prove the inductive step. We consider two cases. Case (i) assumes every pair is balanced. By this we mean that g(j) = −g(2m + 1 − j). By inspection, this gives the same result as |S 1 | and |S 3 | for the m = 2, N = 4 case. Case(ii) assumes that Case(i) is not true and is proved by induction. Since Case(i) is not true, there must exist two non-balanced pairs for which g(j) = g(2m + 1 − j) = +1 and g(k) = g(2m + 1 − k) = −1. As an illustration in the m = 4, N = 8 case, the balanced string {01000111} has this property. The inductive step is 2m j=1 g(j)e ıφj ≤ |S ({l, 2m + 1 − l, k, 2m + 1 − k}) | + |S(w)|, (3.27) where S(w) is maximized for the m = 2, N = 4 base case. Only when |S ({l, 2m + 1 − l, k, 2m + 1 − k}) | itself is maximized is equality achieved and the total sum maximized. This occurs for the ASB strings.
We have established that we can bound the probabilities of determining whether an unknown function is balanced or constant in a single query in a CV setting. In the next section will determine an upper bound for the query complexity of a CV algorithm in terms of success probability in terms of the number of queries.
IV. BOUNDING THE QUERY COMPLEXITY OF THE CONTINUOUS VARIABLE DJ ALGORITHM
Before we bound the query complexity, we make some important observations regarding the comparison between the discrete DJ algorithm and the CV DJ algorithm.
First, we note that probability distributions P C (x) and P ASB (x) defined by Eqs. (3.14) and (3.15) respectively, are in H 2 , the Hilbert space of L 2 (R) functions over the interval [−∞, ∞]. This implies that since we are measuring over a finite interval, the CV DJ algorithm is necessarily probabilistic. Furthermore, we noted that P and δ are related by the uncertainty relation given in Eq. (3.23). This leads to the conclusion that even in the limit of the improper delta function δ(x − x 0 ), the CV DJ algorithm remains probabilistic. This conclusion is contrary to that made in [3].
Second, we compare the operator descriptions of the DV DJ and the CV DJ, which we express as The first equation represents the quantum DJ algorithm operator expression given in Eq. (2.16). The second equation is the analogous CV operator expression determined by concatenating the steps of the previous section. There is a high degree of similarity between these two expressions, but there are mathematical subtleties.
1. The CV position state φ 0 (x) is not a perfect analog to the computational basis state |0 · · · 0 except in the limit. However, this limit creates a state that is not in the RHS we argued is necessary for consistency [21].
2. The continuous Fourier transform is not equal to the CV extension of the Hadamard operator in a CV parameterized system with a finite Hilbert space. It is however, a convenient extension when the Hilbert space is infinite.
3. Finally, the diagonal operatorÛ f given by Eq. (2.5) has each entry taking on the value ±1 dependent on the value of f z . The CV analogue to this operator is the function f We now determine numerical values of the probabilities determined in Eqs. (3.20) and (3.21). We can readily calculate the probability of detecting if the function is constant where the sine integral is given by Note this probability depends only on the product P δ. If the function is the lowest-order antisymmetric balanced (ASB) (4.4) For P δ = π/2, the numerical values of these two probabilities are Pr Const = 2 (π Si (π) − 2) π 2 ≈ 0.77, (4.5) and Pr ASB = 4π Si (π/2) − 2π Si (π) − 4 π 2 ≈ 0. 16. (4.6) Given this probabilistic nature of the CV DJ algorithm, we need to develop a strategy to bound the error probability. We will employ the technique sometimes called probability amplification [27,28]. Our strategy will be to make m repetitions of the CV DJ algorithm where we assume that the oracle is set to the same function for each of the repetitions. Each repetition ends with a measurement. From this sequence of measurements we want to determine whether the unknown function is balanced or constant with high probability. Proof. We will adopt the convention that when we make a query to the CV DJ algorithm we either detect something (algorithm returns a 1), or we do not (algorithm returns a 0). We can thus treat multiple queries as a sequence of Bernoulli trials [29]. We assume that we have set our measurement limits to the optimal ±δ. The two events we are trying to uncover are the constant cases where, for ease of calculation we set the probability of detecting something is Pr C ≥ 3/4, and the balanced cases where the probability detecting something is Pr B ≤ 1/4. Note that we have set the probabilities to these rational numbers for illustrative purposes and to simplify the calculation. We can make this arbitrary setting, and we will get the same result as long as the probabilities are bound from 1/2 by a constant.
If each measurement is based on an independent preparation of the state φ 0 (x), then each of the queries are independent. After a series of m queries, we can use the Chernoff bounds of the binomial distribution to amplify the success probability [28,29]. The simplest (but somewhat weak) Chernoff bound on the lower tail is given by [28] as and on the upper tail as where µ is the expected mean of the resulting binomial distributions after m queries, and ǫ is the relative distance from the respective means. First, we bound the lower tail corresponding to the distribution of the constant case for which we have µ = m P C . Here we set ǫ = 1 3 , which expresses the probability for the value being less than half way between the two means as Pr[X < (m/2)] < e − m 24 . Similarly, we bound the upper tail for the balanced case for which we have µ = m P B . Here we set ǫ = 1, which expresses the probability for the value being greater than half way between the two means as Pr[X > (m/2)] < e − m 16 . Clearly the success is worse for the lower tail allowing us to bound the success probability of the CV DJ algorithm after m queries as This gives an error probability that is O (e −m ) as required.
We note that this is of the same order as the exponentially good success probability we have for the classical probabilistic approach given by Eq. (2.2). Also note that this query complexity is independent of the value of N . We have made no attempt to obtain a tighter bound preferring to show only that the success probability of the CV DJ algorithm is of the same order of that of the classical probabilistic approach to solving the DJ problem.
V. CONCLUSIONS
In this paper we have presented a rigorous framework for the analysis of the DJ oracle identification problem in a CV setting. The rigged Hilbert space (RHS) affords a consistent transition from the traditional discrete Hilbert space to the CV setting. Our framework allows us to define a consistent way of encoding N -bit strings into functions over the real numbers.
We have used this framework, and the selection of the sinc/pulse Fourier transform pair, to prove that a CV implementation of the DJ algorithm cannot provide the exponential speed-up of its discrete quantum counterpart. Additionally, we have presented a bounded-error, upper bound on the query complexity of the DJ problem within the constraints of our model. The lack of speed-up results from an uncertainty principle between the ability to encode perfectly in a continuous representation and the subsequent inability to measure perfectly in the Fourier-dual representation. This uncertainty relationship is manifest in Eq. (3.23), which relates P , the encoding extent, to δ, the measurement extent. A natural extension of this work would be prove a lower bound perhaps exploring the techniques along the lines of [30] from the perspective of different Fourier transform pairs.
This uncertainty relationship appears to be a natural feature of the CV setting, but it could also be used to advantage. There is likely to be oracle function symmetries that are particularly suited to different CV settings. For example in Sec. III, we showed that balanced functions with a higher number of zero crossings create sinc functions with frequency components further away from x 0 . It appears that an oracle identification problem designed to separate balanced functions according to frequency separation could be implemented in a CV setting and possibly provide advantage over classical or discrete quantum settings.
Furthermore, it would be interesting to classify the balanced functions from the perspective of different coherent states [31] in CV parameterized settings of both finite and infinite dimensions. The former would naturally involve the study of the coherent spin systems [32]. Furthermore the use of squeezed spin states should be studied [33]. Infinite dimension systems would naturally involve the study of implementations involving the coherent states of quantum optics [32,34].
Additionally, we have set up this framework in a manner that should allow any oracle identification problem to be analyzed in a similar manner in the CV setting. An implementation of a discrete quantum oracle, for example [35,36], requires a unitary operator representing the oracle. Provided we can create a diagonal representation of this oracle along the lines ofÛ f given in Eq. (2.16), our framework will naturally extend to it. Of course we need to be able to create an implementation of these oracles and that remains an important open question.
Other avenues of the extension of this framework include CV implementations of other hidden subgroup problems. The solution of Simon's problem [37] in this setting would be an obvious starting point as would the exploration of a CV implementation of Shor's algorithm [38]. Additionally, the CV framework could be extended to include analysis of noisy oracles along the lines of [20,39].
In closing we note that the transition from a discrete quantum information setting to a CV setting has many subtleties. In particular the improper delta functions must not be used. Limiting behaviour can be explored, but only if the limits are taken from the perspective of functions defined in the rigged Hilbert space. | 12,429.2 | 2008-12-19T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Robust Frame Synchronization Scheme for Continuous-Variable Quantum Key Distribution with Simple Process
In continuous-variable quantum key distribution (CVQKD) systems, high-quality data synchronization between two legitimate parties, Alice and Bob, is the premise of the generation of shared secret keys. Synchronization with specially designed frames is an efficient way, but it requires special modulating devices to generate these special frames. Moreover, the extra requirement of special modulating devices makes it technically impossible for some passive preparation schemes. We propose a novel approach to realize synchronization in this paper, which is different from those special-frame-based methods. In our proposed scheme, Alice publishes parts of the original signals as the synchronization frames and Bob takes these frames to perform the synchronization algorithm. Besides, a synchronization feature is applied to deal with phase shifts. The simulation results based on practical data demonstrate that the proposed synchronization scheme not only maintains a high success rate but simplifies the data processing flow at the same time, which dramatically reduces the computational complexity.
Introduction
Quantum key distribution (QKD) has become a popular topic for its confidentiality, which allows two legitimate parties far away to share secure secret keys through an untrusted channel with unconditional security [1][2][3]. Generally speaking, dominated protocols of QKD can be divided into two categories, which can be defined as discrete-variable QKD (DVQKD) [4,5] and continuous-variable QKD (CVQKD) [2,6,7]. In the DVQKD scheme, secret keys would be encoded on polarization states, phases, or other discrete variables of single photons. In the CVQKD system, information is encoded on the position and momentum quadrature of the light field. Then the receiver, Bob, uses homodyne detectors or heterodyne detectors to measure one or both quadrature components. By controlling excess noise, the CVQKD system can be achieved beyond 100 km at present through standard single-mode optical fibers [8,9]. Moreover, the CVQKD can utilize existing optical communication components, which provides a prospect of good integration with classical optical communications.
In a typical CVQKD system, Alice first prepares quantum states. Then secret information, which is produced from the true random number generator, is encoded on the position or momentum quadrature of quantum states by amplitude and phase modulations. After that, the modulated quantum states, which can be expressed as |x A + ip A , are sent to Bob through a quantum channel. Affected by quantum noise and other classical noise, Bob will receive a noise state |x B + ip B .
Transmission of the quantum signal over a lossy and noisy channel may highly affect the performance of the frame synchronization algorithm. For homodyne detection, Bob randomly chooses X or P measurement bases. Afterward, he compares them with Alice's bases and selects the variables with the same bases. After reconciliation and privacy amplification processes, Alice and Bob will share the same key data.
It is worth noting that synchronization in CVQKD plays an important role. Simply speaking, if the data of Alice and Bob are not aligned, decoding key information in Bob's side will be independent with the one Alice prepared, which results in inconsistent secret key strings after the reconciliation process, and thus deteriorates the overall performance. In a CVQKD system, clock synchronization makes two communication entities share the same clock to acquire accurate data. So far, clock synchronization schemes include the transmitted local oscillator (TLO) [10] and local local oscillator (LLO) schemes [11,12] , where the latter can thoroughly remove the related loopholes [13,14] introduced by transmitted LO signals. Frame synchronization determines the head of every signal string, so even minor synchronization errors will lead to a huge decrease in the mutual information between Alice and Bob. Most previous methods tend to use specific modulations to generate synchronization frames, and the well-organized frames are periodically inserted into the data frames by Alice [15][16][17]. Although these methods have been proved efficient in some situations, the performances of them are far from satisfied under low signal-to-noise ratio (SNR) scenarios. To overcome this shortcoming, a frame synchronization scheme based on phase disassembling and matching by comparing correlation was put forward [18]. However, in the synchronization procedure, computing correlation requires a lot of multiplication and the previous calculations cannot be reused in subsequent calculations. An expected frame synchronization scheme should have high-efficiency at a low SNR and low computational complexity at the same time.
Besides, in practical CVQKD applications, the quantum state will suffer unpredictable nonlinear effects, and the quadrature components of the optical field of quantum states will suffer phase shifts during signal transmissions [19,20], which means a well-designed frame synchronization scheme should be well tolerable of phase shifts. If the two legitimate entities have been successfully synchronized, the phase shifts can be removed by phase compensation methods [21,22]. So the synchronization process is usually previous to phase compensation, and frame synchronization should tolerate a certain amount of phase shifts.
To simplify the frame synchronization scheme and improve the efficiency and robustness of the CVQKD system, we propose a novel scheme here. In particular, a new feature is designed, which can tolerate phase shifts and synchronize in a strong noise environment. Each synchronization process requires only a few addition and subtraction operations and a Hamming distance comparison. In particular, we analyze the performance of this method under different phase shifts and various SNR settings. The results show that this scheme can tolerate different phase shifts and performs well at a low SNR. Moreover, the proposed scheme also keeps a good balance between performance and computational complexity.
The rest of the paper is organized as follows: In Section 2, we first introduce the synchronization process and the designed feature in detail, then illustrate the reason why this feature can tolerate different phase shifts. In Section 3, the simulations of the proposed algorithm under different parameter settings are performed. Finally, a brief conclusion is given in Section 4.
Synchronization in CVQKD
In the common frame synchronization scheme of CVQKD [15,16], the training frames should be added to realize data synchronization between Alice and Bob. The synchronization frames are modulated into a special format, known by Alice and Bob and can be easily recognized. However, in some special CVQKD schemes, it is difficult or even impossible to add synchronization frames into key data by modulation devices, such as the passive-state-preparation CVQKD scheme [23]. In the passive-state-preparation CVQKD scheme, Alice can split the output of a thermal source by a beam splitter and one mode is measured by herself while the other mode is transmitted into the other legitimate entity, Bob. As Alice directly split the output of the source and did not use any modulation devices to encode information onto the mode, it is hard to add synchronization frames into the signal. This inspired us to look for ways to synchronize using random number strings. Moreover, the traditional synchronization process usually needs a high range switch of light intensity. These light switching schemes make CVQKD systems more complicated and unstable.
These issues prompted us to improve the training-frame-based scheme into a modulation-free one without specified synchronization frames. In addition, phase drifts between the LO and signal will introduce extra trouble into the synchronization process. A practical scheme should overcome the phase drifts to successfully implement synchronization. In classical optical and wireless communications, synchronization can be performed by measuring the Hamming distance between the outputs of the transmitter and the received signals [24]. The Hamming distance equals to the different bits of two 0-1 sequences S 1 , S 2 . Comparing to the calculation of correlation, the Hamming distance has low computation complexity. Here, we can first convert the signals into 0-1 sequences by certain algorithms and then measure their Hamming distance. It should be mentioned that these transform algorithms must be robust against different environment noises.
Finding Robust Feature
In this part, we will mainly analyze the influence of the phase shift on the synchronization process, then introduce a robust feature. Alice sends quantum states |X A + iP A to Bob through a quantum channel with Gaussian distributed noise ξ and phase shift ∆ϕ. In fact, the noise in the channel can be divided into two parts. The one added by the channel is called channel-added noise. It can be expressed as χ line = 1/T − 1 + ε c (T is the transmittance of the quantum channel and ε c means the excess noise). The other noise is added by the thermal motion of detectors, called detection-added noise. The detection-added noise can be expressed as χ hom = (1 − η + ν el )/η (homodyne detector) or χ het = (1 + (1 − η) + 2ν el )/η (heterodyne detector), in which η means the attenuation factor and ν el means the thermal noise caused by electronics in homodyne detectors or heterodyne detectors. And the total noise referred to the channel input can be given by χ tot = χ line + χ hom /T. From reference [25], X A and P A are Gaussian distributed random variables. For simplicity, here we temporarily omit the attenuation. When Bob measures the quantum states |X B + iP B with a homodyne or a heterodyne detector, the measurement results can be expressed as where X A = A cos(θ), P A = A sin(θ). Without loss of generality, in the following analyses, we assume that ξ is a Gaussian distributed random variable with expectation 0 and variance σ, and the phase shift δϕ keeps the same within a small period time. From the above formula, we know that if we want to eliminate the effect of phase shifts in synchronization, some stable features must be found.
To cope with the phase shifts, here we introduce a new operator ,n can be defined in the same way) called incremental label. Now we investigate the effect of phase drift on it. The conditional expectation of the operator can be written as where V th is a positive threshold and ∆ϕ ∈ (−π, π) means phase shift. If ∆ϕ ∈ (−π/2, π/2), then the conditional expectation is positive. Otherwise, it will be a negative one. sign(x) is the sign function that outputs the sign of number x. The sign of the above conditional expectation is, So when ∆X (A,n) is larger than a significant positive threshold, the operator ∆X (B,n) can be regarded as a quasi-stable feature. Here we can apply this operator on a string of random numbers and yield a binary sequence. We first apply it on Alice's key string to get the binary sequence S A , and then apply it on Bob's one to get S B . Suppose that the phase shift ∆ϕ keeps the same for a while; if cos ∆ϕ is positive, the result will be S A = S B , else S A = −S B .
In the above discussion, we do not consider one case that ∆ϕ is approaching ±π/2. In fact, when ∆ϕ is close to ±π/2, X B is similar to the quadrature component P A . When this happens, another conditional expectation should be explored, From the above expression, we can see that if cos ∆ϕ approximately becomes 0, taking component P into consideration is another good way. In the following section, we will show how the above conclusion can be applied to real synchronization.
The conditional expectation of the incremental label and its sign give us some ideas that the sign of ∆X (AorB),n is stable in a noisy environment, and a robust 0-1 string can be constructed in this way. The relationship between the conditional expectation and phase shifts can help to deal with the phase drift problems in the synchronization process. This will be elaborated on in the following section.
Incremental Label
Based on the above analysis, the incremental label can be constructed. This labeling method will transform a random number sequence X, which can be expressed as (x 1 , x 2 , · · · ) into a binary sequence Y (y 1 , y 2 , · · · ) by the rules: Step 1. Sum the next L numbers of the current position, such as shown in Figure 1 X i+3 , X i+4 for current position X i+2 and L = 2. Then subtracting the sum of the former L numbers, the output is used as a descriptor. We call the 2L + 1 interval a transformation unit. Step we mark this position with symbol "0" (y i = 0). Step 3. After all the received signals are marked, the synchronization process begins. Every successive N bits of conversion sequence Y are seen as a feature, and we can calculate the Hamming distance of the two signal sequences to measure their similarity.
It should be mentioned that noise with zero expectation will be suppressed and their impact on synchronization is weakened. This transformation method is simple and efficient, and we will show its performance in the next section and analyze computation complexity in the computational analysis section.
To compete with phase drifts, the sender Alice can prepare four transformation sequences of her synchronization frames. Firstly, Alice generates the binary sequences TX A and TP A by using the rules listed in steps 1 and 2. Then their complements, TX A and TP A , can directly get a not operator. For example, if TX A is "0101, " then TX A is "1010; " the rules are the same for TP A and TP A . Bob also transforms the received X B or P B with the same rules.
After the sequence transformations, the similarity can be measured by calculating the Hamming distance between the transformed sequences of Alice's synchronization symbols and every segment of Bob's received signal. Here we want to make the cost function reach its peak value when synchronization succeeds, so the cost is rewritten where D(X 1 , X 2 ) means the similarity of sequences X 1 and X 2 , and H(X 1 , X 2 ) means the Hamming distance of X 1 and X 2 . Here, we define a new function, The location of synchronization is where the function F(A, B) reaches its peak value.
The Synchronization Flow
From the above derivations, we have now found a stable feature to endure phase drifts. The following synchronization scheme is based on this feature.
Step 1. Alice (the sender) selects parts of the random strings as the synchronization frame (see Figure 2a). Step 2. Alice transforms the selected sequences into 0-1 sequences using the incremental label algorithm proposed above. Both X and P components must be transformed. We the get two 0-1 sequences: TX A and TP A .
Step 3. Alice publishes the two 0-1 sequences TX A and TP A through the classical channel.
Step 4. Bob transforms X B or P B into 0-1 sequences by the incremental label algorithm, and matches them to the received two 0-1 sequences TX A and TP A bits by bits. Then he calculates the function F(A, B) = max D(TX A , TX B ), D(TX A , TX B ), D(TP A , TX B ), D(TP A , TX B ) (see Figure 2b).
Step 5. Alice and Bob synchronize at the position where the function F(A, B) reaches its peak value.
To verify the correctness of the proposed scheme, the synchronization process is simulated as follows. The encoded random strings in Alice's side are X A and P A , and Bob's received signals are X B and P B . Figure 3 shows the cost function D(TX A , TX B ) and D(TP A , TX B ) under different phase shifts. The cost function D(TX A , TX B ) reaches its peak value when the synchronization succeeds if the phase shift is 0. However, the value will bottom out when the phase shift comes to π. There will be no peak or valley values when the phase shift reaches π/2 or 3π/2. Similarly, the cost function D(TP A , TX B ) has a valley value corresponding to the π/2 phase shift situation while it reaches its maximum if the phase shift changes to 3π/2. The results also provide further evidence on how reasonable and feasible the proposed new function is in Equation (7).
Performance Analysis
To explore the influence of SNR (signal-to-noise ratio) and phase shifts on the performance of the proposed frame synchronization algorithm, we prepare several strings of data with natural Gaussian distributions generated from ASE output signals with length 200,000. We add Gaussian white noise of different variance to the output signal to simulate different noise environments. Here we randomly select some segments of the signals as synchronization frames, and we define the proportion of the times of successful synchronization as the success rate. To improve the success rate, the parameter L should be longer than 10 and the threshold V th could be set as the variance of the received signals. Figure 4a,b shows that the influence of different phase shifts on the proposed algorithm. The synchronization process operates at an SNR of −13 dB with feature-lengths of N = 512, 1024, 2048. It can be found that increasing the feature-length can significantly improve performance. We can find the success rate will bottom out for the phase shifts ∆ϕ = 45 • , 135 • , 225 • , 315 • , and the success rate seems to be unsatisfactory. This is because when phase shifts take these values, cos(∆ϕ) = sin(∆ϕ), the proposed algorithm merely deals with one quadrature X or P. If Bob applies a heterodyne detector to measure X B and P B simultaneously, these two values can both be used to perform synchronization and, thus, better results can be achieved. The above analyses show that although the proposed scheme is more suitable for protocols based on heterodyne detection, it can also be applied to homodyne-based protocols. Using a heterodyne detector to measure X B and P B simultaneously in the final matching step will get a better result. Merely considering one quadrature X or P can also synchronize well when the phase shifts occur.
Performance Influenced by Phase Shifts
(a)
Synchronization with Different SNRs
In Figure 5a-d, we explore the performance of the proposed synchronization algorithm under different SNR conditions with phase shifts ∆ϕ = 0 • , 45 • , 90 • , 135 • , respectively. We find that the decrease in success rate caused by the low SNR can be effectively improved by increasing the length of features. From the above discussions, we know that the phase shifts ∆ϕ = 45 • , 135 • are two points that the success rate reaches its minimum value, which is also demonstrated in these figures. When setting the feature-length as N = 2048, despite the phase shifts, the success rate will be higher than 90% when SNR is larger than −20 dB. Usually, the data block of a CVQKD system with a repetition rate of 100 MHz has 100,000 characters. If we set the feature-length N to maximal 2048, the fraction, which is used for synchronization, is 2.048%. Our synchronization scheme requires only a small sacrifice of data.
Algorithm Complexity
Complexity is another important factor for a practical synchronization algorithm. In a practical CVQKD system, the synchronization algorithm should work in real-time for high efficiency, otherwise, it will require a mass of storage to store all the received data. At first sight, according to the algorithm flow, every step has 2NL times added to operations (the transformation unit length L is mentioned above, usually it can be set as L = 13), N times of subtraction operation and an N-bit Hamming distance operation. This is because data reuse is not considered. In a practical synchronization, every synchronization step needs only 2L times add operations except for the first step, 1 time subtraction operation, and an N-bit Hamming distance operation. The previously stored transform results can be used. It should be noted that the expression ∑ i+L j=i+1 x j − ∑ i−1 j=i−L x j <= V th (center on X i ) just needs to be calculated one time for every step and a unique binary mark will be allocated to the corresponding location Y i . There is no need to calculate it again when generating the next feature. The analysis shows that the proposed scheme can save computation resources and maintain good performance.
Comparing to the frame synchronization method based on the correlation calculation [18], the proposed algorithm has a much lower computation complexity. In particular, if the feature-length is N, the calculation of a correlation needs multiplication operations to get the result of every x i y i and N − 1 times add operations for the final result. Furthermore, the times of added to operations can be well reduced. One can first divide the entire sequence into several pairs, then calculate the sums of every pair to get the first N/2 results. After the final iterative process, the add operation times can be reduced to log 2 N. However, there are not any multiplication operations in the proposed scheme, which significantly reduces the computational complexity (see Table 1).
Items Add/Substract Multiplication Comparation Hamming Distance
Correlation
Security and Adaptivity Analysis
The realistic system may incur loopholes due to the imperfections of the implementation process, although the CVQKD protocols are theoretically proven to be secure. Traditional frame synchronization methods are performed by alternately transmitting a strong pulse. Although there has not been any practical attack on these frame synchronization schemes, the use of strong pulses can be manipulated, which may incur potential loopholes. Moreover, the frame synchronization methods based on the designing of the special frame may also introduce potential risks, since the synchronization frames can be distinguished from the key data. A well-designed synchronization method should conceal its synchronization frames into key data so that it is hard for an eavesdropper to distinguish them.
Synchronization frames of this proposed scheme are similar to data frames but they are uncorrelated, which is different from the traditional schemes. In our proposed synchronization scheme, we regard parts of the signals as synchronization frames, so the signals and synchronization frames have the same distribution and the same power. If an eavesdropper intends to attack the CVQKD system through the potential loopholes in the frame synchronization method, she must distinguish the synchronization frames from the quantum signals. So she will detect the quantum signals and this will inevitably cause an increase in excess noise. Her attacks will be then found by the legitimate parties in the following key generation steps. Although the synchronization method in this article uses parts of data as the synchronization frame. The revealing of these synchronization frames does not leave any useful information about the secret key.
In our synchronization scheme, a fraction of data is used as a reference frame. It means that the scheme will also work at the cost of a slight drop in the secret key rate as the previously proposed frame synchronization schemes. We can evaluate the influence of using synchronization frames on the secret key rate when considering the finite-size effects, where I AB means the mutual information between Alice and Bob; χ BE is the Holevo bound on the information between Bob and Eve; ∆(n) can be approximated to 7 log 2 (2/ε) n ; N denotes the block length and n denotes the size of the samples used for final key generation. Figure 6 shows the secret key rate curves with or without our synchronization scheme. In the simulation, the lengths of synchronization frames are all 2 12 in different scenarios; the reconciliation efficiency β is set to β = 0.956; the attenuation coefficient of optical fiber is set to γ = 0.2 dB/km and the excess noise of the quantum channel is e = 0.01, as experimentally shown in Ref [8]. These two types of curves almost overlap, which indicates that the data sacrificed for synchronization have no significant influence on the secret key rate. The secret key rates with or without the considerations of using the proposed frame synchronization. The curves from left to right respectively correspond to block lengths of N = 10 10 , 10 11 , 10 12 , respectively. The solid blue lines correspond to the secret key rates without considering the cost of synchronization. The dotted red lines correspond to the secret key rates with consideration of using the proposed frame synchronization.
In Figure 6, we show the two types of curves (the secret key rate curves with or without our synchronization scheme) are almost consistent. Whether this consistency changes as the parameters β, γ, and e change is worth exploring. Figure 7 reveals that the performance of our algorithm does not deviate under different parameter-settings. We keep the lengths of synchronization frames equal to 2 12 . The standard setting in Figure 7 is the block length N = 10 10 , β = 0.956, γ = 0.2 dB/km, and e = 0.01. Keeping other parameters the same, we separately set the parameters β = 0.93, 0.956, 0.98, γ = 0.18, 0.2, 0.22 dB/km and e = 0.008, 0.01, 0.012. It can be seen that the corresponding curves are all nearly coincident. Essentially, it is because the synchronization scheme uses just a little data for the synchronization frames. Another important thing needs to be considered here is whether the proposed frame synchronization scheme is valid or not when the attenuation of the quantum channel fluctuates. Actually, except the threshold V th (the threshold V th could be the variance of the received signal) in the labeling procedure must be changed with the value of the received signal, the whole algorithm flow is independent of fluctuations of channel attenuation. The algorithm generates incremental labels by considering relative values rather than absolute values. So the algorithm can resist attenuation fluctuation to some extent.
We simulate the process of quantum channel attenuation fluctuation and test the performance of the proposed synchronization algorithm in this condition. We first simulate the synchronization performance of Bob receiving signals through a constant attenuation channel. Afterward, we change the channel into a fluctuation one and compare the matching cost of these two situations (see Figure 8). The matching cost curve changes little despite the existence of channel attenuation (Figure 8b,d are almost the same). Therefore, attenuation fluctuation has a limited effect on synchronization.
Conclusions
Synchronization is a crucial step in the CVQKD. Traditional methods always need to construct special synchronization frames. We propose here a simple and robust synchronization scheme without particularly designing the frame for the CVQKD system. In the proposed scheme, the sender Alice only needs to transmit parts of the quantum signals as synchronization frames to the receiver Bob. A novel feature is designed to help find the correct synchronization location. The analysis of our scheme shows that the feature we designed can tolerate phase shifts among range (0, 2π) and the scheme can synchronize well under low SNR conditions. The simulations of the scheme under different parameter settings indicate that the performance can be significantly improved with increasing feature-length. Moreover, the proposed feature has lower computational complexity while maintaining a good synchronization performance. | 6,347.6 | 2019-11-23T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Charge migration of multilayer oil paper on the process of partial discharge under AC voltage
As one of the main reasons for the deterioration of Nomex paper and charge migration, partial discharge (PD) plays a critical role during PD degradation. In this study, the PD characteristics of multilayer oil paper are focused on. In order to explore the characteristics of charge migration, the experiments of the relative permittivity, conductivity, isothermal surface potential decay, scanning electron microscopy and Fourier infrared spectroscopy have been carried out. The results show that the discharge branches of the second layer are two and three times longer than those in the first layer. In addition, the trap level of the first layer for PD degradation samples increases with the degree of PD degradation. The charges captured by deep traps provide seed charges for PD on the surface of the first layer. The charge transportation will accelerate the formation of shallow traps. The charge distribution and shallow traps contribute to the expansion of discharge branches.
INTRODUCTION
The on-board traction transformer plays an important role in power conversion and transmission of electric locomotives [1]. Oil paper insulation is one of the important insulations of onboard traction transformer. Nomex papers with good heating ability and excellent electrical properties are used as inter-turn insulation material for windings of on-board transformer [2]. During the operation process of on-board traction transformer, some defects are produced because of the electrical, thermal and mechanical stresses. The defects, that is, the fracture, metal tip and cavity tend to lead to the unevenness of electrical field, the accumulation of free charges, and even the occurrence of partial discharge (PD). The deterioration of insulation and the chain scission of Nomex paper caused by PD may correlate with the variation of trap parameters and charge migrating characteristics [3][4][5]. Therefore, it is very important to take PD characteristics and the charge transportation characteristics in PD degradation progress into consideration.
At present, most existing achievements have focused on the characteristics of PD of oil paper insulation [6][7][8]. For example, Kunicki and Cichoń [9] described the PD characteristics of oil paper insulation under long-term AC voltage. Li et al. [10] This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. investigated PD characteristics of oil paper insulation under superposed inter-harmonic and pure AC voltages. It was found that the higher dv/dt caused by inter-harmonic component was the main reason for the difference of PD characteristics between superposed inter-harmonic and pure AC voltages. Cui et al. [6] investigated the division of PD progress and reported that the development of PD is depended on the depolarisation of cellulose in pressboard.
Recent research works indicated that PD characteristics are related to the charge transportation and the trap distribution of oil paper insulation [11]. Muhammad et al. [12,13] reported that the shallow traps produced by the aluminum oxide (Al 2 O 3 ) nanoparticles in oil paper insulation contribute to the enhancement of creeping flashover voltage. Wei et al. [14,15] described the variation of trap energy for oil-impregnated paper aged under electrical and thermal stresses. It showed that the trap energy decreases under electrical stress but increases under thermal stress. The variation of trap parameters causes the change of charge transportation characteristics [16]. Moreover, it leads to the variation of PD activities and has been reported in [17][18][19][20]. However, the literature on charge activities during discharge progress is rare. Meanwhile, it is found that charge distribution and trap level are relevant to the intensity of PD via a series of experimental tests. Consequently, it is essential to investigate the charge migration characteristics of multilayer oil paper insulation on the process of PD under AC voltage.
In order to investigate the characteristics of charge migration, PD experiments of multilayer oil paper are designed. The samples at different PD stages are selected according to the discharge characteristics. The tests of relative permittivity, conductivity, isothermal surface potential decay (ISPD), scanning electron microscopy (SEM) and Fourier infrared spectroscopy (FTIR) are carried out and the corresponding properties are analysed. The mechanism of charge migration on the PD process is also discussed.
The experimental set up of PD
The experimental setup of PD measurement under AC voltage is shown in Figure 1, including the AC power source, PD measurement system and test cells. The AC power source was generated by PD-free transformer rated at 100 kV and 10 kVA, which was measured by the divider with a ratio of 1000:1. PD was detected by MPD600 system consisted of the CPL542 and the MPD600. The inter-turn insulation defects of on-board traction transformer are prone to appear under multi-stresses in the operation process. In this experiment, a needle-plane defect model was designed according to the operation condition as shown in Figure 2. In this model, Nomex papers produced by DuPont with three layers were selected, and the thickness of each layer was about 0.18 mm.
Before PD measurements, the samples need to be treated to avoid the influence of moisture and gaseous impurities. Initially, the samples were treated in the air condition of 105 • C for 48 h to remove the moisture. Then, the dry samples were treated in vacuum at 85 • C for 24 h under the pressure of 80 Pa to remove the gaseous impurities. Last, Nomex papers were impregnated in the Karamay 25# transformer oil under vacuum condition as 80 Pa at 85 • C for 48 h. Figure 3 shows the measuring system of surface potential. The experiment was performed at room temperature with relative humidity of 40%. The needle electrode was set to 1 cm above the position where the high voltage electrode was placed in the PD experiment. Samples were charged in corona charging method. Before the experiment, the surface of the samples was wiped with ethyl alcohol. Then, they were treated in the air condition of 80 • C for 24 h to remove the moisture. In order to obtain a uniformly distributed charge, the gate electrode was added between the ground and the needle electrodes, the gate electrode was set to 5 mm above the samples. The needle and the gate electrodes were connected with the DC voltage. Its voltage amplitudes were ±8 kV and ±3 kV, respectively. The voltage applied to the needle and the gate electrodes was held for 10 min. Then, the samples were rapidly shifted to the probe to ensure that the probe was in the same position as the needle. An electrostatic voltmeter (trek model 341-b) connected to a Kelvin probe (model: trek-3455-et) was used to measure the surface potential. The probe was positioned 3 mm above the samples [21]. The curves of surface potential with the increase of the time were recorded. We can use the curves to calculate the trap distribution of samples.
Measurement of trap distribution
The trap density and energy can be expressed as [22,23] where t is the decay time, V represents the surface potential. ε r is the relative permittivity of the sample. ε 0 is the permittivity of vacuum. q is the elementary charge. L is the thickness of the samples. T is the Kelvin temperature. k is the Boltzmann constant. N(E) is the trap density occupied by carriers at trap level E. E t is the trap energy. v is the attempt to escape frequency and is selected as 10 12 s −1 .
PD characteristics
The applied voltage was raised in steps of 0.5 kV until the experiment model was broken down and each step was held for 5 min. The PD characteristics under AC voltage were obtained, and the process of PD as shown in Figure 4 was recorded.
As can be seen from Figure 4, 'tree-like' carbonised tracks on the first layer of oil paper that locates close to the needle tip could be observed when the applied voltage is below 15.5 kV.
They are formed because of the erosion of PD branches on Nomex paper. It can be observed that the intensity of PD and the length of discharge branches increase as the applied voltage increases. During the stage of near breakdown, that is, applied voltage up to 15.5 kV, the first layer of oil paper is punctured and interlayer discharge occurs in the second layer of oil paper. The interlayer discharge branches are not obvious, and as a matter of the fact, they are observed through the first layer. Compared with the discharge below 15.5 kV, the interlayer discharges intensify and the much longer branches occur in the second layer of multilayer oil paper. Also, the sound of the discharge can be heard. A large number of bubbles existing at the tip and the bubble column are formed. The intensified discharges continuously bombard the oil paper sample and lead to the breakdown of oil paper sample quickly. Figure 5 shows the PD patterns at different applied voltages. It indicates that PDs mainly concentrate on the rising edges and part of the falling edges of the applied voltage. With the increase of the applied voltage, the amplitude and number of PD pulses rise, resulting in the difference of PD patterns. PD pattern presents a 'triangular' distribution when the applied voltage is 12 kV but shows a 'trapezoid' distribution between 12 and 15 kV. At 15.5 kV, the PD pattern shows a 'triangular' distribution. These differences are related to the charge transportation.
Dielectric properties
The real part of the relative permittivity ε' of PD degradation samples and untreated samples are tested, and the results obtained are shown in Figure 6. It can be observed that ε' gradually decreases as the frequency increases. The results might contribute to explain that ε' is related to the polarisation characteristics of oil-impregnated Nomex paper. The polarisation occurrs in the interface between chopped fibre and pulp under the action of an electric field. At the same time, the interfacial polarisation is related to the binding capacity of charges. With the increase of frequency, the rate of polarisation cannot overtake the change rate of frequency, resulting in the decrease of ε'. It could be observed that ε' shows a decreasing tendency when the applied voltage is below 14 kV but shows an increasing tendency when the PD degradation is 15 kV. This is due to the presence and enhancement of holes in oil-impregnated Nomex It can be seen in Figure 6(b) that the value of ε' for the first layer is the minimum, while the value of ε' for the third layer is the maximum. This reveals that the number of pores inside the oil-impregnated paper increase as the degree of PD degradation increases, causing the decrease of ε'.
Conductivity properties
Volume and surface resistivity of different PD degradation samples are measured by using Model 8009 and Keithley 6517B. Then, the volume and surface conductivity can be calculated: where σ v is the volume resistivity, σ s is the surface resistivity, ρ v is the volume conductivity and ρ s is the surface conductivity of Nomex paper. The volume and surface conductivity of Nomex paper at different voltages are shown in Figure 7. It can be seen that the volume and surface conductivity of samples keep increasing with the PD degradation. The conductivity of untreated samples is much smaller than that of PD degradation samples. It is due to the appearance of tree-like' carbonised tracks caused by PD. The carbonised tracks are intensified and expanded along the surface of oil-impregnated samples with the increase of PD degradation degree. Therefore, it comes out that carbonisation plays an important role in the growth of volume and surface conductivity. Table 1 shows the volume and surface conductivity of different layers for PD degradation samples at 15 kV. It can be observed that the surface conductivity of the first layer is the largest, while the surface conductivity of the third layer is the smallest. It indicates that the macromolecular chains are broken and polar groups are formed in samples caused by a strong field. As a result, more conductive ions are produced that accelerate the increase of the conductivity. Figure 8 shows the trap distributions of the first layer for different PD degradation samples. It is observed that the hole and electron trap distributions show the same profile. Besides, the curves of each trap distribution have two peaks that present The trap distributions of different layers for samples at 15 kV are shown in Figure 9. It can be seen that a large number of shallow traps have occurred in the second and third layers of oil-impregnated Nomex paper. As for the hole trap distribution of the second layer, the shallow trap density is the largest but the trap level is the lowest. It means that the second layer and third layer is gradually degraded because of the charge transportation characteristics, that is, the trapping, de-trapping, recombination and migration of positive and negative charges.
Surface morphology and chemical composition
The Nomex paper, called Aramid paper, consists of the chopped fibre and pulp. The chopped fibre plays an important role in the skeleton. The pulp fibre is the filler and the adhesive. The surface morphology of Nomex paper is characterised by SEM. Figure 10 shows the SEM of untreated and PD degradation samples. It is noticed that the surface of untreated Nomex paper is smooth, while the chopped fibres and the pulp are gathered tightly. With the increase of PD degradation at 12 kV, the holes and debris are formed on the surface of Nomex paper of the first layer. The bonds between the chopped fibres and pulp are loose slightly. At 14 kV, the chopped fibres begin to break. A large number of holes and debris are formed on the surface of Nomex paper of the first layer. The chopped fibre becomes slender due to electrical stress. When the applied voltage is up to 15 kV, the chopped fibres become loose and disordered, and the pulp is destroyed completely near the needle in the first layer. Figure 11 shows the SEM of the second and third layers for the PD degradation sample. It is shown that the second and third layers of Nomex paper are destroyed slightly.
3.3.2
Chemical composition of Nomex paper
The effect of PD on trap distribution of Nomex paper
The trap distribution of Nomex paper is related to the physicochemical properties. According to the obtained SEM and FTIR results, the surface morphology and chemical composition of Nomex paper changed during the PD progress, resulting in the variation of trap parameters.
For untreated Nomex paper, its surface is smooth and the corresponding trap distribution is depended on the deep trap. During the PD progress, Nomex paper is gradually degraded for the charge activities when the applied voltage is below 12 kV. The broken molecular chains and surface morphology further result in the variation of trap distribution of samples, that is, the density of deep trap decreases and the density of shallow traps increases (Figures 8 and 9). With the increase of applied voltage, samples are degraded for the bombardment of charged particles and heat effects caused by PD. The increased amplitude and intensity of PD cause the destruction of the surface morphology of Nomex paper (Figures 10 and 11). The formation of holes and chain scission may contribute to the increase of trap level. Besides, the chemical erosion caused by PD intensifies, leading to the decrease of absorption peak in 1642-1605 cm −1 for the amide band ( Figure 12). It indicates that chemical chains are broken, while impurities with hydroxyl groups are formed. These impurities existing in the Nomex paper cause the faster injection and the slower de-trapping of charge. It will result in the trap energy level of the first layer for the sample increases as the PD degradation increase. Figure 13. It can be seen that the charge distribution of the positive half cycle is different from the negative half cycle. In the phase from 0 to π/2, the positive charges are gradually accumulated on the surface of Nomex paper and migrate along its surface under the electrical field. The recombination of positive and negative charges will release energy, causing the appearance of PD. In the phase from π/2 to π, the majority of positive charges are accumulated on the surface of Nomex paper. Also, the number of accumulated positive charges increases with the increase of phase. The accumulated positive charges cause the distortion of the electrical field. The distorted field promotes the reappearance of PD ( Figure 5). Besides, negative charges are always accumulated at the position far from the centre. The longer migration distance of the electron at the same condition is found. This is due to the mass of the electron being smaller than ion. In the phase from π to 3π/2, the negative charges are gradually accumulated on surface of Nomex paper and migrate along its surface under the electrical field. The recombination of negative and positive charges will release energy, causing the appearance of PD. In the phase from 3π/2 to 2π, negative charges are accumulated on the surface of Nomex paper. The accumulated negative charges cause the distortion of the electrical field. The distorted field promotes the reappearance of PD ( Figure 5).
Charge migration along surface
The surface current density of free charge produced by the field component along the surface of Nomex paper can be written as [25,26] where σ s is the surface conductivity of Nomex paper, E t (t) is the field component along the surface of Nomex paper.
FIGURE 13
Charge transportation models of Nomex paper under AC voltage per cycle [24]. (a) 0 ∼ π/2, (b) π/2 ∼ π, (c) π ∼ 3π/2, and (d) 3π/2 ∼ 2π It can be seen from Equation (5) that j s (t) is related to E t (t). In the phase from π/2 to π and from 3π/2 to 2π, a large number of charges are accumulated on the surface of Nomex paper. The accumulated charges cause the intensification of electrical field E t (t), which further results in the increase of surface current density. With the increase of applied voltage, it is easier for charges to migrate along the surface of Nomex paper. Moreover, the increase of surface conductivity of the Nomex paper also facilitates the charge migration along the surface (Figure 7). In addition, the migration distance of charge is positively correlated with the amplitude of applied voltage [27]. The increase of applied voltage causes the expansion of the charge distribution along the surface of Nomex paper.
When the applied voltage is below 15 kV, the surface current density intensified and the charge distribution expanded with the increase of applied voltage, resulting in more serious damage on the surface of Nomex paper and the extending of PD branches (Figure 4). In addition, the result of trap measurement indicated that the trap level of the first layer increased with applied voltage (Figure 8). It indicated that the process of charges de-trapping is difficult, and trapped charges will provide seeds for the PD.
Charge migration along volume
Assuming an infinitesimal volume element of Nomex paper with the height h and potential difference, the current density of free charges through the volume can be given [25,26] as where ε is the permittivity of insulation paper with the height h, and σ v is the volume conductivity. As discussed in Sections 3.1.2 and 3.1.3, ε' of the first, second and third layers for Nomex paper gradually increase but σ v decrease (Figures 6 and 7). The value of σ v is much smaller than ε'. Consequently, j v (t) is mainly determined by ε'. According to Equation (6), it is known that less free charges could migrate along the volume of the degraded sample. During the PD progress, free charges gradually migrate from the first layer to the second and third layers under the electric field. The charges transportation along the bulk of Nomex paper will destroy the surface morphology and break the chemical bonds of the Nomex paper. Meanwhile, the degradation of samples further brings about the difficulty in charge migration along volume, resulting in the formation of space charges and the distortion of the electric field. It accelerates the formation of shallow traps in the second layer of the samples. Free charges seized by shallow traps may free quickly from them [28]. As a consequence, the charges trapped in shallow traps are easy to participate in the development of PD. A large number of shallow traps in the second layer may contribute to the long branches of the interlayer discharge when the applied voltage is 15.5 kV (Figure 9).
CONCLUSION
In this study, PD and charge migration characteristics of multilayer oil paper are investigated on the discharge progress. The main conclusions are presented as follows: 1. PD is mainly concentrated on the rising stage and part of the falling stage under the AC voltage. The discharge branches of the second layer are much longer than that of the first layer for PD degradation samples. 2. The trap level of the first layer for PD degradation samples increases as the applied voltage rises, and a large number of shallow traps are formed on the second layer of PD degradation samples. 3. The charges captured by deep traps in the first layer tend to provide seed charges for PD along the surface. The expansion of charge distribution range and the increase of charge accumulation are the main reasons for the increased length and quantity of discharge branches. 4. The shallow traps caused by charge transportation in the second and third layers make more trapped charges participate in PD. | 5,061 | 2021-03-03T00:00:00.000 | [
"Physics"
] |
The PML-Interacting Protein DAXX: Histone Loading Gets into the Picture
The promyelocytic leukemia (PML) protein has been implicated in regulation of multiple key cellular functions, from transcription to calcium homeostasis. PML pleiotropic role is in part related to its ability to localize to both the nucleus and cytoplasm. In the nucleus, PML is known to regulate gene transcription, a role linked to its ability to associate with transcription factors as well as chromatin-remodelers. A new twist came from the discovery that the PML-interacting protein death-associated protein 6 (DAXX) acts as chaperone for the histone H3.3 variant. H3.3 is found enriched at active genes, centromeric heterochromatin, and telomeres, and has been proposed to act as important carrier of epigenetic information. Our recent work has implicated DAXX in regulation of H3.3 loading and transcription in the central nervous system (CNS). Remarkably, driver mutations in H3.3 and/or its loading machinery have been identified in brain cancer, thus suggesting a role for altered H3.3 function/deposition in CNS tumorigenesis. Aberrant H3.3 deposition may also play a role in leukemia pathogenesis, given DAXX role in PML-RARα-driven transformation and the identification of a DAXX missense mutation in acute myeloid leukemia. This review aims to critically discuss the existing literature and propose new avenues for investigation.
THE PROMYELOCYTIC LEUKEMIA PROTEIN
The Promyelocytic Leukemia (PML) gene was originally identified at the breakpoint of the t(15;17) translocation of Acute Promyelocytic Leukemia (APL), which generates the PML/retinoic acid receptor (RAR)α oncogene, an inhibitor of PML and RARα functions (Salomoni et al., 2008) [please refer to accompanying articles 1 and reviews in the field, e.g. (Grimwade and Solomon, 1997;Brown et al., 2009;de The and Chen, 2010), for detailed information on APL pathogenesis]. PML can localize to the cytoplasm [for more extensive discussion on the role of cytoplasmic PML, see (Lin et al., 2004;Giorgi et al., 2010;Pinton et al., 2011) as well as this issue of Frontiers] and the nucleus, where it forms the PML nuclear body (PML-NB), of which it is the essential component (Salomoni and Khelifi, 2006;Salomoni et al., 2008). The PML-NB is a subnuclear structure associated with storage and post-translational modifications (PTMs) of several nuclear factors [ (Salomoni et al., 2008) for more extensive discussion on the role and regulation of PML-NBs, see comprehensive reviews in the field (Zhong et al., 2000b;Bernardi and Pandolfi, 2007;Lallemand-Breitenbach and de The, 2012), including this issue of Frontiers]. The PML-NB is disrupted in APL cells by PML/RARα (Salomoni et al., 2008). Both PML and PML/RARα (via the PML moiety) can be targeted pharmacologically using arsenic trioxide (ATO), which, in part through direct binding, promotes their ubiquitin-dependent degradation (Jeanne et al., 2010;Zhang et al., 2010). ATO is used in APL therapy because of its ability to target the leukemic stem cell pool (de The and Chen, 2010).
Although the PML gene is rarely mutated in cancer, its protein expression is lost in a number of human tumors, suggesting that it acts as tumor suppressor. Indeed, PML limits tumorigenesis in APL, lung, and prostate cancer models (Salomoni and Pandolfi, 2002). However, recent studies have highlighted a potential role of PML in established tumors. In this respect, PML is required for maintenance of the leukemia-initiating stem cell pool in chronic myeloid leukemia (Ito et al., 2008). Notably, ATO phenocopies the effect of PML loss in leukemic stem cells and requires PML for this effect (Ito et al., 2008). Another growth suppressor, p21 controls leukemic stem cell maintenance via regulation of genomic stability (Viale et al., 2009). Furthermore, an additional study from the Pandolfi's group showed that PML plays an important prosurvival role in cancer via regulation of tumor cell metabolism [(Carracedo et al., 2012); for more extensive discussion on the role of PML in tumorigenesis, see comprehensive reviews in the field (Salomoni and Pandolfi, 2002), including this issue of Frontiers].
What is PML nuclear function(s)? Several studies have implicated PML in regulation of transcription (Zhong et al., 2000b;Bernardi and Pandolfi, 2007;Salomoni et al., 2008). In this respect, PML-NBs localize in the proximity of active transcription sites in a cell cycle-dependent manner (Kiesslich et al., 2002). Notably, PML can directly regulate the function of several transcription factors (Bernardi and Pandolfi, 2007). For instance, PML interaction with the p53 tumor suppressor promotes p53-dependent transcription in a PML-NB-dependent as well as -independent manner (Bischof et al., 2002;Bernardi et al., 2004;Bernardi and Pandolfi, 2007;Salomoni et al., 2008). Furthermore, work from our group and others have shown that the tumor suppressor and transcriptional repressor retinoblastoma (pRb) also localizes to PML-NBs (Alcalay et al., 1998), resulting in alterations of its phosphorylation status (Ferbeyre et al., 2000;Regad et al., 2009). Interestingly, not only transcription factors are found in PML-NBs, as a number of chromatin regulators localize to these structures, such as the histone acetyltransferase CREB-binding protein (CBP)/p300, which can acetylate histones as well as transcription factors. In this respect, it has been proposed that in senescent cells PML promotes p53 acetylation via dynamic localization of CBP to PML-NBs (Pearson et al., 2000). It is presently unknown whether PML could affect CBP-mediated acetylation of histone tails, in addition to transcription factors. It is important to note that PML-NBs contain chromatin-associated factors with repressive activity, such as histone deacetylase 1 (HDAC1), the corepressors N-Cor, Sin3A (Khan et al., 2001), and the heterochromatin-associated protein 1 (HP1) (Seeler et al., 1998). Together, these findings suggest that PML could serve as scaffold for multiple chromatin-remodeling complexes, with potential implications for both transcriptional activation and repression. Interestingly, there is evidence that PML-NBs might be involved in regulation of chromatin architecture, as some genetic loci are non-randomly associated with the periphery of PML-NBs (Torok et al., 2009). Furthermore, PML has been implicated in special AT-rich sequence-binding protein 1 (SATB1)-mediated regulation of chromatin architecture and gene expression (Kumar et al., 2007). Although these studies suggest an involvement of PML in chromatin regulation via interaction with histone-modifying enzymes and other chromatin regulators, our understanding of PML and PML-NB role in this context remains limited.
New exciting studies now link PML to the histone loading machinery, with implications for chromatin remodeling and cancer pathogenesis. This will be the main focus of the present review article.
THE PML-INTERACTING PROTEIN DAXX IS A CHAPERONE FOR THE HISTONE VARIANT H3.3
The death-associated protein 6 (DAXX) interacts with PML and is found in PML-NBs as well as heterochromatin (Khelifi et al., 2005;Salomoni and Khelifi, 2006). DAXX recruitment to PML-NBs occurs via binding to SUMOylated PML with DAXX SUMOinteracting motif (SIM) (Zhong et al., 2000a;Lin et al., 2006). DAXX SIM is also required for its ability to localize to heterochromatin [ (Kuo et al., 2005) and our unpublished data]. DAXX loss results in embryonic lethality (Michaelson et al., 1999;Garrick et al., 2006), indicating an essential role in embryogenesis. DAXX was originally identified as a CD95-interacting protein in the cytoplasm, affecting CD95-dependent activation of c-Jun-Nterminal kinase (JNK) (Yang et al., 1997). The link with JNK was reported by subsequent studies, which implicated an apoptosis signal-regulating kinase 1 (ASK1)-dependent mechanism for DAXX-mediated JNK activation (Ko et al., 2001;Perlman et al., 2001;Khelifi et al., 2005). However, it is presently unclear whether endogenous DAXX localizes to the cytoplasm in physiological conditions [for discussion see (Lindsay et al., 2009)].
Recent exciting studies have implicated DAXX in direct chromatin regulation via its ability to act as chaperone for a histone 3 (H3) variant called H3.3. Best understood for PTMs of histones, chromatin modification also occurs via incorporation of histone variants. Unlike canonical H3, H3.3 can be loaded on DNA in a replication-independent manner. H3.3 is believed to be an important carrier of epigenetic information (Szenker et al., 2011). H3.3 is encoded by two genes, H3F3A and H3F3B. H3F3A inactivation via gene trap leads to perinatal lethality (Couldrey et al., 1999), whereas H3F3B knockout embryos display partial embryonic lethality and infertility in surviving homozygous animals (Bush et al., 2013). DAXX acts as a H3.3 chaperone as part of a nuclear complex containing the α-thalassemia and mental retardation X-linked (ATRX) DNA helicase (Drané et al., 2010;Lewis et al., 2010;Dawson and Kouzarides, 2012). ATRX, like DAXX, can associate with PML-NBs (Bérubé et al., 2007) and has been proposed to contribute to DAXX/H3.3 targeting to chromatin, potentially via its ability to bind histone repressive marks in heterochromatin and G-rich DNA repeats Law et al., 2010;Iwase et al., 2011). DAXX and ATRX mediate H3.3 loading onto telomeres and pericentric heterochromatin, with implications for transcription of telomeric and centromeric repeats (Drané et al., 2010;Goldberg et al., 2010;Lewis et al., 2010). Furthermore, H3.3 loading at telomeres has been suggested to play an important role in maintaining chromatin structure (Wong et al., 2009(Wong et al., , 2010. Loading of H3.3 may affect transcription also at euchromatin, as it is enriched at transcriptionally active genes and has been proposed to regulate epigenetic memory of transcriptional competence (Henikoff, 2008;Ng and Gurdon, 2008;Jullien et al., 2012). Loading of H3.3 at transcription start site (TSS) and body of active gene is dependent on the chaperone HIRA . However, H3.3 is also enriched at regulatory regions not immediately adjacent to TSS (Mito et al., 2007;Jin et al., 2009;Goldberg et al., 2010). Deposition at those sites is in part HIRA-independent , but the histone chaperone involved was not known. In this respect, our recent work implicated DAXX in the regulation of H3.3 deposition at promoters and enhancers of immediate early genes (IEGs) in neurons (Michod et al., 2012), thus demonstrating that DAXX is one of the previously unidentified H3.3 chaperones at regulatory regions (Michod et al., 2012). Work from Genevieve Almouzni, John Gurdon, and Peter Adams groups Jullien et al., 2012;Pchelintsev et al., 2013) showed that HIRA could also mediate H3.3 loading at regulatory regions. Notably, DAXX-dependent H3.3 deposition correlates with its ability to Frontiers in Oncology | Molecular and Cellular Oncology modulate transcription, thus suggesting a link between H3.3 loading and transcription (Michod et al., 2012). Among the IEGs analyzed, only a subset of them displayed dependence on DAXX for H3.3 loading and transcriptional activation, thus suggesting that other H3.3 chaperones are involved in IEG regulation in neurons, such HIRA or DEK (Sawatsubashi et al., 2010;Jullien et al., 2012). Finally, both DAXX-dependent loading and transcription are controlled by a calcium-dependent phosphorylation switch affecting serine 669 (S669) (Michod et al., 2012), which is a target of homeodomain-interacting protein kinases (HIPKs) (Hofmann et al., 2003) (Figure 1). In particular, upon neuronal activation DAXX S669 is dephosphorylated by the calcium-dependent phosphatase calcineurin (CaN), leading to increased loading activity and transcription (Michod et al., 2012). Although H3.3 is preferentially found associated with hypophosphorylated DAXX, S669 dephosphorylation does not affect DAXX affinity for H3.3, suggesting that when in complex with H3.3 DAXX is either more effectively dephosphorylated or its HIPK-dependent phosphorylation is inhibited. Since CaN is believed to be mainly cytosolic, it is most likely that DAXX dephosphorylation occurs outside the nucleus, whereas one could speculate that its HIPK-dependent phosphorylation could be nuclear. It is important to note that DAXX S669 phosphorylation status does not affect its chromatin association. One could speculate that HIPKs could associate with DAXX on chromatin and inhibit its chaperone activity. Interestingly, the HIRA chaperone complex contains the CaN-binding protein CABIN1, a CaN regulator (Rai et al., 2011), suggesting that calcium-dependent signaling could regulate multiple H3.3 chaperone complexes.
Together, these studies suggest that DAXX-mediated loading of H3.3 at regulatory regions may affect transcription. One could argue that DAXX ability to regulate transcription could be H3.3-independent, for instance via its interaction with HDAC-II (Hollenbach et al., 2002), CBP (Kuo et al., 2005), or Dnmt1 (Puto and Reed, 2008;Zhang et al., 2013). However, DAXX loss fails to promote any significant changes in histone acetylation or DNA methylation at the BDNF Exon IV promoter (Michod et al., 2012). Another possibility is that DAXX regulates key transcription factors involved in activity-dependent IEG induction, in particular CREB and MEF2 (Hong et al., 2005;Flavell et al., 2008). For instance, DAXX has been recently reported to repress CREB transcriptional activity via direct interaction via its C-terminus and HIPK2 is known to phosphorylate CREB (Sakamoto et al., 2010). However, based on these findings DAXX loss would result in increased CREB-mediated transcription, opposite to what we have observed in neurons (Michod et al., 2012). To incontrovertibly assess the role of DAXX-mediated H3.3 loading in transcription, one should test the ability of recently described DAXX mutants impaired in histone binding (Eustermann et al., 2011;Elsasser et al., 2012) to rescue transcriptional defects observed in DAXX-deficient cells (Michod et al., 2012). It is important to note that histone chaperones are often components of chromatin remodeling complexes, such as the nucleosome remodeling and deacetylation (NuRD) and Polycomb complexes (Lai and Wade, 2011;Margueron and Reinberg, 2011). Thus, DAXX could load H3.3 while being part of a FIGURE 1 | DAXX chaperone activity is regulated by calcium-dependent signaling in neural cells. Neuronal activation leads to calcium (Ca 2+ ) entry and activation of the Ca 2+ -dependent phosphatase, calcineurin (CaN). In turn, CaN dephosphorylates DAXX at serine 669, leading to increased H3.3 loading at selected immediate early genes (IEGs). DAXX loss not only affects H3.3 loading, but also leads to impaired induction of IEGs, thus suggesting that H3.3 loading may modulate IEG transcriptional induction.
larger chromatin-remodeling complex containing histone-and/or DNA-modifying enzymes, which could cooperate with histone loading in promoting chromatin modification and transcriptional changes.
What is the evidence for a role of H3.3 loading in transcriptional regulation? Our work and other studies discussed above suggest a potential role for H3.3 in transcription and/or regulation of the transcriptional state (Ng and Gurdon, 2008;Jullien et al., 2012;Michod et al., 2012). Furthermore, H3.3 downregulation in B cells results in transcriptional repression at the Igh locus (Aida et al., 2013). In contrast, loss of H3f3b does not dramatically alter the transcriptome of mouse embryo fibroblasts (Bush et al., 2013) and HIRA deficiency in ES cells has limited impact on transcription . It is plausible that the impact of H3.3 loading on transcription could depend on the cell type, developmental stage, and environmental cues (e.g., neuronal activation, B cell differentiation stimuli, etc). In this respect, the concept of H3.3 deposition contributing to transcriptional memory at selected loci during development is particularly intriguing. Overall, there is accumulating evidence that H3.3 might regulate transcription. A key question is how. Most active genes are associated with variant nucleosomes containing H3.3 and the histone 2A variant H2A.z, which promotes nucleosome instability (Jin et al., 2009). These properties of H3.3 may explain its enrichment at bivalent genes in flies (Henikoff, 2008) and, along with H2A.z, in mammalian cells (Creyghton et al., 2008;Goldberg et al., 2010). Many genes involved in brain development and postnatal neurogenesis are www.frontiersin.org characterized by bivalent chromatin (Valk-Lingbeek et al., 2004;Marino, 2005;Lim et al., 2009;Sawarkar and Paro, 2010;Schuettengruber et al., 2011;Dawson and Kouzarides, 2012). Bivalency is defined by the presence of both active and repressive histone marks, which keep genes in a poised state. These are the repressive mark trimethylated H3 lysine 27 (H3K27me3) and the active mark H3K4me3, which are generated via the action of Polycomb Repressive Complex 2 (PRC2) and Trithorax complexes, respectively. H3K27me3 has a dual role in amplification of PRC2-mediated K27 methylation and recruitment of the Polycomb repressive complex 1 (PRC1), which mediates ubiquitylation of H2A (another repressive mark) (Wang et al., 2004). H3K4me3 is found associated with several chromatin-remodelers, as well as H3K27 demethylases (Dawson and Kouzarides, 2012). Polycomb group (PcG) and Trithorax group (TrxG) proteins are key regulators of stem cell fate in both the embryonic and postnatal brain (Simon and Kingston, 2009;Margueron and Reinberg, 2011). In particular, the PRC1 component Bmi1 and the TrxG complex component Mll1 are important regulators of neural stem cell self-renewal and neurogenesis (Valk-Lingbeek et al., 2004;Marino, 2005;Lim et al., 2009). Notably, H3.3 is enriched in the H3K4me3 mark (Henikoff, 2008). Furthermore, H2A.z has been proposed to regulate targeting of both PcG and TrxG complexes to chromatin (Hu et al., 2012). Overall, these studies suggest a potential functional involvement of H3.3 and H2A.z loading in regulation of bivalency. In particular, it is conceivable that H3.3 deposition could affect bivalent domains at IEGs in neurons, as potential mechanism for DAXXmediated transcriptional changes (Michod et al., 2012). In general, it is of key importance to generate new genetic systems to better define the molecular function of H3.3 and its impact on fundamental biological processes. In this respect, Jeffrey Mann's group has been involved in generation of new models based on conditional allelic replacement, which bear great promise for advancing our understanding of H3.3 function in vivo .
H3.3 LOADING AND DISEASE PATHOGENESIS
An even greater interest in H3.3 and its chaperones has arisen from the discovery that H3.3 itself, DAXX and ATRX are mutated in human cancer. In this respect, driver heterozygous mutations in the H3F3A gene are found in pediatric glioblastoma multiforme (GBM) (Schwartzentruber et al., 2012;Sturm et al., 2012;Wu et al., 2012) (H3F3B is expressed at much lower levels in neural cells; our unpublished observation). H3.3 is mutated at K27 (K27M) and G34 (G34R or V), with the former found in brainstem tumors of young children (Schwartzentruber et al., 2012;Sturm et al., 2012;Wu et al., 2012) and the latter in the cerebral hemispheres of older children and adolescents (Schwartzentruber et al., 2012;Sturm et al., 2012;Wu et al., 2012). ATRX is mutated in pediatric (Schwartzentruber et al., 2012) and adult GBM (Heaphy et al., 2011), and DAXX in pediatric GBM, albeit very infrequently (Schwartzentruber et al., 2012). ATRX is also found mutated in neuroblastoma, while both DAXX and ATRX are mutated in neuroendocrine tumors of the pancreas (Elsasser et al., 2011;Heaphy et al., 2011;Jiao et al., 2011;Molenaar et al., 2012). Most DAXX and ATRX mutations are mutually exclusive and result in loss of expression (Elsasser et al., 2011;Heaphy et al., 2011;Jiao et al., 2011;Schwartzentruber et al., 2012;Wu et al., 2012), apart from a missense mutation found in acute myeloid leukemia (AML) . It is important to note that pediatric GBM also display mutations of ATRX in the absence of H3.3 mutations, as observed in adult GBM, neuroblastoma and pancreatic tumors, suggesting that alterations in loading of WT H3.3 may per se lead to cancer. H3.3/ATRX-and ATRX-only-mutated GBM tumors often carry p53 mutations, suggesting that loss of p53 tumor suppressive function cooperates with H3.3 and/or ATRX mutations for tumorigenesis. Finally, in pancreatic tumors carrying DAXX mutations, it is conceivable that H3.3 loading could be mediated by other chaperones, thus leading to alterations in its genome-wide distribution, with potential consequences for tumorigenesis. At present, it is unclear what are the expression levels of the H3.3 chaperones HIRA and DEK in pancreatic tumors and other cancers displaying alterations in H3.3 and DAXX/ATRX loading machinery.
The key question is how alterations of H3.3 function can drive/contribute to neoplastic transformation. Analysis of gene expression changes in GBM neoplasms carrying H3.3 mutations showed that H3.3 K27M and G34R/V tumors display distinct transcriptional changes. In this respect, H3.3K27M GBM tumors display deregulation of some PcG targets (Schwartzentruber et al., 2012). Furthermore, mutations of the TrxG component multiple endocrine neoplasia type 1 (MEN1) are found in neuroendocrine pancreatic tumors and is mutually exclusive with DAXX and ATRX mutations, suggesting similar functional roles . Together, these data indicate that alteration of bivalent gene expression may represent one of the mechanisms underlying the transforming role of the H3.3 K27M mutation. It is important to note that deregulation of the machinery controlling the H3K27me3 epigenetic mark has been implicated in pathogenesis of another pediatric brain tumor, medulloblastoma, and adult GBM. In most cases, this leads to increased H3K27me3 (Bruggeman et al., 2007;van Haaften et al., 2009;Jones et al., 2012;Lu et al., 2012;Robinson et al., 2012), via deregulated expression/mutations of H3K27me3 methylases/demethylases, inactivation of chromatin-remodeling factors or metabolic enzymes (Bruggeman et al., 2007;van Haaften et al., 2009;Jones et al., 2012;Lu et al., 2012;Robinson et al., 2012). Increased H3K27me3 in medulloblastoma and adult GBM is expected to lead to increased repression. However, it was unclear what would be the consequences of pediatric GBM mutation of H3.3 at K27 on H3K27me3 and transcription. A very recent study from David Allis group has provided important clues (Lewis et al., 2013). In particular, this work shows that the presence of H3.3K27M negatively affects PRC2-mediated amplification of K27 trimethylation in cis and trans. This occurs via inhibition of the enzymatic activity of the PRC2 methyltransferase EZH2. Interestingly, introduction of K-to-M mutations at other known methylated H3 residues (H3K9 and H3K36) has similar negative effects on enzymatic activity of the dedicated methyltransferases. Together, these data suggest that H3.3K27M acts at least in part as a gain-of-function mutant. Notably, the gain-of-function effect of the K-to-M mutation is not restricted to H3.3, as also canonical H3 is found mutated in GBM and H3K27M displays similar EZH2 inhibitory activity. What would be the effect of H3.3K27M loading on chromatin? Some clues came from another recent study, which showed that H3.3K27M is associated with loss of H3K27me3 at many loci , as expected based on David Allis work. However, several genomic regions gained this mark along with H3K4me3, leading to repression of genes involved in cancer development, such as the tumour suppressor p16INK4a . Thus, the consequences of this mutation on chromatin structure/modifications are more complex than previously thought.
How would G34R/V mutations function? Notably, H3.3 G34 mutations almost invariably coexist with ATRX mutations (Henikoff, 2008;Schwartzentruber et al., 2012) and are associated with the alternative lengthening of telomeres (ALT) mechanism (Heaphy et al., 2011;Bower et al., 2012;Liu et al., 2012;Lovejoy et al., 2012), a recombinogenic mechanism for telomere elongation. ALT cells contain a modified PML-NB called ALT-associated PML-NB (APB), which we and others have implicated in telomeric damage response and potentially telomere recombination (Stagno D'Alcontres et al., 2007;Lovejoy et al., 2012). Thus, it is conceivable that H3.3 G34 mutations could lead to ATRX loss and alteration of telomere maintenance mechanisms, thus in turn contributing to transformation and GBM development. Recent work from Chris Jones laboratory shows that H3.3G34 mutations alter transcription and enrichment of the H3K36me3 active mark at a number of developmentally regulated genes linked to forebrain development and stem cell self-renewal (Bjerke et al., 2013). Remarkably, these mutations lead to increased expression of the MYCN protooncogene (Bjerke et al., 2013), suggesting a potential link between histone variant loading and MYCN-mediated transformation.
Are H3.3 mutations transforming per se? It is important to note that H3.3K27M is unable to promote glioma even in a p53 null background (Lewis et al., 2013), suggesting that either other genetic events are needed or, more likely, the cell targeted in this model is not the correct one. In this respect, it is possible that the type of progenitor and/or the developmental stage are crucial for transformation by H3.3 mutant proteins. Definite answers to these outstanding questions will be achieved only upon development of more sophisticated genetic models, which are currently in the pipeline in many laboratories in the field.
Alterations of the H3.3 chaperone complex might extend to non-neoplastic conditions, such as the ATR-X syndrome, which is driven by ATRX mutations (Gibbons and Higgs, 2000). Furthermore, ATRX interacts with MeCP2 and cohesin, mutated in the Rett and Cornelia de Lange (CdLS) syndromes, respectively (Kernohan et al., 2010). It is presently unknown whether alterations of H3.3 loading may participate in the pathogenesis of these conditions.
ROLE OF PML AND PML-RARα IN REGULATION OF H3.3 LOADING?
As mentioned earlier, DAXX localizes to PML-NB via a SUMOdependent mechanism involving its SIM (Figure 2). Thus, it is conceivable that PML could regulate DAXX function by altering its subcellular localization. In this respect, PML was reported to negatively regulate DAXX repressive function through recruitment to PML-NBs (Li et al., 2000). However, PML role in regulation of H3.3 deposition is still unclear. Clues have come from a recent study reporting H3.3/H4 dimers localization to PML-NBs (Figure 2) (Delbarre et al., 2012). The authors of this study report that exogenously expressed H3.3 along with H4 and DAXX localizes to PML-NBs in G1-enriched mesenchymal stem cells, thus potentially regulating the nucleoplasmic pool of H3.3. Although exogenous H3.1 and H3.2 failed to localize to PML-NBs, it is still possible that H3.3 could be targeted to PML-NBs only when expressed at supraphysiological levels. One obvious question is whether PML regulates incorporation of endogenous H3.3 into chromatin. In this respect, it could hypothesized that PML-mediated localization of H3.3 and its chaperones to PML-NBs inhibits H3.3 loading into chromatin (Delbarre et al., 2012). FIGURE 2 | DAXX associates with both PML and PML-RARα?. DAXX and H3.3/H4 dimers are found at PML-NBs, suggesting that PML may regulate H3.3 loading. Furthermore, DAXX association with PML-RARα is required for transformation in vitro. Although it is presently unknown whether H3.3 also associates with PML-RARα, it is conceivable that PML-RARα via interaction with DAXX could modulate H3.3 loading. While DAXX is recruited to PML-NBs via a SUMO-interacting motif (SIM)-dependent mechanism, it is still unclear whether a similar mechanism is implicated in its targeting to chromatin (see question mark in middle panel). DAXX also interacts with DNA methyltransferase 1 (Dnmt1) and targets its activity to chromatin, suggesting that DAXX coordinates multiple epigenetic modifications.
Another recent report implicated PML and PML-NBs in regulation of ATRX and H3.3 association at telomeres during S-phase in ES cells (Chang et al., 2013). As a result, PML downregulation caused telomere dysfunction and altered telomeric enrichment of selected epigenetic marks. Although it cannot be excluded that PML-NBs are sites for localization of extrachromosomal telomeric DNA, this work suggests that PML is either directly or indirectly involved in regulation of telomere replication/maintenance potentially via an H3.3/ATRX-dependent mechanism. A question arising from this study is whether PML role in regulating H3.3 association with telomeric DNA is maintained also in ALT cells, which are often ATRX-negative, in particular in G34R/V, ATRX-deficient GBM cells. Finally, PML could play a more indirect role in regulation of H3.3 loading via modulation of DAXX PTMs, in particular its phosphorylation at S669 (Michod et al., 2012). In this respect, while the S669 phosphatase CaN is mainly cytosolic, the S669 kinase HIPK2 has been shown to localize to PML-NBs (Krieghoff-Henning and Hofmann, 2008), suggesting that localization of HIPK2 and DAXX to PML-NBs could affect its phosphorylation and as a result its chaperone activity.
Promyelocytic leukemia-mediated regulation of DAXX function could be particular relevant in the central nervous system (CNS), given the roles played by the two proteins in this context. In this respect, our previous work has shown that PML is expressed in neural progenitor/stem cells (NPCs) in the developing neocortex as well as in postnatal neurogenic niches [(Regad et al., 2009;Salomoni and Betts-Henderson, 2011) and our unpublished data]. As a result, PML loss leads to alterations of corticogenesis and smaller brains (Regad et al., 2009), as well as aberrant postnatal neurogenesis (our unpublished data). While PML and DAXX are expressed in the germinal area of developing neocortex and in NPCs within adult neurogenic niches, PML expression is downregulated in postmitotic neuroblasts and neurons (Regad et al., 2009;Michod et al., 2012) (and our unpublished data). It is therefore possible that PML could regulate DAXX chaperone function in NPCs, thus potentially affecting epigenetic changes driven by H3.3 loading. In turn, this could have implications for cell fate regulation and neurogenesis. In contrast, PML-mediated control of DAXX function would be absent in differentiated neurons.
What about the oncogenic form of PML, PML-RARα? SUMOylation within the PML moiety of PML-RARα is required for transformation (Zhu et al., 2005) and is responsible for recruitment of DAXX via its C-terminal SIM (Figure 2; see also accompanying review articles by Hugues de The in this issue of Frontiers). Mutation of the critical SUMOylation site within PML-RARα (K160R) releases DAXX and results in defective differentiation block (Zhu et al., 2005). In contrast, fusion of K160R PML-RARα with DAXX restores its transforming capacity (Zhu et al., 2005). A subsequent study from de The's laboratory reported that a DAXX-RARα chimera carrying a multimerization domain can repress RA-dependent transcription, inhibit differentiation, and promote transformation. In contrast, a multimerizationprone RARα mutant, despite inhibiting RA-dependent transcription and differentiation, was unable to transform hematopoietic progenitors (Zhou et al., 2006), suggesting that molecular determinants of the differentiation block and transformation may not be identical. However, a separate study by Eric So's group showed that fusion of the FKBP oligodimerisation sequence with RARα can promote transformation (Kwok et al., 2006). The presence of a SUMOylation site (Rodriguez et al., 2001) in FKBP (which would in principle still recruit DAXX) and non-identical experimental settings may explain the different results. Overall, these findings suggest that SUMO-dependent PML-RARα association with DAXX contributes to block of differentiation and transformation.
PML-RARα has been reported to associate with epigenetic regulators and mediate epigenetic changes: (i) PML-RARα multimerization properties lead to increased density of corepressors and chromatin remodeling factors at retinoic acid (RA) target genes (Lin and Evans, 2000;Minucci et al., 2000;de The and Chen, 2010); (ii) PML-RARα interacts with DNA methyltransferases, thus leading to DNA methylation of a number of RA target genes (Di Croce et al., 2002). Interaction with the H3.3 chaperone DAXX could provide PML-RARα with additional weaponry to promote epigenetic changes. One could argue that H3.3 is mainly associated with active genes, not with repression. In this respect, it is important to note that there is little in vivo evidence that repression of RA target genes is sufficient to initiate APL (de The and Chen, 2010), and it is now recognized that PML-RARα also possess gain-of-function properties through its ability to bind target sequences that are not recognized by the normal RARα-RXRα heterodimers (de The and Chen, 2010). Among these sites, there are many genes controlling stem cell self-renewal and myeloid differentiation (Purton et al., 2006;Viale et al., 2009;de The and Chen, 2010). Finally, mouse APL leukemias express high levels of the IEG c-Fos (Yuan et al., 2007), which we have shown to be regulated by DAXX in neural cells (Michod et al., 2012). Overall, the existing literature suggests that PML-RARα promotes transformation through a combination of dominant-negative and gain-of-function activities. Interaction with DAXX could contribute to the latter. Considering H3.3 enrichment at bivalent genes and the alteration of PcG/TrxG activities in hematopoietic tumors (Mills, 2010;Muntean and Hess, 2012), it could be hypothesized that DAXX-mediated H3.3 loading could affect bivalent gene expression in APL cells. In this respect, there is a functional crosstalk between RA-dependent transcription and the PcG machinery, as many homeobox genes contain RA responsive elements (RARE) (Mainguy et al., 2003;Ringrose and Paro, 2004) and RA targets are also PcG targets (e.g., CYP26a1 and RARβ). Notably, PML-RARα-regulated loci display increased H3K4me3 (Hoemme et al., 2008), thus suggesting that PML-RARα may regulate this epigenetic mark. As H3.3 is enriched in H3K4me3, it is conceivable that PML-RARα could direct DAXX-mediated H3.3 deposition at a number of its gene targets. As H3K4me3 is lost from a number of bivalent loci during differentiation of human hematopoietic stem cells , it is possible that PML-RARα could reestablish/maintain H3K4me3 at these loci through modification of TrxG complex activity and/or loading of H3.3. PML-RARα is also found in complex with PRC2 components (Villa et al., 2007;Martens et al., 2010), suggesting it could affect H3K27 trimethylation. However, subsequent genome-wide analysis showed that RA treatment fails to significantly affect H3K27me3 (Martens et al., 2010). More work is needed to define PML-RARα role in regulation of bivalent chromatin and the contribution of H3.3 loading to Frontiers in Oncology | Molecular and Cellular Oncology FIGURE 3 | H3.3 is mutated in human cancer. Driver H3.3 mutations are found in pediatric glioblastoma (pGBM), suggesting that alterations of H3.3 function may lead to brain cancer. It is possible that PML via its ability to recruit DAXX to PML-NBs could regulate loading of mutant H3.3 proteins, thus potentially affecting brain tumorigenesis.
its epigenetic activity. It is important to note that a DAXX missense mutation of unknown functional consequences has been identified in AML , suggesting that alterations of H3.3 loading may occur in non-APL hematopoietic neoplasms.
Although it is presently unknown whether H3.3 is loaded as SAHF, these data suggest that PML and its oncogenic version may regulate H3.3 loading by acting on multiple chaperones.
CONCLUSION
The discovery of DAXX chaperone function provides the fascinating possibility that PML and its oncogenic form PML-RARα could promote epigenetic changes in part via regulation of H3.3 loading. In this respect, PML-RARα could utilize DAXX chaperone activity to modify the epigenetic and transcriptional status of its target genes, as part of its gain-of-function activities in transformation of hematopoietic progenitors. In contrast, PML could play a more indirect role in regulation of loading of wild-type H3.3 as well as its GBM-associated mutants by controlling the availability of soluble H3.3/H4 dimers and/or DAXX PTMs, with potential implications for cell fate regulation and transformation (Figure 3). In this respect, pharmacological degradation of PML via ATO treatment could represent a strategy to affect H3.3 loading in cancer cells. More broadly, an increased understanding of H3.3 loading, its function and regulatory pathways has the potential to lead to a paradigm shift in the field of cancer epigenetics.
ACKNOWLEDGMENTS
I would like to thank David Michod (Institute of Child Health, UCL), the members of my laboratory. I also would like to thank my numerous collaborators, in particular Pierluigi Nicotera (DZNE, Bonn, Germany), for support and stimulating scientific discussion. The funding bodies supporting H3.3-and PML-related work in my laboratory are: the Medical Research Council (MRC), The Brain Tumor Charity (formerly known as Samantha Dickson Brain Tumor Trust), the Association for International Cancer Research (AICR) and Cancer Research UK (CRUK). Finally, a special thank to the Brian Cross family for their generous support of my laboratory through The Brain Tumor Charity. | 7,824.6 | 2013-06-07T00:00:00.000 | [
"Biology"
] |
Effective theory analysis for vector-like quark model
We study a model with a down-type SU(2) singlet vector-like quark (VLQ) as a minimal extension of the standard model (SM). In this model, flavor changing neutral currents (FCNCs) arise at tree level and the unitarity of the $3\times 3$ Cabibbo-Kobayashi-Maskawa (CKM) matrix does not hold. In this paper, we constrain the FCNC coupling from $b\rightarrow s$ transitions, especially $B_s\rightarrow \mu^+\mu^-$ and $\bar{B}\rightarrow X_s\gamma$ processes. In order to analyze these processes, we derive an effective Lagrangian which is valid below the electroweak symmetry breaking scale. For this purpose, we first integrate out the VLQ field and derive an effective theory by matching Wilson coefficients up to one-loop level. Using the effective theory, we construct the effective Lagrangian for $b\rightarrow s\gamma^{(*)}$. It includes the effects of the SM quarks and the violation of the CKM unitarity. We show the constraints on the magnitude of the FCNC coupling and its phase by taking account of the current experimental data on $\Delta M_{B_s}$, $\mathrm{Br}[B_s\rightarrow\mu^+\mu^-]$, $\mathrm{Br}[\bar{B}\rightarrow X_s\gamma]$ and CKM matrix elements as well as theoretical uncertainties. We find that the constraint from the $\mathrm{Br}[B_s\rightarrow\mu^+\mu^-]$ is more stringent than that from the $\mathrm{Br}[\bar{B}\rightarrow X_s\gamma$]. We also obtain the bound for the mass of the VLQ and the strength of the Yukawa couplings related to the FCNC coupling of $b\rightarrow s$ transition. Using the CKM elements which satisfy above constraints, we show how the unitarity is violated on the complex plane.
Introduction
After the discovery of the Glashow-Iliopoulos-Maiani (GIM) mechanism [1], this suppression mechanism of the flavor changing neutral current (FCNC) is firmly verified in K, D and B meson systems. The unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix [2] is also verified. As investigated in Refs. [3,4], the CKM unitarity is consistent with current data, which characterizes one of the most successful aspects in the standard model (SM).
As an extension of the quark sector, vector-like quark (VLQ) is considered. Here, VLQ is a quark whose representations in gauge group for left-and right-handed components are the same. As models including VLQs, some new physics scenarios are considered in the literature. Such vector-like extensions of the SM include the universal seesaw model [5]. This scenario introduces gauge singlet vector-like fermions to explain hierarchical structure of fermion masses. Furthermore, in the context of left-right symmetry, the seesaw mechanism induced by vector-like fermions gives solution to the strong CP problem [6].
The model with VLQ leads to the rich phenomenology which can be testable in experiments [7]- [12]. In particular, FCNCs induced by VLQ give rise to deviation from the SM prediction. Furthermore, the unitary relation of the CKM matrix, e.g., V * ub V us + V * cb V cs + V * tb V ts = 0, no longer holds. The unitarity triangle is modified as a quadrangle due to correction which arises from FCNCs. On the other hand, the direct detection of the VLQ is under way in collider experiments [13]. Then, the prediction and constraint on the mass and couplings of VLQ from the flavor observables provide them with important information.
In this paper, a model including one additional down-type VLQ is discussed. Integrating out VLQ, one can find that tree level FCNC arises from interaction with Z boson and Higgs bosons. On the basis of the effective field theory (EFT), we derive loop functions which correspond to the Inami-Lim functions in the SM. In order to examine the FCNC, phenomenological analysis is carried out for b → s transition. Specifically, experimental data ofB → X s γ, B s → µ + µ − and the mass difference in B s −B s system are utilized to constrain the model. The constraints on the magnitude of the FCNC coupling and its phase are shown by taking account of the current experimental data as well as theoretical uncertainties.
This paper is organized as follows: In Sec. 2, we integrate out down-type VLQ and determine Wilson coefficients of the EFT up to one-loop level. Loop functions are summarized in Sec. 3. In Sec. 4, the phenomenological analysis for b → s transition is given. Section 5 is devoted to summary and discussion.
Integrating out VLQ fields
In this section, we derive a low energy effective Lagrangian by integrating out VLQ fields. For this purpose, we show a full Lagrangian which includes one down-type SU (2) singlet VLQ in addition to the SM quarks. We assume that the mass of the VLQ is much larger than the electroweak (EW) scale. Then the Lagrangian L Full which is invariant under SU (3) where i = 1, 2, 3 denotes the indices for generations. d 4 L,R are VLQs. y u and y d represent Yukawa couplings of up-type and down-type quarks respectively. The matrix for Yukawa coupling of up-type quarks is taken to be real diagonal, while that of down-type quarks is a 3 × 4 matrix. M 4 denotes the mass of VLQ. Note that the mixing term between the lefthanded VLQ d 4 L and right-handed SM down-type quarks d i R is allowed in general. However we can remove the mixing term by the rotation of the down-type quarks. Hence we can take the Lagrangian as Eq. (1). The covariant derivatives are defined as follows: where λ a , τ I and Y X are Gell-Mann matrices, Pauli matrices and U(1) Y hypercharge of a field X (X = q L , u R , d R ) respectively.
Matching full theory and effective theory
In order to obtain the higher dimensional operators which represent the effect of the VLQ in the energy scale between M 4 and the EW scale, we integrate out the VLQ fields d 4 L,R in Eq. (1). At first, we perform tree level matching at VLQ mass scale M 4 . In Fig. 1, we show the Feynman diagram (left figure) for scattering of a pair of quark and anti-quark into a Higgs pair (q i q j → φφ † ), in which the VLQ is exchanged. We assume that the external particles have momenta much smaller than the mass of the VLQ. Then the amplitude of the left figure can be reproduced up to O(M −2 4 ) accuracy by computing the Feynman diagram of the right figure with the following low energy effective Lagrangian [11,12,14,15,16], Figure 1: The Feynman diagrams for scattering of a pair of quark and anti-quark into a Higgs pair (q i q j → φφ † ). The left figure shows the diagram of the full theory in which the VLQ is exchanged, while the right figure shows the diagram of the effective theory where the VLQ is absent and already integrated out.
where i, j = 1, 2, 3. The effective Lagrangian is written in terms of dimension-six operator and its coefficient is determined so that it reproduces the amplitude of the left figure in Next we consider one-loop level matching between the full theory and the effective theory to obtain effective interactions which contribute to the radiative transition of the quarks. The procedure is as follows: (i) We calculate the amplitudes of the Feynman diagrams for the decay of q i L into q j L and one of gauge fields B, W I or G a at one-loop level (See the top figures in Fig. 2.). These diagrams include the VLQ in the internal line. In this calculation, we renormalize the amplitudes with the MS scheme.
(ii) We calculate the same transitions as those of the procedure (i) with the effective operator in Eq. (5) obtained by tree level matching (See the bottom-left and bottomcenter figures in Fig. 2.). In this calculation, we also renormalize the amplitudes with the MS scheme. Figure 2: The Feynman diagrams for the decay of q i L into q j L and one of gauge fields B, W I or G a at one-loop level with the full theory (top figures) and the effective theory (bottom-left and bottom-center figures). The circular marks denote the tree level effective operator and the square mark denotes the new effective operators.
(iii) We introduce new effective operators and determine their coefficients so that the renormalized amplitudes in procedure (ii) match with those of the full theory computed in procedure (i) (See the bottom-right figure in Fig. 2.).
We can obtain the following effective Lagrangian L one−loop Eff at one-loop level: where To derive the above expressions, we use the equation of motion in the leading order of the expansion with respect to 1/M 2 4 , namely that of the SM. In Eqs. (9)-(11), F µν B , W Iµν , and G aµν are the field strength of U(1) Y , SU(2), and SU(3) c respectively. The matching scale µ is typically taken to be VLQ mass scale M 4 . The lepton doublet is denoted by l L . The effective Lagrangians L B Eff , L W Eff and L G Eff contain the effective operators which contribute to the decay of q i L → q j L B, W I , and G a , respectively. Since we use the equation of motions, the effective Lagrangians also contain the operators such as 4-Fermi operators which do not contribute to these processes.
Finally, the whole Lagrangian L Eff obtained by integrating out the VLQ fields is given as: where L tree Eff is given in Eq. (5) or Eq. (6) and L one−loop Eff is given in Eqs. (8)- (11). In Eq. (12), the kinetic term of the SM quark doublet q L is where the term Z ji (µ) comes from the first term in Eq. (8). To rewrite the kinetic term into a canonical form, we perform the following rescaling of q L , Then the kinetic term of the quark doublet becomes In terms of the rescaled fields introduced in Eq. (16), the Yukawa interactions in Eq. (13) are changed: where we redefine the SM Yukawa coupling as The rescaling of the field Eq. (16) is absorbed into the Yukawa couplings. After the diagonalization of the mass matrices based on these couplings, it contributes to the CKM matrix as one-loop corrections. Since we only consider the charged current interaction in one-loop diagrams in the next section, these corrections lead to two-loop order effects and they are neglected.
The tree level effective operator in Eq. (5) is also changed by the rescaling in Eq. (16): where we redefine the Yukawa coupling between the SM quarks and the VLQ as As we will see in the next subsection, this redefinition of the Yukawa coupling in Eq. (22) adds O( 1 ) corrections to the CKM matrix and the FCNC coupling. In the next section we will take into account only leading order contributions in 1/M 2 4 , and these corrections are neglected.
For the one-loop effective Lagrangian, the rescaling in Eq. (16) leads to two-loop order corrections and we can simply take q L q L in the one-loop effective Lagrangian.
Electroweak symmetry breaking
In this subsection, we derive the Lagrangian for the broken phase of the SM gauge symmetry. We substitute the following forms for the Higgs doublet in the Lagrangian Eqs. (6)- (11): Here we do not take into account the running effect from the VLQ mass scale to the EW scale for the coefficients in the effective interactions Eqs. (6)- (11). For the effective Lagrangian L tree Eff in Eq. (6), we obtain where the ellipsis represents the terms including more than four fields, h ji d represents y j4 d y i4 * d and c w (s w ) denotes cosine (sine) of the weak mixing angle θ w . The mass matrices of the up-type and down-type quarks which correspond to L SM in Eq. (13) are denoted by m u,d ≡ vy u,d / √ 2. Adding the tree level effective Lagrangian Eq. (25) to the SM Lagrangian L SM in Eq. (13), the mass matrix of the down-type quarks changes into We diagonalize this mass matrix. At first, we introduce 3 × 3 unitary matrices K L and K R . These unitary matrices diagonalizes the matrix m d : where the prime indicates the mass basis of the SM. In this mass basis, the mass matrix of the down-type quarks changes into where The mass matrix in Eq. (27) is not diagonal. In order to diagonalize the mass matrix including contribution of O(v 2 /M 2 4 ), we introduce unitary matrices V L and V R , where the double prime denotes the mass basis of the model with VLQ. The physical masses of the down-type quarks are denoted by m p d = (m d , m s , m b ). The mixing angles of these unitary matrices are the order of O(v 2 /M 2 4 ). Hereafter we omit the double prime on the quark fields of the mass basis and h d denotes the h d in the right-hand side of Eq. (28). Finally, we obtain the following Lagrangian after the transformation in Eqs. (26) and (29), where the ellipsis represents the terms which contain more than four fields. Each part of the Lagrangians is given below: In Eqs. (31)-(37), L and R denote the chiral projection operators, L ≡ 1−γ 5 2 , R ≡ 1+γ 5 2 . The electromagnetic charge of up-type and down-type quarks are denoted by Q u and Q d respectively. The 3 × 3 CKM matrix V CKM is defined as, The FCNCs arise from the 3 × 3 non-diagonal matrix Z NC in the Z, h and χ 0 interactions in Eqs. (34), (36) and (37). The matrix Z N C in the neutral currents is defined as follows: Using Eqs. (38) and (39), we obtain the relation between the CKM matrix V CKM and the matrix Equation (40) shows that the unitarity of the CKM matrix for the three generations does not hold due to the deviation from the unit matrix of the matrix Z NC in Eq. (39). Taking the limit of M 4 → ∞, the unitarity relation is restored.
Next we rewrite the one-loop level effective Lagrangian Eqs. (8)- (11) in terms of the mass basis defined in Eq. (29). Below we write the part of the dipole operators and omit the other parts of the effective Lagrangian: where the field strength Z µν , F µν A and W ±µν are defined as, respectively. Note that the coefficient of the photon dipole operator with the down-type quarks is consistent with the case of the full theory calculation up to O(M −2 4 ) [17]. 3 Effective Lagrangian for ∆B = 1, 2 and b → sγ ( * ) processes In order to analyze the B meson system, we derive the effective Lagrangian for ∆B = 1, 2 and b → sγ ( * ) processes in the model with VLQ. Here we focus on contributions derived from the effective Lagrangian in Eq. (30). There are three sources to the effective Lagrangian. The first contribution is the same as the case of the SM. The second contribution corresponds to the diagrams which include the FCNC couplings. The third contribution comes from the violation of the CKM unitarity. In the following computations, we use the 't Hooft-Feynman gauge.
∆B = 1 process
At first we consider the ∆B = 1 process to calculate the branching ratio of the B s → µ + µ − process in the next section. The diagrams which contribute to the effective Lagrangian up to O(Z NC ) are shown in Fig. 3. Thebs → µ + µ − process occurs at tree level in the model with VLQ since there is the tree level Z FCNC among the down-type quarks. In one-loop level, the contribution comes from the Feynman diagrams in Fig. 3(a) and Fig. 3(b) which are also present in the SM. However these amplitudes include the additional contributions due to the violation of the CKM unitarity. Since these contributions are suppressed by the loop factor e 2 /(16π 2 ) compared with the contribution of the tree diagram in Fig. 3(c), we neglect the contribution from the violation of the CKM unitarity in the computation of thebs → µ + µ − process. Then the effective Lagrangian for the bs → µ + µ − is given as follows: where λ t bs ≡ V tb * CKM V ts CKM , and α em = e 2 /(4π) denote the fine structure constant of the electromagnetic interaction. The Inami-Lim function Y 0 (x t ) is [18,19], where The first term in Eq. (44) comes from the diagrams in Fig. 3(a) and Fig. 3(b) with the CKM unitarity relation for the SM ( i=u,c,t λ i bs = 0), that is the SM contribution. The second term of Eq. (44) comes from the diagram in Fig. 3(c), so this term is the new contribution in the model with VLQ.
∆B = 2 process
Next we show the effective Lagrangian for ∆B = 2 process in order to compute the mass difference of B s meson in the later section. In Refs. [20]- [23], ∆B = 2 process was computed up to O(Z 2 NC ) and O(Z NC · α em /(4π)). The diagrams which contribute to the ∆B = 2 process are given in Fig. 4. From the diagrams in Fig. 4, we obtain the effective Lagrangian for bs ↔ sb process as [20]- [23]: where is the Inami-Lim function [18]. Also Y 0 (x t ) is given in Eq. (45). The first term in Eq. (46) comes from the diagram in Fig. 4(a) with the CKM unitarity relation. The second term in Eq. (46) is obtained from the violation of the CKM unitarity in the diagram in Fig. 4(a) in addition to the contribution from the diagram in Fig. 4(b). The CKM unitarity relation is used for the one-loop Z FCNC vertex in Fig. 4 ) contribution is neglected. The third term comes from the diagram in Fig. 4(c). Note that the effective Lagrangian in Eq. (46) contains only the Inami-Lim functions which are gauge-parameter independent [23].
b → sγ ( * ) process
Finally we derive the effective Lagrangian for b → sγ ( * ) process to evaluate theB meson radiative decayB → X s γ. In addition to the contribution from the effective Lagrangian in Eq. (41), the diagrams in Fig. 5 are also contribute to the b → sγ ( * ) process. The effective Lagrangian for b → sγ process was calculated in terms of the full theory [17,24] while the effective Lagrangian for b → sγ * process was not calculated. Here we will give the effective Lagrangian for both b → sγ and b → sγ * in terms of the effective theory.
In the model with VLQ, there are no FCNC by the quark-quark-photon interaction at tree level. Therefore the leading order contributions which contain the FCNC couplings come from the violation of the CKM unitarity in the diagrams in Fig. 5(a) and the one-loop diagram in Fig. 5(b) [17,24]. In order to obtain the effective Lagrangian for b → sγ ( * ) process, we compute the amplitudes of the diagrams in Fig. 5. We introduce several counterterms when we renormalize amplitudes for Figs. 5(a) and 5(b). As mentioned in Ref. [17], the counterterms of the renormalization for the quark fields remove the divergence of the diagrams in Fig. 5(a) with the CKM unitarity and that of the diagram in Fig. 5(b). However these Z, χ 0 γ W ± , χ ± , c ± , t Figure 6: The diagram where photon mixes with Z or χ 0 at one-loop level. c ± denotes the Faddeev-Popov ghost.
counterterms cannot remove all the divergence arising from these diagrams. There still remains the divergence which comes from the violation of the CKM unitarity in Fig. 5(a). Therefore we have to introduce another counterterm. We consider the renormalization for the neutral gauge bosons Z and A [25]. The bare fields Z µ 0 and A µ 0 are related to the renormalized fields as follows: where Z ij , (i, j = Z, A) are the renormalization constants. The divergence coming from the violation of the CKM unitarity in the diagrams in Fig. 5(a) is exactly cancelled by the counterterm given as The renormalization constant √ Z ZA and √ Z AZ are determined by the diagrams in Fig. 6 where Z (or χ 0 ) and photon mix at one-loop level. The finite part of the transition amplitude of the diagram in Fig. 6 contribute to the effective Lagrangian for the b → sγ * process. Finally, we can obtain the effective Lagrangian L Eff (b → sγ) for the on-shell photon and the effective Lagrangian L Eff (b → sγ * ) which vanishes for the on-shell photon as follows: where the indices "CC"denote the contributions from the diagrams in Fig. 5(a) with the CKM unitarity relation, namely the SM contributions. Also indices "uv" and "NC" imply the contributions from the violation of the CKM unitarity and the diagram in Fig. 5(b) which include the neutral current respectively. The index "Mix" indicates the contributions from the Z-photon and χ 0 -photon mixing diagrams. Concretely these effective Lagrangian are obtained as: and The Inami-Lim functions in Eqs. (52) and (55) are given as follows [18]: where the subscripts "u" and "W " indicate the contributions which are proportional to the electromagnetic charge of the up-type quarks and the W boson respectively. The functions F ZZ , F Z and F Z in Eq. (54) are given as 1 , where r p ≡ (m p d /M Z ) 2 and w p ≡ (m p d /M h ) 2 . The functions F 1 , F 2 , and F 3 , come from the diagram Fig. 5(b) where the exchanged particles are Z, χ 0 and h respectively. The functions f ZZ , f Z in Eq. (57) are obtained as follows: where The effective Lagrangians for the b → sg ( * ) process can be obtained by replacing the external 1 The terms linear to Z sb NC in Eq. (54) and the loop functions F Z (r p ) and F Z (r p ) in Eqs. (64), (65) do not agree with the corresponding terms of equations (23), (24) and the loop function F NC 1 (r α ) of Ref. [17].
photon line which attached to quarks with gluon line in Fig. 5. They are obtained as follows: and 4 Analysis of B s -B s mass difference, B s → µ + µ − ,B → X s γ processes and violation of the CKM unitarity In this section, we will make numerical calculations for the mass difference of the B s meson ∆M Bs , the branching ratio of the B s → µ + µ − and the branching ratio of the inclusive radiative decay of theB mesonB → X s γ in the model with VLQ. In addition to these processes, we consider the constraint from Eq. (40). We will use the new physics parameters defined as in the following computations.
B s -B s mass difference
We can obtain the mass difference of the B s meson in the model with VLQ as [20]- [23]: where η B , B s and f Bs represent the QCD factor, the bag parameter of B s meson and the B s meson decay constant respectively. Here we use the QCD correction of the SM. The numerical values for the parameters in Eq. (81) are shown in Table 1. The function ∆ 1 (r sb , θ sb ) is given below, We cannot use the SM value for the product of the CKM matrix elements |λ t sb | in the model with VLQ since the new physics parameters r sb and θ sb affect the determination of the CKM matrix elements. Instead we determine the |λ t sb | by using Eq. (81) in the following computations. Therefore the |λ t sb | is obtained as the function with respect to the new physics parameters r sb and θ sb .
Branching ratio of
The branching ratio of the B s → µ + µ − process in the model with VLQ is given as follows: where η Y is the NLO QCD correction [26,27]. The life time of the B s meson is denoted by τ Bs . These values are shown in Table 1. The function ∆ 2 (r sb , θ sb ) is given below,
Branching ratio ofB → X s γ
TheB meson inclusive radiative decayB → X s γ is governed by the effective Hamiltonian at the b-quark mass scale µ = O(m b ) [26,28], where O i and C i (i = 1 ∼ 6) denote the 4-Fermi operators and their Wilson coefficients respectively. The effective operators O 7γ and O 8G are given by, In the calculation of the branching ratio for theB → X s γ, it is convenient to introduce the so-called "effective coefficients" C (0)ef f i [29,30]. The effective coefficient for the effective operator O 7γ at the scale µ = O(m b ) is given as [26,30,31]: where η = α s (M W )/α s (µ) with α s = g 2 s /(4π) and In Eq. (88), the indices "(0)" mean the leading order contributions. Since we do not take into account the running effect from the VLQ mass scale to the EW scale, we obtain the Wilson coefficients C 8G at the EW scale (taken as M W ) as: where the Wilson coefficients, come from the SM contributions in Eq. (52). The Wilson coefficients are obtained from the VLQ contributions in Eqs. (41) and (42). The Wilson coefficients C NP2 7γ and C NP2 8G are given as follows: where The Wilson coefficients with the suffix "uv" come from the effective Lagrangian in Eq. (53) whose origin is the violation of the CKM unitarity. The Wilson coefficients with the suffix "NC" are obtained from the effective Lagrangian in Eq. (54) by taking the limit r p → 0 and w p → 0. Here we neglect O(Z 2 NC ) terms. The Wilson coefficients C uv 8G and C NC 8G can be obtained from the effective Lagrangian corresponding to the b → sg diagrams.
In our numerical calculation, we use the NLO expression for the branching ratio Br[B → X s γ] given as [32]: where δ N P sl and δ N P rad are non-perturbative correction for the semi-leptonic and radiativeB meson decay rates, respectively. The quantity R quark at NLO is summarized in Ref. [32] as: The function g(z) with z = m 2 c,pole /m 2 b,pole corresponds to the phase space factor for the semileptonic decay. The function F (z) contains the NLO correction for the semi-leptonic decay and the difference between the pole mass and MS mass of the b-quark. The δ is the lower cut on the photon energy in the bremsstrahlung correction: The term A(δ) originates from the bremsstrahlung corrections and the virtual corrections [32]- [35], where the functions f ij (δ) can be found in Ref. [32]. The term |D| 2 in Eq. (101) is constituted by the NLO Wilson coefficient for O 7γ and the virtual corrections for b → sγ [32]- [34]. Here D is defined as, where C (1)ef f 7γ can be found in Ref. [32]. In our numerical calculation, we take µ b = m b , E γ > 1.6 GeV and neglect the O(α s ) correction to the new physics contributions. Therefore the Wilson coefficients C NP1 7γ,8G and C NP2 7γ,8G are only included in the first term in Eq. (104).
Violation of CKM Unitarity
The violation of CKM unitarity is shown in Eq. (40). For p = b, q = s, we obtain the following relation: This relation can be rewritten as follows: where we define The relation in Eq. (105) leads to a quadrangle in the complex plane as shown in Fig. 7.
Numerical Analyses
In the following numerical analyses, we obtain the constraints on FCNC couplings by using the current experimental data of rare B decays B s → µ + µ − andB → X s γ. We also take Table 1.
At first we analyze the branching ratio of B s → µ + µ − process by using the expression in Eq. (83). Note that the branching ratio depends on the new physics parameters r sb and cos θ sb . We equate Br[B s → µ + µ − ] VLQ in Eq. (83) with the experimental value, As the experimental value, we adopt the branching ratio measured by LHCb [37], In Fig. 8, we show the dependence of Br[B s → µ + µ − ] VLQ on the absolute value of the FCNC coupling Z sb NC . The different colors of the dots in the scattered plots represent the different ranges for the value of the new physics parameter |θ sb |. All the colored region satisfy the quadrangle constraint Eq. (106) with 0 ≤ γ s ≤ 2π. Note that the expression of the branching ratio and quadrangle constraint depend on θ sb through its cosine and the plotted regions do not depend on the sign of θ sb . The experimentally allowed range of the branching ratio in Eq. (109) is shown as the blue shaded region. The horizontal solid line corresponds to the central value of the experimental branching ratio in Eq. (109). In Fig. 8, as Z sb NC approaches zero, Br[B s → µ + µ − ] VLQ comes close to the SM prediction [27], As Z sb NC increases from zero to 3 × 10 −4 , Br[B s → µ + µ − ] VLQ decreases for |θ sb | < π/2, while it increases for π/2 < |θ sb | < π. As Z sb NC becomes larger, Br[B s → µ + µ − ] VLQ increases regardless of the range of the |θ sb | since the third term in Eq. (84) is dominant. The dependence V cb 0.04181 +0.00028 −0.00060 [3] on |θ sb | for the smaller Z sb NC can be also understood from the Eq. (84) since the coefficient of the term which is linear to r sb is proportional to cos θ sb .
Next we analyze the branching ratio ofB → X s γ process by using the expression in Eq. (100). Here we denote the branching ratio in the model with VLQ as Br[B → X s γ] VLQ . The new physics parameters r sb and θ sb are included in the Wilson coefficients in Eqs. (91) and (92). In order to obtain constraints on r sb and θ sb , we take account of the current average [38], of experimental data [39]- [45]. In Fig. 9, we show the dependence of Br[B → X s γ] VLQ on |Z sb NC |. The different colors of the dots in the scattered plots represent the different ranges for the value of the new physics parameter |θ sb |. All the colored region satisfy the quadrangle constraint Eq. (106) with 0 ≤ γ s ≤ 2π. The experimentally allowed range of the branching ratio in Eq. (111) is shown as the blue shaded region. The horizontal solid line corresponds to the central value of the experimental branching ratio in Eq. (111). In Fig. 9, as |Z NC | approaches zero, the value of Br[B → X s γ] VLQ comes close to that of the SM prediction at NNLO accuracy [46], We note that the number of the purple colored dots is much less than that of the red colored ones, since the quadrangle constraint for π/4 ≤ θ sb ≤ π/2 is tighter than that for 0 ≤ θ sb ≤ π/4. For the smaller Z sb NC , the filled regions with colored dots are the almost same as each other. Thus Br[B → X s γ] VLQ depends on |θ sb | weakly compared with Br[B s → µ + µ − ] VLQ .
In the left figure of Fig. 10, we show the region allowed by the experimental data for the parameter r sb and θ sb . The blue dots satisfy both the constraint from the Br[B s → µ + µ − ] Exp and the quadrangle constraint Eq. (106) with 0 ≤ γ s ≤ 2π. The green dots satisfy both the constraint from the Br[B → X s γ] Exp and the quadrangle constraint. The values of r sb and θ sb in the region where the blue and green region overlap each other satisfy all the three constraints. The blue region has the shape of a ring. The region inside the ring is excluded because r sb and θ sb in this region leads to the predictions of Br[B s → µ + µ − ] smaller than the experimental value. One finds that the stringent constraint on the parameters r sb and θ sb comes from the Br Using the definition of Z NC in Eq. (39), we obtain the constraint on the VLQ mass M 4 and the product of the Yukawa coupling |y s4 d y b4 * d |. This result is shown in the right figure of Fig. 10 where we use v = 246 GeV [36]. Since the constraint on r sb and θ sb from the Br[B s → µ + µ − ] Exp is stronger than that from the Br[B → X s γ] Exp (See Fig. 10 left.), we show the region with blue dots where (M 4 , |y s4 d y b4 * d |) satisfy the constraint from the Br[B s → µ + µ − ] Exp and the quadrangle constraint Eq. (106). One finds that the lower limit on the VLQ mass is around 5.5 TeV for y s4 d y b4 * d ∼ 1.
Finally, we show the violation of the CKM unitarity on the complex plane in Fig. 11. The definition of each side is the same as that of Fig. 7. In order to obtain Fig. 11, we choose r sb = 0.018 and |θ sb | = π/6 and use the central values of the CKM matrix elements in Table 1. The side for λ t bs is connected with the real axis at (1, 0). The left figure is the case of θ sb = π/6 while the right figure is that of θ sb = −π/6. One can see that the side for Z sb NC can be as large as that for λ u bs and the sign of θ sb affects the value of the angle β s in Fig. 7. Re Im Figure 11: The violation of the CKM unitarity on the complex plane. In order to obtain these figures, we choose (r sb , θ sb ) = (0.018, π/6) in the left figure and (r sb , θ sb ) = (0.018, −π/6) in the right figure. of the VLQ is much larger than the EW scale. We matched the effective theory with the full theory not only at tree level but also at one-loop level and obtained the effective operators which are related to the radiative transition of the quarks. These operators correspond to the contribution from the diagrams including the VLQ in the internal line. One can find that the coefficient of the photon dipole operator with the down-type quarks is consistent with the case of the full theory calculation given in Ref. [17]. The other contributions to the radiative transitions come from the violation of the CKM unitarity and the diagrams which include the FCNC couplings among the SM quarks. We obtained the effective Lagrangian for the b → sγ ( * ) process arising from these contributions.
From our numerical results, we obtained the constraints on the FCNC coupling Z sb NC and the new physics parameters (r sb , θ sb ) defined in Eq. (80). We found that the dependence of Br[B → X s γ] VLQ on θ sb is weaker than that of Br[B s → µ + µ − ] VLQ and the constraint on the model parameters r sb and θ sb from the Br[B s → µ + µ − ] is more stringent than that from the Br[B → X s γ]. One can discriminate the cases of different |θ sb | through the Br[B s → µ + µ − ] as shown in Fig. 8. When the |Z sb NC | is order of 10 −4 , the Br[B s → µ + µ − ] becomes small (large) for |θ sb | 0 (|θ sb | π) compared with that of the SM. In Fig. 11, we showed the violation of the CKM unitarity on the complex plane when we chose r sb = 0.018 and |θ sb | = π/6. The difference in the sign of the θ sb affects the angle β s , therefore we have to investigate the constraint from the observables related to the β s to further restrict the form of the violation of the CKM unitarity.
Although we focused on the case of b → s transitions, the effective Lagrangian obtained in this paper can be applied to the FCNC transition for the other combinations of the downtype quarks. The 4-Fermi operators in Eqs. (9)-(11) and the effective Lagrangian for the off-shell photon contribute to b → sl + l − processes includingB →K * l + l − .
Finally, we add a comment on the renormalization group (RG) effect. One can not neglect the effect when the VLQ mass is much heavier than the EW scale. When M 4 /M W ∼ 100, one may expect about 10% corrections to the Wilson coefficients. Moreover, the expressions of the FCNC coupling, CKM matrix elements and down-type quark masses will be modified. Including them, we will carry out the precise analysis elsewhere. | 8,978.2 | 2018-01-16T00:00:00.000 | [
"Physics"
] |
Linking Social and Vocal Brains: Could Social Segregation Prevent a Proper Development of a Central Auditory Area in a Female Songbird?
Direct social contact and social interaction affect speech development in human infants and are required in order to maintain perceptual abilities; however the processes involved are still poorly known. In the present study, we tested the hypothesis that social segregation during development would prevent the proper development of a central auditory area, using a “classical” animal model of vocal development, a songbird. Based on our knowledge of European starling, we raised young female starlings with peers and only adult male tutors. This ensured that female would show neither social bond with nor vocal copying from males. Electrophysiological recordings performed when these females were adult revealed perceptual abnormalities: they presented a larger auditory area, a lower proportion of specialized neurons and a larger proportion of generalist sites than wild-caught females, whereas these characteristics were similar to those observed in socially deprived (physically separated) females. These results confirmed and added to earlier results for males, suggesting that the degree of perceptual deficiency reflects the degree of social separation. To our knowledge, this report constitutes the first evidence that social segregation can, as much as physical separation, alter the development of a central auditory area.
Introduction
Over the last decade the importance of social influences on vocal development has become an evidence in a variety of species [1,2]. Recent studies reveal how social cues affect speech development in human infants [3], and also how direct social contacts and interactions are required for infants to maintain perceptual abilities to discriminate phonetic units [4]. Attention and motivation are key elements in learning to communicate: children involved in a social situation are more ''awake'' and attentive, and therefore more prone to react and memorize [5]. Thus, early awareness of infant is a good predictor of their later language skills [6]. Social interactions activate attentional processes, enabling the processing and integration of information [7], while the intersensory redundancy they provide facilitates attentional focusing on certain aspects of the sensory stimulation [8]. However, the processes involved in this link between ''language brain'' and ''social brain'' are still poorly known: the interface between language and social cognition remains a mystery [2].
Songbirds are good candidates for trying to unravel this mystery: like humans, they are sensitive to social influences for vocal learning and they are active in their choice of tutor [e.g . 9]. Again, the exact processes involved are not well known, but here also social stimulations may enhance attention and arousal, as well as motivation. According to Hultsch et al. [10], the positive effects of social exposure on song learning could come from perceptual mechanisms that make young birds more attentive to the tutors' vocalizations. Indeed, socially deprived birds appear to show hearing deficits [11]. Interestingly, visual stimuli may activate auditory central parts of a songbird's brain [12]. Moreover, selective attention is one of the processes that may alter hearing by changing the micromechanical properties of the cochlea [13]. This could explain that vocal copying, as well as perceptual abilities, are tuned on particular tutors [e.g . 4]. Social bonding appears essential in many social songbird species, as well as in humans, for vocal learning [14], and one wonders what consequences the lack of such a bond would have. Children that interact more with peers than with adults develop poorer language skills [15], and neglected children show poorer language abilities than normally developing children, but also than abused children [16]. The fact that autistic children, who are characterized by impairments of their social interactions, also present selective impairments in attention to vocal-speech sounds [17], and abnormal cortical voice processing [18], further emphasizes the link between social and perceptual development.
In this present pioneering study, we aim to improve our knowledge of this link by testing the hypothesis that social segregation prevents the proper development of a central auditory area. Our previous studies showed that neuronal preferences and general characteristics of development (proportion of auditory sites, response types) of the field L (which is a homologue of the primary auditory cortex of mammals) of male European starlings depend not only on early auditory experience during development [19], but also on social experience per se. Thus, young male starlings that could hear adult song, but were socially deprived, showed, when adult, deficits similar to those of auditory deprived animals: larger auditory area, poor selectivity, altered tonotopy [20], recalling findings for auditory deprived young rats [21]. Still more intriguing was the finding that young males raised in direct contact with adult males, although presenting a much better structured auditory area than the above-mentioned deprived animals, also showed consistent differences compared to wildcaught males, with a higher proportion of auditory sites and lower neuronal specialization. As these young males preferentially developed bonds with their peers, this suggested that social segregation from the adults, by lowering their selective attention towards their song, may have induced these abnormalities [22]. Social segregation has also been suggested to be responsible for limited recoveries in early deprived animals, when later they were placed with adults [23].
In order to test this hypothesis, we needed a situation where social segregation would be more clear-cut than in the previous study with young males. Therefore, we focused here on young females raised with male tutors. Female starlings are known to form strong same-sex social pairs, to prefer to sing near another female, and to learn song from same-sex tutors [22,24,25]. The aim was not to examine whether the effects would differ according to the tutor's sex, but to ensure that placing young females with male tutors would induce social segregation [26]; this has been confirmed by behavioural observations and song recordings [22]. Electrophysiological recordings performed on these females when they were adult revealed perceptual abnormalities that made these male-tutored females resemble more socially isolated birds than normal adult females. These findings agree with preliminary data for males, and constitute, to our knowledge, the first evidence that social segregation can, as much as physical separation [20], alter the development of a central auditory area.
Results
We investigated the effects of adult male tutoring on the development of auditory responses of six hand-raised female European starlings (MT) when they had become adult (2 years old). We compared these results to those obtained for four adult wild-caught females (WC), by the same electrophysiological procedure. The use of the same procedure for every bird, based on systematic regular recordings in the same sagittal plane (2761 neuronal sites tested; X +SE~212:38+9:69 sites=bird; see material & method), enabled us to compare the number of responsive sites. This revealed clear differences between groups of birds. Indeed, the proportions of auditory sites significantly differed between the two groups ( fig. 1A; MT~93:29%+1:31, WC~61:23%+0:64; Mann-Whitney, U = 0, p = 0.05); the MT females showed a much higher proportion of responding sites than the WC females. We compared these data to an additional group of three females raised in social deprivation (SD, in pairs with one young male or isolated; see material & method), but that had heard the aviary vocal interactions through loudspeakers [see 22 for details]. Interestingly, the proportion of responsive sites of the MT birds was similar to that of the SD females ( fig. 1A; SD~92:52+0:87). The MT females therefore appeared to be closer to deprived animals than to adult wild-caught birds. Note that their male peers were less affected as the proportions of their auditory sites differed significantly from that of SD animals (92%/ 98.5%), and male tutors (80%) [20]. These results clearly reflect the degree of social segregation and vocal copying, as young males, although staying mainly in same-age/same-sex groups, remained more in proximity and copied more their tutors than did the young females [22].
Neuronal specialization also differed between the two groups of females: a majority of neuronal sites in WC females responded to 1 to 4 stimuli, whereas most neuronal sites in the MT birds responded to all, or most stimuli, who again showed a pattern that was closer to that of SD animals ( fig. 1D).
The proportions of specialized neurons were estimated by counting the recording sites that responded to 100% of the stimuli. This method gives a good indication of the number of nonspecialized (or generalist) neurons in field L complex [19,20]. As the fact that some types of stimuli (individual-specific whistle themes) were not common to all subjects could bias this evaluation, we compared here the responses to the six test stimuli that were common to all subjects (class I whistles). This analysis confirmed the preceding results: more auditory sites responded to only one stimulus in WC females than in MT females (WC~7:03+1:39; Again, results for MT females were similar to those for SD animals (SD~62:93%+1:97). Note that, our results for MT males raised under the same conditions were intermediate: they presented a lower proportion of generalist neuronal sites than did SD animals (37%/46%), but a higher proportion than male tutors (2%) [20].
Finally, PSTHs greatly differed between the two categories of animals subjects ( fig. 2): as WC females showed a typical pattern of phasic selective responses to precise parameters of the stimuli, whereas MT animals showed a tonic, non-selective pattern. Again characteristics of MT females appeared close to those of SD animals.
Discussion
Young female starlings raised with only peers and adult male tutors neither established close social bonds with males, nor did they copy their songs, restricting song sharing to peers that were equally inexperienced birds [22]. When tested as adults, it appeared that these females showed abnormalities in neuronal responses to the playback of species-specific stimuli in the main central auditory area (Field L), compared to WC adult females.
Several features were affected: they had 1) a larger auditory area (larger proportion of responsive sites), 2) a lower proportion of specialized neurons (sites responding to only one stimulus) and 3) a larger proportion of generalist sites (sites responding to all stimuli), associated with tonic, non-selective responses. Comparison with available data on SD birds showed similar abnormalities, suggesting that social segregation from adults may induce the same effects on perceptual development as a physical separation. These results are consistent with previous data on male peers who were, however, ''intermediate'' in that they differed not only from WC adults but also from SD young males. This reflected an intermediate social situation where young males, although forming mostly a same-sex/same-age group, showed some proximity with the adult males and copied some of their songs [20,22].
These findings therefore strongly suggest that the degree of deficiency reflects the degree of social separation, be it physical or merely social segregation.
Overall, the observed abnormalities were similar to those described for other acoustically-deprived animals. Larger auditory areas have been observed in rats [21] and starlings [19] raised without proper auditory stimulation. Young male starlings deprived of auditory experience with adult song, also showed, when adult, a higher proportion of generalist, and lower proportion of specialized neuronal sites [19]. Interestingly, similar impairments were observed in birds that could hear adult song but had no contact with adults [20].
One could argue that the acoustic environment in the laboratory did not offer the variety of sounds that WC animals may experience in the field. However, first, the aviaries were placed in rooms with large windows allowing birds to hear sounds from outdoors (birds, dogs, cars etc., the usual sounds of a university campus) as well as from indoors, such as human voices, doors, other bird species, indicating that their acoustic environment was not totally impoverished. Second, while the acoustic environment could explain to some extent the differences observed between WC and MT females, it cannot explain the differences observed in males. Moreover, further experiments have shown that, under the same conditions, young males and females can develop normal song repertoires if they are placed in a dyadic situation with an adult, that is forced bonding, and do not if social bonding does not occur [26]. Finally, we have observed that young females do not learn better from playbacks of female songs than from playbacks of male songs, showing that mere auditory cues are very unlikely to be involved in sexual lines of learning [27]. Therefore, although acoustic conditions could explain some part of the differences observed between experimental and WC birds, it cannot fully explain it, which leaves room for the impact of social influence.
Central deficiencies in the auditory area clearly reflected differences in vocal copying according to social experience, both in the females that are described here and in the males that were raised with them. Thus, the fact that young males raised with an adult male did not copy much of the latter's song [20,22] suggested that social segregation may have altered selective attention towards the tutor. The present results for females, which are even less prone to copy from adult males than young males, further reinforce this hypothesis. Since social influences may be mediated by attentional processes [22,28,29], the processing and integration of sensory information may have been altered [5,7]. Selective attention has been shown to alter hearing by changing the micromechanical properties of the cochlea [13]. Moreover, Sturdy et al. [11] showed that zebra finches require social interactions with conspecifics to develop normal auditory perceptual abilities.
Finally, Humans who lack experience with a language during development are considered ''deaf'' to some non-native language characteristics [30], and this is confirmed by Kuhl et al.'s [4] recent findings that infants need direct social interactions to maintain discriminative abilities.
The aim of the present study was to investigate in more depth the effects of social segregation on neural development suggested by previous studies [20,23]. For that, we studied an extreme situation that we knew would not allow social bonding, even between birds in the same aviary, that is young females raised with male tutors [22,24]. This extreme situation yielded the expected results, and confirmed preliminary findings for males. As we do not yet have data for females raised with adult females, no comparisons can be made. However, we expect that this situation would yield more ''mixed'' results, like those obtained for young males [26].
This pioneering study shows that young birds, when socially segregated from adults, exhibit abnormalities in the development of their central auditory area, and this to the same extent as socially-deprived animals. This confirmed previous indications in this direction. Mere environmental acoustic conditions cannot explain the entire array of evidence that this study, added to earlier reports [20,23], has us enabled to present. The present results certainly add to the evidence that social and vocal brains are linked [2] and they shed new light on findings such as those of Gervais et al. [18] showing that socially-impaired autistic children present abnormal cortical voice processing. Indeed, the lack of social bonding due to the autistic syndrome might be responsible, through a lack of selective attention, for the perceptual impairments.
Further studies will be necessary to confirm our results that are, to our knowledge, the first ones to suggest an impact of ''social isolation'' on sensory development, and they have important general implications that go far beyond birdsong research.
Experimental animals
The experiment included two series of animals. (1) A ''core'' experiment with two groups of birds: one group of four wildcaught (WC) female starlings and one experimental group of six aviary-raised female starlings. The WC females were our ''controls'' as, in their wild environment; they had benefited from both female and male influences and were likely to have been able to learn their song from adult females [24]. The experimental females were raised in aviaries with peers (males and females) and only male adults [see 22]. The aviaries were in a room where all laboratory noises as well as external sounds (human voices, street traffic …) could be heard. (2) For a larger comparison we used additional data from three socially-deprived birds: one female raised in a pair with a male and two females raised in isolation in sound-proof chambers [22]. As no differences were evidenced between these two groups (e.g. mean proportions of responsive sites: raised in isolation = 92.1061.32 and pair raised = 93.37), data from these three birds were pooled (socially deprived birds: SD). Note that, these birds could hear, through loudspeakers, the vocalizations emitted in the aviaries.
Data for song production of the experimental birds have been described in Poirier et al. [22] and revealed that the male-tutored (MT) females copied mainly songs of same-sex peers and very little songs of adult males.
Stimuli
When the animals were 2 years old, neuronal responses to 22 species-specific stimuli were electrophysiologically tested, while the birds were awake and restrained, (fig. 3). The song repertoire of each female was recorded by placing them in individual sound-proof chambers and automatic song recordings were made until the complete repertoire of each bird was recorded [22]. The auditory stimuli were a variety of species-specific songs chosen for their behavioural relevance. Hausberger [31] described three classes of starling song. Class-I whistles are simple, very loud, and mostly unitary songs. They correspond to four whistle types, namely the inflection (IT), the harmonic (HT), the simple (ST), and the rhythmic (RT) themes, that are found in the repertoires of all male starlings in most populations. These whistles are the basis of songmatching interactions, and they are clearly categorized and recognized by the birds, despite local variations [32]. Only one of them (HT) is occasionally produced by females. Class-II whistles are loud and simple structures composed of one or several notes. They are mostly individual-specific within a colony, but they can be shared by close social partners, both males and females [24,33]. Finally, class-III songs (also called warbling) are sung in long, complex, and quiet sequences composed of three parts containing motifs that are repeated one to several times with increasing tempo [34,35]. Most of the motif types are individual-specific, but the second and third parts of a sequence include clicks and high-pitched trills that are found in all male, but not in female sequences.
Given that we were mostly interested in the songs' social implication, we decided to put more emphasis on whistles, which are more specifically involved in social exchanges [see, e.g., 31,32,33] and not on warbling song, which is involved in mate choice and breeding [34,35]. Starlings tend to sing successions of whistles separated by 1-8 seconds. Such sequences can include successions of up to 200 whistles, with repetitions of each whistle type in the repertoire (Fig. 4). According to the social context, these successions of whistles may be followed, or not, by a sequence of continuous warbling [32,36].
Class-I and class-II whistles were chosen for their social relevance: we used each type of class I universal whistles that are usually used in male-male interactions, and unfamiliar, familiar and bird's own exemplars of class II individual-specific whistles [31]. This covered the whole range of starlings' whistle repertoires [37]. Familiar whistles were whistles that had been heard by the birds (adult songs) but that were not present in their own repertoire. The stimuli were broadcast with intervals of 300 ms. This time interval was sufficient to avoid adaptation to stimuli. This method has been used for several decades and no adaptation has been reported in the Field L using this kind of stimulus set [38]. The stimulus set was presented in an anechoic, sound-attenuating chamber through a loudspeaker placed 20 cm in front of the bird's head. The maximum sound pressure at the birds' ears was 60 dB SPL measured by a sound calibrator (LEA S.S.T.4S). The stimulus set was repeated 10 times at each recording site.
Multi-unit recordings
All neuronal recordings were made during the non-breeding season (autum and winter) in order to avoid possible seasonal influences, as known in other songbirds. Multiunit recordings were chosen here to characterize neuronal preferences in the field L. This recording method is very stable and allowed us to record activity from a large number of neuronal sites (X +SE~212:38+9:69 sites=bird). Whereas such recordings do not enable precise evaluation of single cell selectivities, they do give a gross idea of the local neuronal ''preferences'' [39,40,41,42].
Before the neurophysiological experiments, a stainless steel well was implanted stereotaxically on the bird's skull under halothane anaesthesia (0.4 l/min of carbogene -95% O2 -5% CO2saturated in halothane -2bromo-2chloro-1,1,1trifluoroéthaneand 0.6 l/min of carbogene). After implantation, the birds were allowed to rest for 3 days, during which they were kept in cages with conspecifics. During the experiments, the well was used for fixation of the head and as the indifferent electrode.
The electrodes were made by Frederick Haer & Co. (Bowdoinham, USA) and consisted of a tungsten wire insulated by epoxylite, with a fine tip (angle 10-15u). The range of the electrode impedance was 2-4 MV. An Amiga 4000 computer was used to record action potentials. A home-made analogue/digital card was used to digitize the recordings (22 kHz, 8 bits), and action potentials were counted with a programmed window discriminator.
The implant was located precisely with reference to the bifurcation of the sagittal sinus: 2.5 mm rostral and 1 mm in the left hemisphere. These values were the coordinates of the centre of the recording plane that was parallel to the sagittal plane. The recording planes were at precisely the same locations for all birds. Recordings were performed at 30 to 40 sites along the path of one electrode penetration. One recording session usually lasted about 3 hours. During recording sessions, the birds were awake and kept in a jacket in order to limit their movements. The recorded plane covered a large part of field L centred on the L2 sub-area described in wild starlings [40]. Penetrations within one recording plane were 200 mm apart. Recordings started, for each penetration, 600 mm below the brain surface, at a site that gave no auditory response, and continued until 4000 mm below the brain surface, where auditory responses were no longer detectable. The recording plane was considered completed when no response was obtained in both outermost penetrations. Twelve penetrations were necessary to complete a recording plane for most animals. The dimensions of the recording plane were 2.4 mm caudo-rostral and 3.6 mm dorso-ventral (8.64 mm 2 area). After the last recording session, four recording sites were marked by injecting alcian blue to provide orientation points to check the location of the electrode tracks in the forebrain [e.g. 19,39,40].
Data analysis
Experimental data were recorded with a temporal resolution of 0.1 ms. Peri-stimulus time histograms (PSTHs) were calculated, using a temporal resolution of 2 ms, for all the recording sites and all the stimuli. Spontaneous activity was determined from the recording of activity during 100 ms before the beginning of each auditory stimulus. To determine whether there was an activation or an inhibition, the evoked activity was compared to the spontaneous activity using a Student-Fisher t test. We decided that there was activation when p was below 0.01. Since we were trying to determine whether there was a response or not, we were confident in our results using a 0.01 level. However, given the low number of spikes during spontaneous activity (2-3.5 spikes/s), the contrast between spontaneous activity and inhibition was difficult to confirm statistically. We therefore decided to use a p-value of 0.05 for inhibition, which is still a good level [43].
Different measures of responses were made: -Proportions of responsive sites that differ according to early experience: these proportions are larger in inexperienced animals [19,21]. -The degree of specialization of the neurons, which was difficult to characterize because of the difficulty to evaluate selectivity properly [44]. We chose an indirect evaluation of neuronal specialization: -1-the proportion of neuronal sites that responded to only one stimulus (specialized sites) and -2-the proportion of sites that responded to all stimuli (generalist sites). This measure proved to be useful in a previous study on developmental plasticity [19]. | 5,563.6 | 2008-05-21T00:00:00.000 | [
"Biology",
"Psychology"
] |
Scheduling for a Single-Terminal Intermodal System Recovery with Poisson Arrivals
This paper studies the recovery of an intermodal freight system from a major disruption and develops a model for optimising vehicle schedules under disrupted conditions. The proposed model optimises the recovery of a single-terminal system with relatively short feeder routes on which vehicle roundtrip times are exponentially distributed and arrivals at the terminal are Poisson-distributed. Mathematical expectations are used to formulate the deterministic equivalent for the scheduling problem and a genetic algorithm is applied to optimise the schedules on main routes. The model developed in this paper can be applied to single-terminal transfer systems with any combination of transportation modes using discrete vehicles, as long as the feeder arrivals do not deviate much from the assumed Poisson distributions. Since its computational time is relatively insensitive to the numbers of vehicles on feeder routes, this model can be used to efficiently optimise intermodal systems with numerous vehicle arrivals.
INTRODUCTION
Efficient transfer coordination in an intermodal transportation network can reduce the dwell times of cargos at the transfer terminals where various routes interconnect, thereby also increasing the vehicle utilisation rates, reducing the need for direct routes to connect many origins and destinations, reducing storage requirements at transfer terminals, and improving total system efficiency.In this paper we analyse an intermodal freight system with a single transfer hub and develop a model that optimises the schedule of vehicles on main routes while assuming Poisson arrivals on feeder routes.This model determines the departure times on main routes that minimize the supplier's overall system cost, including storage, vehicle, in-terminal operation and late delivery penalty costs.
The optimisation problem addressed in this paper is related to some classical problems of operations research, such as machine scheduling, lot sizing, and supply chains.Somewhat related machine scheduling problems can be found in [1] to [3].[4] to [7] address the scheduling problem in transfer systems, but under different conditions from those considered here.For example, [4] and [5] analyse different transfer coordination policies and determine the thresholds in the intermodal systems with complex multistop routes and lower variance in travel durations, both typical for normal operations.In this paper, we analyse the case with high variances in travel times, which are typical of disrupted operations and model the arrivals as a Poisson process.[6] and [7] deal with scheduling takeoff times, a problem that will be studied in this paper.Both papers optimise departures on a single airline route and under different demand assumptions from those considered here (i.e.[6] assumes uniform demand, whereas [7] adopts time dependent demand).This paper is based on the same framework as [8] and develops a model which, unlike [8], is suitable for intermodal systems with numerous arrivals of vehicles on feeder routes.In [8], Marković and Schonfeld develop a scheduling model which assumes generally distributed vehicle roundtrip durations and vehicles operating on multiple feeder routes.Low computational efficiency of the stochastic program used in [8] enabled only the optimisation of schedules in systems with relatively few arrivals on feeder routes.In this paper we provide a computationally less demanding model by assuming exponentially distributed vehicle roundtrips and fixed fleet size on feeder routes.These assumptions allow us to model the arrivals as a stationary Poisson process and derive the expectations needed to formulate a scheduling problem that is optimised much more efficiently than the stochastic program in [8].Thus, the model developed here can efficiently optimise large intermodal systems with numerous arrivals on feeder routes.
In this paper we analyse the recovery of a system from a major disruption during which large amounts of freight have accumulated along the feeder routes, which are assumed here to be served by trucks.To dissipate the backlogs we let the trucks on feeder routes operate nonstop and deliver cargo to the terminal where the freight is transferred to main routes, which are assumed here to be aircraft routes.Thus our transfer terminal represents an airport hub.We use pre-determined fleet sizes on feeder routes and seek to optimise the number of departures and specific departure schedules on main (air) routes.We consider one-directional flow going from origins along the feeders' routes towards destinations at the main routes, as might be expected in emergency evacuations or recoveries from major disruptions.
In Section 1 we describe the operations within the observed intermodal system and explain the tradeoff between different types of costs.The anticipated types of costs, which are included in the objective function, are formulated in Section 2. Section 3 explains the constraints, while Section 4 provides the model formulation that is further tested on numerical examples designed in Section 5. Finally we draw conclusions and suggest possible extensions of this work.
PROBLEM
We consider an intermodal system with relatively short truck routes that feed cargo to major airplane routes (Fig. 1), which has suffered a major disruption.In order to reduce the backlogs accumulated along the feeder routes while the system is inoperative, each truck operates nonstop and fully loaded between an origin and the hub, without pausing between such round trips while backlogs persist.The trucks collect freight from multiple origins along their feeder routes and deliver it at the airport hub.When the takeoff on route is scheduled at time , the airplane is filled to capacity with freight, as long as freight backlogs persist.If the airplane cannot carry all the freight waiting at the airport, the remaining freight has to wait for the next flight with available capacity.On the other hand, if prior to the takeoff, there is little freight in the terminal's storage connecting to route l, the airplane's capacity is underused and an additional flight may be needed later.For simplicity, we assume that all trucks are similar and all operate at equal maximum capacity.Moreover, we assume that airplanes have similar capacities.Finally, we assume that the expected amount of cargo waiting for connections can never exceed a preset multiple (e.g.0.8) of the terminal's storage capacity.
Our objective is to find the optimal (i) number of takeoffs on each air route and (ii) corresponding schedule, for the given probabilistic durations of roundtrips on truck routes.In computing total cost we consider the storage cost, in-terminal operation cost, penalty for late delivery, and airline service cost.A tradeoff exists between the aforementioned types of costs.The earlier one schedules the takeoff, the lower are the storage and penalty costs associated with the freight that successfully connects.However, the earlier the takeoff is scheduled, the greater are the chances that an airplane's capacity will be underused due to insufficient level of stock.Operating less than full airplanes may require running additional flights, thereby increasing the airline service cost.
COSTS
In this section we introduce the notation used and explain how various types of costs are computed.We begin with the assumptions that allow us to model the arrivals on feeder routes as a Poisson process.We then compute the arrival intensities, which are further used in the development of storage, in-terminal operation, penalty, and airline cost.
Suppose that a single truck operates on a relatively short feeder route i whose starting and end point is the terminal where the truckload connects to the airplane route l.Let's assume that the duration of the truck's roundtrip is exponentially distributed with a mean denoted as 1/ λ i l .Moreover, it is reasonable to assume that the observed transportation process has the following three properties: 1.The probability that a truck will accomplish more than one roundtrip within an infinitesimal time interval is negligible.2. The duration of a roundtrip does not depend on the duration of the previously completed roundtrip.3. The probability that a roundtrip will end within the time interval t depends on the interval's length, rather than on the time period in which t was observed.Having adopted the above assumptions, we can model the truck arrivals as a Poisson process with the mean arrival rate λ i l according to [9] and [10].If we assign to feeder route i more than one truck, the arrival rate on route i is given in Eq. ( 1), in which n i represents the number of trucks assigned to feeder route i.
Furthermore, if we define with I l the set of feeder routes connecting to main route l, the arrival rate of truckloads connecting to route l is: (2) If we denote the j th takeoff time on route l as t j l , the expected number of truckloads connecting to route l and arriving to terminal between two consecutive flights is:
Storage Cost
To compute the storage cost, we need to keep track of the inventory level.Moreover, since multiple feeder routes connect to multiple main routes, we need to know the stock for each main route.Therefore we define the variable S j l , which defines the inventory level of freight connecting to main route l, after the j th takeoff.We also define A j l representing the amount of freight transported in the j th flight on route l.Considering the inflow and outflow of freight into the terminal storage, Eq. ( 4) has to hold for all flights on all routes.Please note that S t l l 0 0 , and A l 0 all equal 0.Moreover, we assume that all the freight arriving at the terminal before the last scheduled takeoff has to be flown.Thus we also set S n l l to equal 0.
Moreover, since we do not know in advance if there will be enough freight in the terminal's storage to fill the airplane, we specify in Eq. ( 5) that the airplane will be loaded with all the connecting freight found in terminal that can fit within the airplane's capacity, denoted A c .
Based on the previous derivations, in Eq. ( 6) we can compute the storage cost between two consecutive flights for freight connecting to route l.Please note that C DT denotes the storage cost per truckload-hour.
We can further compute the total storage cost for freight connecting to route l by summing Eq. ( 6) over all the flights in J l .
SC S S t t C
Finally, we can compute the total storage cost by summing Eq. ( 7) over the set L, which denotes main routes.
In-terminal Operation Cost
Here we analyse the loading and unloading cost due to the cargo transfer from trucks to airplanes.We assume that the in-terminal operation cost is lower when a truck arrives slightly before the takeoff and takes its truckload directly to the airplane, instead of unloading it in the terminal storage.Therefore, let's define parameter d so that a truck arriving within the ( , ) t d t j l j l − interval takes its truckload directly to the airplane.Now we can compute the expected number of truckloads that will be loaded directly on the airplane: Here we assume that d is smaller than the interval between two consecutive flights on the same route.Thus, the expected number of truckloads being loaded directly onto the airplane depends on the number of takeoffs rather than on their schedule.
If we denote C td to be the unit in-terminal operation cost for the case of direct transfer to the airplane, C ti to be the unit cost for the case of indirect transfer to the airplane, and G to be the total number of truckloads, then the total in-terminal operation cost is:
Penalty Cost
A penalty is imposed for late delivery, reflecting the lower value of freight that is delivered later.Here we assume that the time of the takeoff is relevant Scheduling for a Single-Terminal Intermodal System Recovery with Poisson Arrivals for computing the penalty cost.We define a penalty function f p as the piecewise linear function of takeoff time starting from the beginning of the observed time period (the moment the system starts recovering from a disruption), as shown in Fig. 2.
Fig. 2. Penalty function f p
Now we can compute the penalty cost by summing the penalty for all the flights on all the air routes, as shown in Eq. (11).Please note that we again use A j l as defined in Eq. ( 5), which denotes the number of truckloads carried on the j th takeoff on route l.
Airline Cost
The last type of cost considered is the airline service cost, which covers the use of both airplanes and airport facilities.It is proportional to the number of the airplane roundtrips (takeoffs).We denote the number of takeoffs on route l as n l .Moreover, we denote as C A l the cost of an airplane roundtrip on route l.Finally, the total airline service cost is:
CONSTRAINTS
In this section we analyse several constraints needed in order for the mathematical model to fairly represent the real world.The first constraint that we consider is the time window constraint for takeoffs.Utilisation of airport facilities is often restricted to certain time slots.Thus each takeoff must be scheduled within a prespecified time interval.Therefore, the time window constraint is: Since limited airport capacity might require a minimum time interval between any two flights, we introduce the following constraint.
The last constraint we consider is the terminal's storage capacity.We assume that the expected amount of freight at the terminal should never exceed the multiple m s of the storage capacity S c .Since we previously explained how the expected inventory level for freight connecting to route l at takeoff time t j l is computed, we must now ensure that the total expected inventory never exceeds the storage capacity m s S c .Thus we define parameter p t and control the total inventory level at time p t .In order to do so we first need to find the inventory level of freight connecting to route l at time p t .We introduce s l which denotes the takeoff time on route l prior to p t and define k l which equals the takeoff index j.
Now we can compute the expected inventory of freight connecting to route l as: Finally we can define the storage capacity constraint by summing Eq. ( 17) over the set of main routes and setting the sum below the storage capacity S c multiplied by m s (a safety factor).Please note that the constraint in Eq. (18) should hold for any real value of time parameter p t .
MODEL
In the previous section, the types of costs and constraints considered were explained.Now we can present the mathematical formulation of the model in Eqs. ( 19) to (30), which represents a nonlinear program.Here we provide a compact formulation of the objective function using Eqs.(8), and (10) to (12).In Eqs.(20) to (30), we provide the constraints and other previously derived relationships.
MinTC SC IC PC AC
subject to: The total cost function is a function of the number of takeoffs and takeoff times, as explained in the problem statement.The nonlinear model shown in Eqs. ( 19) to (30) optimises the schedule while taking into consideration the capacity of airplanes, airport and terminal storage, and time windows for takeoffs.In the following section we apply a genetic algorithm (GA) to optimise the schedule in two case studies.Interested readers may refer to [11] and [12] for more information about GA's.
APPLICATION
In order to test our model, we designed two case studies.In the first case, the schedule in an intermodal system with a single main route is optimised.In this simplified optimisation problem we examine the anticipated tradeoff in types of costs through sensitivity analysis.In the second case we analyse a complex system with multiple main routes and time windows.
Case Study with a Single Air Route
We analyse a system with ten feeder truck routes connecting to a single airplane main route.In Table 1 we provide the average roundtrip time on each feeder route, as well as the number of vehicles operating on each truck route.We seek to optimise the number of takeoffs and corresponding schedule assuming that all the freight arriving at the terminal before the last takeoff has to be transported.For this case, we assume the input data from Table 2.The optimisation results for 4 to 9 takeoffs are presented in Table 3.We present an optimised schedule for six different numbers of takeoffs and corresponding costs in dollars.Please note that within "other costs" we consider storage, penalty, loading and unloading costs.Moreover, by marginal savings in other costs we consider savings in storage, penalty, loading and unloading costs due to introducing an additional roundtrip flight.The results presented in Table 3 show that the minimum total cost occurs in the case with five takeoffs.Therefore we can conclude that at the cost of 7000 $/flight, one more flight than necessary to satisfy the demand should be introduced.Moreover, we can observe that storage, penalty and loading/ unloading cost decrease with the increase in the number of takeoffs.This outcome was expected and it confirmed the tradeoff between types of cost that was explained in the problem statement.We also note that the marginal savings in storage, penalty and loading/ unloading cost decreases with the number of aircraft roundtrips, which is another anticipated outcome.
Based on the values for storage, penalty and loading/unloading cost we can explore how different flight costs affect the optimised number of takeoffs and thereby the schedule.In Fig. 3, we plot total cost for the case of 4, 5, 6, 7, 8 and 9 roundtrips vs. aircraft roundtrip cost.Fig. 3 also shows five threshold values for airplane roundtrip cost which determine the optimal number of takeoffs.Those values are 1357, 1871, 2771, 4277 and 7509 dollars, respectively.Clearly, for a relatively low cost per plane roundtrip, the total system cost is optimised by scheduling more takeoffs than necessary to satisfy the demand.As the airline cost increases, the optimal number of takeoffs decreases until it eventually drops to the minimum number needed to satisfy the demand.
Case Study with Multiple Airline Routes
Here we consider the case of multiple feeder routes connecting to three air routes of similar lengths.Since we consider the case with three dozen feeder routes, we do not present the expected roundtrip duration and the number of vehicles on each route, as we did in the previous example.Instead, we provide the computed arrival rates of vehicles connecting to three air routes.Moreover, we assume the same values from the previous numerical example, but this time we include time windows into our analysis.The aforementioned data are provided in Table 4. Please note again that, for each route, all the freight arriving at the terminal by the time of the last flight has to be loaded into airplanes and transported to its destination.The optimization results for the case study with three main routes are given in Table 5. NA stands for the cases when no feasible solution is found either due to overloading of the terminal storage or due to not delivering all the freight that has arrived by the time of the last takeoff on each air route.
From Table 5 we conclude that at 9000 $/flight, the total cost is optimised by scheduling 4 flights on each main route (case 5), which also equals the minimum number of flights needed to provide a feasible solution.Finally, in Table 6 we provide the optimised schedule for all 8 feasible combinations of flights from Table 5.Since the arrivals were modeled as a Poisson process, the computational efficiency of the model is fairly insensitive to the number of feeder routes or operating vehicles.Therefore the proposed scheduling model can be successfully applied to optimise the performance of busy intermodal systems with numerous vehicle arrivals.
Several assumptions built into this paper may be relaxed in the future in order to make the model more general.The current model could be improved to provide good robust solutions even for the case when some of the three properties of the Poisson process listed in Section 1 do not hold.Moreover, the current analysis assumes fixed numbers of vehicles operating on the feeder routes.Future work may consider variable fleet sizes on feeder routes and thereby nonstationary arrival intensities.
NOTATION
The following symbols are used in this paper: λ i l parameter of the exponentially distributed duration of truck roundtrip on feeder route i connecting to main route l i index of feeder route l set of feeder routes I l set of feeder routes connecting to main route l; clearly I I l ∈ j index of on route l J l set of takeoffs on route l l index of main route k l index of the takeoff on route l prior to time p t L set of main routes t j time of the j th takeoff on route l n l number of takeoffs on main route l n i number of trucks on feeder route i r i arrival rate on feeder route i A c capacity of an airplane S c capacity of terminal's storage m s storage multiple C DT in-terminal dwell cost C A l flight cost on route l SC l storage cost associated with freight connecting to main route l SC storage cost d the amount of time such that a truck arriving within the (t j -d,t j ) interval will take its truckload directly to the airplane S j l inventory level of freight connecting to main route l after the j th takeoff b d the expected number of truckloads that will be transferred directly from trucks to airplanes C ti cost of in-terminal operations C td cost of in-terminal operations when the truck takes its truckload directly to the airplane IC overall cost for in-terminal operations f p ( t j l ) penalty function per truckload loaded into airplane at moment t j l PC overall penalty cost AC overall airline cost TC total cost t min minimum time interval between any two takeoffs a j l the lower bound for the j th takeoff on route l b j l the upper bound for the j th takeoff on route l A j l amount of freight carried in the j th takeoff on route l p t control parameter used to check the inventory level R + set of nonnegative real numbers Z + set of nonnegative integers
Table 1 .
Vehicle size and roundtrip duration
Table 2 .
Input data
Table 3 .
Optimised Schedules and Costs
Table 4 .
Input data | 5,182.8 | 2013-09-15T00:00:00.000 | [
"Engineering",
"Business"
] |
Immunogenicity and Durability of Antibody Responses to Homologous and Heterologous Vaccinations with BNT162b2 and ChAdOx1 Vaccines for COVID-19
During the COVID-19 pandemic, vaccines were developed based on various platform technologies and were approved for emergency use. However, the comparative analysis of immunogenicity and durability of vaccine-induced antibody responses depending on vaccine platforms or vaccination regimens has not been thoroughly examined for mRNA- or viral vector-based vaccines. In this study, we assessed spike-binding IgG levels and neutralizing capacity in 66 vaccinated individuals prime-boost immunized either by homologous (BNT162b2-BNT162b2 or ChAdOx1-ChAdOx1) or heterologous (ChAdOx1-BNT162b2) vaccination for six months after the first vaccination. Despite the discrepancy in intervals for the prime-boost vaccination regimen of different COVID-19 vaccines, we found stronger induction and relatively rapid waning of antibody responses by homologous vaccination of the mRNA vaccine, while weaker boost effect and stable maintenance of humoral immune responses were observed in the viral vector vaccine group over 6 months. Heterologous vaccination with ChAdOx1 and BNT162b2 resulted in an effective boost effect with the highest remaining antibody responses at six months post-primary vaccination.
Introduction
COVID-19, a disease caused by SARS-CoV-2, has been a major health threat that is spreading rapidly worldwide. SARS-CoV-2 infection was first reported in Wuhan, China, in November 2019, and the WHO declared it a pandemic on 11 March 2020 [1]. Vaccine development progressed rapidly, and an mRNA vaccine BNT162b2 (BNT) and an adenoviral vector vaccine ChAdOx1 (ChAd) were the first vaccines in each vaccine platform approved for emergency use at the end of the same year [2][3][4]. At that time, the viral vector vaccine was a recently added technology, and the mRNA vaccine had not been licensed. Both COVID-19 vaccines were reported to have strong protective efficacies (BNT: 95%; ChAd: 70.4%) with potent humoral immune responses to the original SARS-CoV-2 strain and decent levels of T-cell responses [3,4]. It has been demonstrated that protective efficacy is closely associated with the levels of spike-binding and neutralizing antibodies and suggested as potential immune correlates of protection [5,6]. A vast number of studies have characterized COVID-19 vaccine-induced immune responses in human vaccinees and revealed stronger induction of spike-binding and neutralizing antibodies by the BNT vaccine than the ChAd vaccine. In addition, enhanced protective immunity by heterologous ChAd-BNT vaccinations was reported [7][8][9][10][11][12][13][14][15][16][17][18][19]. However, there is a paucity of knowledge describing the immunogenicity and long-term maintenance of vaccine-induced immunity comparing homologous and heterologous vaccinations. In South Korea, ChAd and BNT vaccines were introduced and utilized for mass vaccination programs during the early period. The dominant interval for the primeboost vaccination regimen for the ChAd vaccine was 10-12 weeks, while that for the BNT vaccine was three weeks. Although most of the two-shot immunizations were carried out in a homologous manner, the BNT vaccine was also used for secondary vaccination of some ChAd vaccinees. In this study, we followed vaccinated individuals in South Korea for up to six months after the first vaccination to examine the immunogenicity and longevity of antibody responses to homologous and heterologous vaccinations of BNT and ChAd vaccines.
Study Design
Vaccination programs against COVID-19 are being conducted nationwide in South Korea. Our study included subjects uninfected with SARS-CoV-2 and vaccinated with either the BNT or ChAd vaccine (Table 1). Subsequently, whole blood specimens were collected from those vaccinated with the BNT vaccine 3-4 weeks after the first vaccination and three weeks, three months, and 5-6 months after the second vaccination. From the ChAdvaccinated subjects, blood samples were gathered 3-4 weeks after the first vaccination. For the second vaccination, either the ChAd vaccine or the BNT vaccine was administered. Blood samples were collected at four weeks and three months after the second vaccination.
Separation of Plasma Specimen
Whole blood specimens were collected, and plasma and peripheral blood mononuclear cells (PBMCs) were isolated using cell preparation tubes (CPT) vacutainer (BD Biosciences). The CPT Vacutainer tubes were centrifuged at 1800× g for 20 min at 4 • C, resulting in the separation of blood into plasma and PBMCs. After the centrifugation, plasma corresponding to the supernatant was carefully harvested.
Enzyme-Linked Immunosorbent Assay (ELISA)
Briefly, 96-well high-binding EIA/RIA plates (Costar) were coated with 50 µL/well of 1 ug/mL spike or NP proteins in phosphate-buffered saline (PBS) overnight. The coated plates were washed twice with 200 µL/well of PBS containing Tween 20 (PBS-T), blocked with blocking buffer (1% blotting-grade blocker (BIO-RAD) in PBS-T) for 30 min at 37 • C. Plasma samples were serially diluted 4-folds in blocking buffer, added to the plates, and incubated at room temperature (RT) for 2 h. The plates were washed three times with PBS-T, and HRP-conjugated goat anti-human IgG (SouthernBiotech) in blocking buffer was added, followed by incubation for 1.5 h at RT. After washing three times with PBS-T, the wells were treated with TMB substrate solution (OptEIA reagent set, BD). Finally, the reaction was stopped by the addition of a stop solution (0.5 M hydrochloric acid). Optical density at 450 nm was measured using a microplate reader (Victor 3, PerkinElmer).
Microneutralization (MN) Assay
Vero cells were seeded at 15,000 cells per well in Opti-PRO™ SFM (Gibco) with 1X antibiotic-antimycotic solution and 4 mM L-glutamine (Gibco) in a 96-well clear plate (Greiner Bio-One) 24 h prior to the experiment. Plasma samples were diluted 3-fold in quadruplicates and mixed with an equal volume of 100 tissue culture infective dose 50% (100 TCID50) SARS-CoV-2 virus. After pre-incubation for 30 min at 37 • C, plasma/virus mixtures were transferred to Vero cells. After 96 h, the cytopathic effect of SARS-CoV-2 on the infected cells was measured via bright-field imaging. The neutralizing antibody (nAb) titer was calculated as the reciprocal of the highest test plasma dilution factor at which 50% neutralization was attained.
Statistical Analyses
Error bars indicate the standard error of the mean (SEM). p values were analyzed using the Mann-Whitney U test for two-group data and the Kruskal-Wallis test for three-group data. The analysis was conducted using Prism 8 (GraphPad Prism Software, San Diego, CA, USA).
Study Design and Participants
Overall, 66 participants were enrolled in this study and were vaccinated twice with COVID-19 vaccines, including BNT162b (BNT, mRNA vaccine) and ChAdOx1 (ChAd, viralvector vaccine), in three different ways: (1) BNT162b × BNT162b (BNT-BNT), (2) ChAdOx1 × ChAdOx1 (ChAd-ChAd), and (3) ChAdOx1 × BNT162b (ChAd-BNT) ( Figure 1A and Table 1). In total, 46 participants received homologous two-time vaccination with BNT162b at a 3-week interval. The remaining 22 participants received a ChAdOx1 immunization as the primary vaccine. Approximately three months later, among the 22 ChAdOx1 vaccines, nine participants received the second shot with the ChAdOx1 vaccine, while thirteen participants received the BNT162b vaccination, representing the heterologous prime-boost vaccination. The majority (60-88.9%) of the participants in all three groups were women, but there was no statistical difference in sex distribution among the groups. The median age of the participants in each group was 41, 53, and 40 years for the BNT-BNT, ChAd-ChAd, and ChAd-BNT groups, respectively. All participants were healthy and had no Figure 1B). To analyze antibody responses, blood specimens were collected approximately at 3-4 weeks post-primary post-vaccination in all three groups ( Figure 1A). In the BNT-BNT group, longitudinal blood specimens were collected at approximately three weeks, 14 weeks, and 5-6 months after the second vaccination. In the ChAd-ChAd and ChAd-BNT groups, additional blood specimens were collected approximately three weeks and three months after the second vaccination. The last time point for blood collection was approximately 5.5-6.5 months after the first vaccination.
Induction of Antibody Responses after the First and Second Vaccinations
Antibody responses, including spike-binding and neutralizing antibodies (nAbs), have been suggested as key protective indicators of SARS-CoV-2 infection [5,6]. First, we measured the spike-specific immunoglobulin G (IgG) levels in plasma specimens 3-4 weeks after the first vaccination. As a primary vaccine-induced immune response, BNT vaccination resulted in a significantly higher spike-specific IgG level than ChAd vaccination ( Figure 1C). In addition, the neutralizing capacity was slightly stronger in the BNT group than in the ChAd group after the primary vaccination ( Figure 1D). After the second vaccination, the BNT-BNT group triggered a potent boost effect and generated a higher level of spike-binding antibodies than the ChAd-ChAd group ( Figure 1E). Interestingly, the ChAd-BNT group displayed IgG levels comparable to those of the BNT-BNT-vaccinated people. Levels of SARS-CoV-2 nAbs showed a similar trend to that of the binding antibodies ( Figure 1F). Overall, BNT-BNT homologous vaccination and ChAd-BNT heterologous vaccination resulted in stronger antibody responses than ChAd-ChAd homologous vaccination.
Longitudinal Analysis of Spike-Binding Antibody and Neutralizing Antibody Responses
Next, to estimate the durability of antibody responses, we tracked the kinetic changes in spike-binding antibodies from the three vaccination groups until approximately six months after the first vaccination. After the peak of the spike-specific antibody response at 3 weeks post-boost, the antibody level started to decay over time in the BNT-BNT group (Figure 2A). In the ChAd-ChAd group, the second vaccination did not substantially enhance the binding antibody response, although the interval between the two vaccinations was much longer (three months) than that in the BNT-BNT group (three weeks). Intriguingly, in the ChAd-ChAd-vaccinated group, spike-specific antibody levels remained stable for the next three months ( Figure 2B). The ChAd-BNT group displayed a significantly augmented secondary antibody response, followed by a moderate decrease three months after the second shot ( Figure 2C). As a result, at approximately six months after the first vaccination (the last time point), in contrast to the peak antibody responses, the heterologous vaccination maintained slightly higher levels of antibodies than the BNT-BNT groups and similar antibody levels in the BNT-BNT and ChAd-ChAd groups ( Figure 2D).
Based on these kinetic changes, we analyzed the fold differences in spike-specific antibody responses at different time points (Figure 2E-G). Despite the overall fold-increase in spike-binding antibodies after the second vaccination in all three vaccinated groups, the heterologous vaccination (ChAd-BNT) group displayed the strongest boost effect ( Figure 2E). Since the interval between the first and second shots vary among different vaccination groups, we tried to analyze the durability of binding antibody responses in this aspect. As shown in Figure 2F, the fold changes for 3 months after the second vaccination were calculated, suggesting a more rapid decrease in IgG levels in the BNT-BNT group but a slightly better persistence in the ChAd-ChAd group. We also assessed the durability of spike-binding antibodies by comparing the antibody levels at three weeks and six months after the first vaccination ( Figure 2G). Remarkably, heterologous vaccination (ChAd-BNT) resulted in a 4.4-fold increase in the level of spike-specific antibodies. The BNT-BNT group showed slightly decreased spike-binding antibodies, while the ChAd-ChAd group remained about the same. heterologous vaccination maintained slightly higher levels of antibodies than the BNT-BNT groups and similar antibody levels in the BNT-BNT and ChAd-ChAd groups ( Figure 2D). Based on these kinetic changes, we analyzed the fold differences in spike-specifi tibody responses at different time points (Figure 2E-G). Despite the overall fold-incr in spike-binding antibodies after the second vaccination in all three vaccinated gro In addition to the spike-binding antibodies, we examined the kinetic changes in nAbs. In all three groups, the second vaccination induced an increase in nAbs ( Figure 3A-C), resulting in a similar trend for spike-binding antibodies, although there was no statistical difference ( Figure 3D). The magnitude of nAb increase after the second vaccination was the highest in the ChAd-BNT group (82.9-fold), followed by the BNT-BNT group (35-fold) ( Figure 3E). After the secondary peak IgG responses, both the BNT-BNT and ChAd-BNT groups resulted in the sharp waning of nAb titers, whereas the ChAd-ChAd group could stably maintain nAbs ( Figure 3F). Over the course of the experiment, heterologous vaccination with ChAd-BNT efficiently triggered the accumulation of nAbs against SARS-CoV-2, and either the BNT-BNT or ChAd-ChAd groups displayed a moderate increase in nAbs ( Figure 3G). Taken together, these data demonstrate the unique dynamics of vaccine-induced humoral immune responses depending on different vaccine platforms and vaccination regimens, particularly indicating the potent immunogenicity and long-lasting antibody response of the heterologous ChAd-BNT vaccination.
were calculated, suggesting a more rapid decrease in IgG levels in the BNT-BNT gro but a slightly better persistence in the ChAd-ChAd group. We also assessed the durabil of spike-binding antibodies by comparing the antibody levels at three weeks and months after the first vaccination ( Figure 2G). Remarkably, heterologous vaccinati (ChAd-BNT) resulted in a 4.4-fold increase in the level of spike-specific antibodies. T BNT-BNT group showed slightly decreased spike-binding antibodies, while the ChA ChAd group remained about the same.
In addition to the spike-binding antibodies, we examined the kinetic changes nAbs. In all three groups, the second vaccination induced an increase in nAbs (Figure 3 C), resulting in a similar trend for spike-binding antibodies, although there was no sta tical difference ( Figure 3D). The magnitude of nAb increase after the second vaccinati was the highest in the ChAd-BNT group (82.9-fold), followed by the BNT-BNT group ( fold) ( Figure 3E). After the secondary peak IgG responses, both the BNT-BNT and ChA BNT groups resulted in the sharp waning of nAb titers, whereas the ChAd-ChAd gro could stably maintain nAbs ( Figure 3F). Over the course of the experiment, heterologo vaccination with ChAd-BNT efficiently triggered the accumulation of nAbs against SAR CoV-2, and either the BNT-BNT or ChAd-ChAd groups displayed a moderate increase nAbs ( Figure 3G). Taken together, these data demonstrate the unique dynamics of v cine-induced humoral immune responses depending on different vaccine platforms a vaccination regimens, particularly indicating the potent immunogenicity and long-lasti antibody response of the heterologous ChAd-BNT vaccination.
Discussion
Recently, numerous studies have investigated humoral and cellular immune sponses triggered by various COVID-19 vaccines and their protective capacity agains verse variants [20][21][22][23]. Evidence on the durability of vaccine-induced immunity dep ing on vaccine platform technologies and administration regimens is essential becau can impact public health policies, such as vaccine choice and timing of booster vacc tion. In this study, we addressed the induction and longevity of antibody response homologous or heterologous vaccinations of the mRNA vaccine BNT162b2 and the v vector vaccine ChAdOx1. Here, despite relatively small cohort sizes for the ChAd-C and ChAd-BNT groups (Table 1), we revealed that the BNT-BNT group showed st formation and a relatively rapid decrease in humoral immune response, whereas ChAd-ChAd vaccination generated moderate antibody responses with stable ma nance. Strikingly, the BNT boost after ChAd priming elicited a very potent boost ef resulting in the highest levels of binding and neutralizing antibodies 6 months after first vaccination (Figures 2C and 3C).
First, the different working mechanisms of mRNA vaccines and viral-vector vacc may influence the modalities of vaccine-induced immune responses. Several previou ports have described superior induction of humoral immune responses by mRNA vac platforms, including BNT162b2 and mRNA-1273, in contrast to the relatively stable d bility of the viral-vector vaccine [7,9,[24][25][26]. We observed a clear difference in the kin changes in the antibody responses between the ChAd-ChAd and ChAd-BNT groups ter ChAd vaccination, the BNT boost effectively increased the levels of binding and tralizing antibodies despite a mild increase in the ChAd boost vaccination (Figures and 3B,C). It is possible that the generation of viral vector-specific antibodies upon mary ChAd vaccination inhibited efficient delivery, resulting in attenuated subseq vaccine-induced immunity.
Moreover, in addition to the discrepancy in vaccine platforms, the interval betw the first and second shots may be another factor affecting the longevity of the imm response. According to previous studies, sufficient time is required to establish g
Discussion
Recently, numerous studies have investigated humoral and cellular immune responses triggered by various COVID-19 vaccines and their protective capacity against diverse variants [20][21][22][23]. Evidence on the durability of vaccine-induced immunity depending on vaccine platform technologies and administration regimens is essential because it can impact public health policies, such as vaccine choice and timing of booster vaccination. In this study, we addressed the induction and longevity of antibody responses by homologous or heterologous vaccinations of the mRNA vaccine BNT162b2 and the viral-vector vaccine ChAdOx1. Here, despite relatively small cohort sizes for the ChAd-ChAd and ChAd-BNT groups (Table 1), we revealed that the BNT-BNT group showed strong formation and a relatively rapid decrease in humoral immune response, whereas the ChAd-ChAd vaccination generated moderate antibody responses with stable maintenance. Strikingly, the BNT boost after ChAd priming elicited a very potent boost effect, resulting in the highest levels of binding and neutralizing antibodies 6 months after the first vaccination ( Figures 2C and 3C).
First, the different working mechanisms of mRNA vaccines and viral-vector vaccines may influence the modalities of vaccine-induced immune responses. Several previous reports have described superior induction of humoral immune responses by mRNA vaccine platforms, including BNT162b2 and mRNA-1273, in contrast to the relatively stable durability of the viral-vector vaccine [7,9,[24][25][26]. We observed a clear difference in the kinetic changes in the antibody responses between the ChAd-ChAd and ChAd-BNT groups. After ChAd vaccination, the BNT boost effectively increased the levels of binding and neutralizing antibodies despite a mild increase in the ChAd boost vaccination ( Figure 2B,C and Figure 3B,C). It is possible that the generation of viral vector-specific antibodies upon primary ChAd vaccination inhibited efficient delivery, resulting in attenuated subsequent vaccine-induced immunity.
Moreover, in addition to the discrepancy in vaccine platforms, the interval between the first and second shots may be another factor affecting the longevity of the immune response. According to previous studies, sufficient time is required to establish good-quality immunological memory [27][28][29]. Our data from the BNT-BNT group may imply that a Vaccines 2022, 10, 1864 9 of 11 longer than 3-week interval is optimal for steady maintenance of the antibody response ( Figures 2G and 3G).
In our study, the neutralizing capacity of the variant strains of SARS-CoV-2 was not measured because the focus of our study was to investigate the immunogenicity and longevity of the antibody response to the original vaccine antigen. It is noteworthy that the durability of humoral response is primarily determined by memory B cells and long-lived plasma cells, and it was shown that the spike-specific memory B cells persisted for a longer time, although the levels of binding antibodies declined over 8 months [30]. We did not assess vaccine-induced T-cell responses that could support the germinal center reaction or T-dependent antibody response. Examining the immunogenicity and durability of T-cell responses depending on different vaccine platforms will provide a beneficial contribution to the vaccine field.
In conclusion, we identified distinct immunogenicity and durability of vaccine-induced immune responses, depending on various vaccine development technologies and vaccination strategies. Homologous vaccination with BNT and ChAd vaccines displayed strong induction and the rapid decay of antibody response and moderate induction and slow waning of antibodies, respectively. Heterologous vaccination with ChAd and BNT triggered a potent boosted antibody response, and the response remained at the highest level at approximately 6 months after the primary vaccination. The data obtained in this study will be advantageous in determining an effective vaccine strategy for public health. | 4,405.8 | 2022-11-01T00:00:00.000 | [
"Biology"
] |
Protein Kinase D1 Mediates Anchorage-dependent and -independent Growth of Tumor Cells via the Zinc Finger Transcription Factor Snail1*
Background: The protein kinase D (PKD) family is involved in the control of cell motility and proliferation. Results: PKD1 controls growth of cancer cells through phosphorylation of Snail1 at Ser-11. Conclusion: Only PKD1, but not PKD2, mediates isoform-specific control of pancreatic cancer cell proliferation through Snail1. Significance: We demonstrate for the first time isoform-specific control of pancreatic cancer growth by a single phosphorylation of a substrate. We here identify protein kinase D1 (PKD1) as a major regulator of anchorage-dependent and -independent growth of cancer cells controlled via the transcription factor Snail1. Using FRET, we demonstrate that PKD1, but not PKD2, efficiently interacts with Snail1 in nuclei. PKD1 phosphorylates Snail1 at Ser-11. There was no change in the nucleocytoplasmic distribution of Snail1 using wild type Snail1 and Ser-11 phosphosite mutants in different tumor cells. Regardless of its phosphorylation status or following co-expression of constitutively active PKD, Snail1 was predominantly localized to cell nuclei. We also identify a novel mechanism of PKD1-mediated regulation of Snail1 transcriptional activity in tumor cells. The interaction of the co-repressors histone deacetylases 1 and 2 as well as lysyl oxidase-like protein 3 with Snail1 was impaired when Snail1 was not phosphorylated at Ser-11, which led to reduced Snail1-associated histone deacetylase activity. Additionally, lysyl oxidase-like protein 3 expression was up-regulated by ectopic PKD1 expression, implying a synergistic regulation of Snail1-driven transcription. Ectopic expression of PKD1 also up-regulated proliferation markers such as Cyclin D1 and Ajuba. Accordingly, Snail1 and its phosphorylation at Ser-11 were required and sufficient to control PKD1-mediated anchorage-independent growth and anchorage-dependent proliferation of different tumor cells. In conclusion, our data show that PKD1 is crucial to support growth of tumor cells via Snail1.
We here identify protein kinase D1 (PKD1) as a major regulator of anchorage-dependent and -independent growth of cancer cells controlled via the transcription factor Snail1. Using FRET, we demonstrate that PKD1, but not PKD2, efficiently interacts with Snail1 in nuclei. PKD1 phosphorylates Snail1 at Ser-11. There was no change in the nucleocytoplasmic distribution of Snail1 using wild type Snail1 and Ser-11 phosphosite mutants in different tumor cells. Regardless of its phosphorylation status or following co-expression of constitutively active PKD, Snail1 was predominantly localized to cell nuclei. We also identify a novel mechanism of PKD1-mediated regulation of Snail1 transcriptional activity in tumor cells. The interaction of the co-repressors histone deacetylases 1 and 2 as well as lysyl oxidase-like protein 3 with Snail1 was impaired when Snail1 was not phosphorylated at Ser-11, which led to reduced Snail1-associated histone deacetylase activity. Additionally, lysyl oxidase-like protein 3 expression was up-regulated by ectopic PKD1 expression, implying a synergistic regulation of Snail1-driven transcription. Ectopic expression of PKD1 also up-regulated proliferation markers such as Cyclin D1 and Ajuba. Accordingly, Snail1 and its phosphorylation at Ser-11 were required and sufficient to control PKD1-mediated anchorage-independent growth and anchorage-dependent proliferation of different tumor cells. In conclusion, our data show that PKD1 is crucial to support growth of tumor cells via Snail1.
The protein kinase D (PKD) 3 family of serine/threonine kinases consists of three members: PKD1 (PKC), PKD2, and PKD3. They share similar structural features and often phosphorylate the same substrates (1)(2)(3)(4)(5)(6)(7). The protein kinase D family has been implicated in the regulation of proliferation of different cells including pancreatic cancer cells (6 -11). We have previously identified protein kinase D as a major regulator of cancer cell motility and invasion (2)(3)(4)(5). However, it is unclear whether these functions are regulated by all PKD isoforms in a similar fashion and via the same PKD targets or substrates. Therefore, we investigated how PKD1 and PKD2, two PKD isoforms that mediate vital functions in pancreatic tumor growth and angiogenesis, are involved in the regulation of pancreatic cancer cell growth (12)(13)(14). We initiated a bioinformatics screening approach using Scansite (15) to identify putative PKD phosphorylation consensus motifs in potentially relevant PKD substrates and identified (in accordance with Du et al. (16)) Snail1 as a putative PKD substrate. Snail1 is an important zinc finger transcription factor controlling the epithelial-mesenchymal transition and tumor growth (17,18). Snail1 transcriptional activity can be mediated by regulation of protein stability via lysyl oxidase-like proteins (LOXLs) (19,20). LOXL isoforms 2 and 3 interact with Snail1 to modify critical lysine residues and thereby stabilize the protein (19). Snail1 repressor activity is also modulated by phosphorylation of 6 residues via glycogen synthase kinase 3, inducing nuclear export and -Trcp-controlled ubiquitin-dependent degradation (20,21). Snail1 transcriptional repression is mediated by recruitment of a Sin3Ahistone deacetylase 1 and 2 (HDAC1-HDAC2) complex. This interaction is critical for Snail1 repressor function and depen-dent on the N-terminal SNAG domain of Snail1 (22), which is adjacent to the PKD phosphorylation consensus in the protein.
Thus, the aim of this study was to identify how phosphorylation of Snail1 by PKD regulates Snail1 activity, tumor cell growth, and invasive features and to determine whether Snail1 phosphorylation by PKDs is isoform-specific.
EXPERIMENTAL PROCEDURES
Cell Culture-Panc89 (pancreatic ductal adenocarcinoma), Panc1 (pancreatic ductal adenocarcinoma), HEK293T, and HeLa cells were maintained in RPMI 1640 medium supplemented with 10% FCS and penicillin/streptomycin. Panc1 cells were transfected using Turbofect (Fermentas), and siRNAs were transfected using Oligofectamine (Invitrogen). Experiments in HeLa cells were performed using HeLa Monster reagent (Mirus). Panc1, HEK293T, and HeLa cells were acquired from ATCC. Stable Panc89 cells used in this study were described previously (4,5). For production of lentiviruses, 6 ϫ 10 6 HEK293T cells were transfected using Lipofectamine 2000 (Invitrogen). Virus supernatants were harvested after 48 h and used for transduction of stable Panc89 cell lines. Cells were subsequently subjected to puromycin selection to generate semistable cell lines used in assays.
Total Cell Lysates and Co-immunoprecipitation-Total cell lysates and co-immunoprecipitations were performed as described previously (3,5,24). In brief, total cell lysates were either prepared by solubilizing cells in radioimmune precipitation assay buffer (50 mM Tris, pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Nonidet P-40, 0.25% deoxycholate, 0.1% SDS plus complete protease and PhosStop inhibitors (Roche Applied Science)) or 2% SDS lysis buffer (10 mM Hepes, 150 mM NaCl, 1 mM EDTA, pH 6.8 plus inhibitors). Lysates were clarified by centrifugation at 13,000 ϫ g for 10 min. For immunoprecipitation, equal amounts of proteins were incubated with specific antibodies for 1.5 h at 4°C. Immune complexes were collected with protein G-Sepharose (GE Healthcare) for 30 min at 4°C and washed three times with lysis buffer (20 mM Tris, pH 7.4, 5 mg MgCl 2 , 150 mM NaCl, 1% Triton X-100). Precipitated proteins were released by boiling in sample buffer and subjected to SDS-PAGE. The proteins were blotted onto nitrocellulose membranes (Pall Corp., Germany). After blocking with 2% BSA in TBS with Tween 20, blots were probed with specific antibodies. Proteins were visualized by HRP-coupled secondary antibodies using ECL (Thermo Fisher). Quantitative analysis of Western blots was done by measuring integrated band density using NIH ImageJ. Values shown represent -fold change in respect to control.
qPCR-Quantitative real time PCRs were performed in a Bio-Rad iQ5 cycler with SYBR Green. Total RNA was isolated using an RNeasy minikit (Qiagen). We used 400 ng of total RNA for cDNA synthesis. Quantitative real time PCR analysis was performed in three replicas and in at least three independent experiments using qPCR primers for GAPDH (control), LOXL1-3, and Cyclin D2 (Qiagen). Results were calculated using the ⌬⌬Ct method normalized to GAPDH and vector control cells.
Three-dimensional Basement Membrane Extract (BME) Cell Culture-Three-dimensional BME culture was performed by seeding 10,000 cells of stable Panc89 cell lines (4,5) in BME (growth factor-reduced, phenol red-free; Cultrex, R&D Systems, Trevigen). Tumor cell clusters were documented after 16 days at 10ϫ magnification (see Fig. 8A) or 32 days (see Fig. 8G) at 8ϫ magnification using a Keyance microscope. Diameters of tumor clusters in images were quantified in perpendicular directions for each cluster using spacial calibration of images (NIH ImageJ). For statistical analysis, conditions were compared using frequency distribution histograms. Statistical significance was calculated using two-tailed unpaired Student's t test.
Immunofluorescence Confocal Microscopy and Acceptor Photobleach Fluorescence Resonance Energy Transfer (FRET)-HeLa cells were transfected with HeLa Monster and seeded at a density of 150,000 cells/well on glass coverslips. After adhesion overnight, cells were fixed with 4% formaldehyde at room temperature for 20 min, washed, quenched with 0.1 M glycine, and then permeabilized with 0.1% Triton X-100. Samples were blocked and stained in PBS supplemented with 5% FCS, 0.05% Tween 20. Primary and secondary Alexa Fluor dye antibodies (Invitrogen) were incubated for 2 h, respectively. Samples were mounted after extensive washing in Fluoromount-G (Southern Biotechnology) and analyzed by a confocal laser scanning microscope (TCS SP5, Leica) equipped with respective 63ϫ Plan Apo oil immersion objectives. Images were acquired in sequential scan mode, and processing was done using NIH ImageJ. Scale bars represent 10 m. Acceptor photobleach FRET experiments were performed in transiently transfected HeLa cells processed as stated above. FRET measurements were performed by acquiring pre-and postbleach images of donor and acceptor using the Leica acceptor photobleach FRET macro. Thresholded percent FRET values were depicted using a seven-color look-up table. Quantitative FRET analysis was performed by calculating mean FRET efficiency and S.E. for n ϭ 18 cells and two independent conditions (PKD1 versus PKD2). Statistical significance (****, p Ͻ 0.0001) was calculated using two-tailed unpaired Student's t test.
Soft Agar Assays-Anchorage-independent growth was measured using soft agar colony formation assays. Stable Panc89 cells expressing the indicated constructs were seeded at 10,000 cells/well in 6-well plates in 0.5% soft agar (Bacto Agar, BD Biosciences) with 0.5% agar bottom layers in three replicate wells per condition and in at least three independent experiments. Colonies were documented at 10ϫ magnification using a Keyance microscope after 13 days (see Fig. 6B) or 10 days, respectively (see Fig. 7, A and B). For transiently transfected Panc1 cells, 50,000 cells/well in 6-well plates were seeded and documented after 6 days (see Fig. 6C). Results were calculated by quantifying the average number of colonies per visual field at 10ϫ or 4ϫ magnification, whereas for transient expression in Panc1 cells, the entire well was counted. Statistical analysis was performed using one-way ANOVA with Bonferroni multiple comparison post-testing or Student's unpaired t testing.
Cell Proliferation Assays-Cell proliferation assays were performed with transiently transfected HeLa cells. After 24 h, 5000 cells were seeded in 100 l of standard growth medium in triplicate replicas per condition in 96-well culture plates for time points T0, T24, T48. After adhesion overnight, T0 cells were fixed and stained with crystal violet (0.5% in H 2 O, 20% (v/v) methanol) for 20 min at room temperature. After extensive washing, plates were dried, and additional plates were processed after 24 and 48 h in the same manner. To quantify cell density, crystal violet was dissolved in 100 l of methanol/well, and adsorption was measured at 550 nm using a Tecan1000 plate reader. Doubling time was calculated using linear regression (Prism software). Cell densities in graphs are shown for mean A 550 values in triplicate replicas ϮS.E.
HDAC Activity Assays-HDAC activity assays were performed using a fluorometric kit (Cayman Chemical Co.). 3 ϫ 10 6 HeLa cells were seeded in 10-cm dishes with two dishes per condition. Cells were lysed after 48 h according to the manufacturer's instructions. Assays were performed in black 96-well plates in triplicate replicas per condition. To measure HDAC activity, 10 l of crude nuclear extract were used after normalization of protein content by a BCA kit. Deacetylation of a specific HDAC substrate was measured at 455 nm (excitation, 360 nm) using a Tecan M1000 reader. Assays were further normalized for GFP transgene expression in crude nuclear extracts (Snail1-GFP and Snail1S11A-GFP) by measuring GFP fluorescence at 535 nm (excitation, 475 nm). Statistical analysis was performed using one-way ANOVA with Bonferroni multiple comparison post-testing.
RESULTS
Following a bioinformatics screen, we identified (in accordance with Ref. 16) Snail1 as a putative PKD substrate and mapped the respective phosphorylation site to Ser-11.
Mapping of PKD1 Phosphorylation Sites in Snail1- Fig. 1A depicts a structural overview of Snail1 with the putative PKD phosphorylation site located at Ser-11 directly adjacent to its SNAG domain (amino acids 1-9). The potential phosphorylation site LVRKPS* matches the published PKD phosphorylation consensus sequence LXRXXS* and partially matches the PKD phosphosubstrate antibody recognition sequence (pMotif; LXR(Q/K/E/M)(M/L/K/E/Q/A)S*) (25,26). Using anti-pMotif antibody, we investigated Snail1 in vivo phosphorylation by PKD1 (Fig. 1B). Active PKD1 enhanced phosphorylation of Snail1, whereas Snail1 phosphorylation was barely detectable in cells expressing catalytically inactive PKD1KD. In addition, phosphorylation of Snail1 was absent when Ser-11 was replaced by Ala (S11A) even in the presence of active PKD1. Thus, in accordance with data published by Du et al. (16), Ser-11 is a PKD phosphorylation site in vivo, and it is the only PKD phosphorylation site in Snail1 (Fig. 1B). Next, we wanted SEPTEMBER 21, 2012 • VOLUME 287 • NUMBER 39 to assess the upstream regulation of Snail1 by PKD isoforms 1 and 2. To determine whether both isoforms would interact with Snail1 in intact cells, we performed co-localization and FRET studies.
Snail1 Mediates PKD1-induced Growth of Cancer Cells
Only PKD1 Interacts Efficiently with Snail1 in the Nuclei of HeLa Cells-For co-localization and FRET studies, we used transiently transfected HeLa cells ectopically expressing Snail1-FLAG together with PKD1-GFP or PKD2-GFP, respectively. Both PKD1 and -2 were localized to nuclei of HeLa cells and co-localized with FLAG-tagged Snail1 (Fig. 2, A and B). To further characterize this co-localization and to determine a potential interaction, we performed acceptor photobleach FRET studies. Fig. 2A displays a representative FRET experiment for PKD1-GFP and Snail1-FLAG. Panels AЈ and BЈ depict donor pre-and postbleach states, whereas panels DЈ and EЈ show acceptor pre-and postbleach images, respectively. The relative increase in donor fluorescence intensity is marked by arrowheads in postbleach images (panel BЈ). Percent FRET values indicating interaction of the two proteins are shown in panel FЈ depicted by a seven-color look-up table ( Fig. 2A, panels AЈ-FЈ). Similar experiments were performed for PKD2-GFP and Snail1-FLAG (Fig. 2B, panels AЈ-FЈ). Active PKD2 is known to phosphorylate nuclear substrates (27). However, interaction of wild type PKD2 and Snail1 was barely detectable. Table 2.) These data indicate that PKD1 preferably interacts with Snail1 and suggests that interaction between PKDs and Snail1 is isoform-specific. We further verified these findings Table 2. Statistical significance (****, p Ͻ 0.0001) was calculated using a two-tailed unpaired Student's t test. D, endogenous Snail1 and PKD1 interact. Anti-PKD1 and nonspecific IgGs were used for immunoprecipitation (IP) from Panc89 vector cells. Immunoprecipitations were subsequently probed for the presence of endogenous Snail1 using specific antibodies. Error bars in graphs represent S.E.
Du et al. (16) reported that phosphorylation of Snail1 at Ser-11 by PKDs regulates its nuclear export by interaction with 14-3-3 proteins in epithelial cell lines including C4-2 tumor cells. However, the authors conceded that tumor cells may have different mechanisms for regulating Snail1 transcriptional activity. We investigated how subcellular localization of Snail1 was altered when Ser-11 was phosphorylated by PKD1 in two epithelial cancer cell lines, Panc1 human pancreatic cancer cells and HeLa cervical cancer cells.
Subcellular Localization of Snail1, Snail1S11A, and Snail1S11E Is Not Changed-To first investigate how Snail1 phosphorylation would impact its subcellular localization, we performed localization studies with Snail1 and the S11A and S11E phosphosite mutants in HeLa and Panc1 cells, respectively. There was no detectable change in subcellular localization using the phosphosite mutants compared with Snail1 wild type (WT) in HeLa cells (Fig. 3A) and Panc1 cells (data not shown). We quantified subcellular distribution in three experiments using HeLa cells and found that Snail1 predominantly localized in nuclei independently of its phosphorylation status in more than 99% of at least 1000 cells quantified per condition (Fig. 3A, panels AЈ-OЈ). We also did not observe any change in the subcellular localization of wild type Snail1 upon co-expression with constitutively active PKD1 in both HeLa (Fig. 3B) and Panc1 cells (data not shown). Similar data were obtained with endogenous Snail1 in both cell lines expressing active PKD1 (supplemental Fig. 1A). In addition, there was no change in the subcellular localization of predominantly cytoplasmic endoge-
Snail1 Mediates PKD1-induced Growth of Cancer Cells
nous Snail1 in non-transformed, immortalized HEK293T cells upon expression of active PKD1 or kinase-inactive PKD1KD (supplemental Fig. 1B). According to Du et al. (16), Snail1 should have exhibited nuclear localization in cells expressing PKD1KD in this setting. Thus, in the cell lines examined in this study, 14-3-3 binding to a consensus surrounding Ser-11 does not seem to be the relevant mechanism for Snail1 subcellular localization. This prompted us to investigate further molecular mechanisms to explain the function of Snail1 phosphorylation by PKD1.
Regulation of Snail1-mediated Transcriptional Activity by Lysyl Oxidase-like Family Members 2 and 3-In addition to HDAC1 and -2 (22), LOXL2 and -3 are known Snail1 interaction partners, enhancing Snail1 protein stability and Snail1-dependent regulation of marker genes (19). To investigate their role in the regulation of Snail1 transcriptional activity downstream of PKD1, we initially screened a panel of pancreatic cancer cell lines including Panc89 cells stably expressing GFP vector or PKD1-GFP (4, 5) as well as HeLa cells for the presence of Snail1, PKD1, and the LOXL3 isoform (Fig. 4A). Snail1 and LOXL3 proteins were present in HeLa, Panc1, MiaPaca, Panc89, and the stable Panc89 cell lines. PKD1 was expressed at different levels in all cell lines. Snail1 was strongly expressed in HeLa, Panc1, and both stable Panc89 cell lines, prompting us to use these cells for further analyses. Using qPCR, we additionally tested which LOXL isoforms were present in these stable Panc89 GFP vector or PKD1-GFP cells and whether a further upstream regulation by PKD1 may be involved. Both LOXL2 and -3 isoforms were expressed in Panc89 cells. To our surprise, the expression of LOXL3, but not of LOXL2, was significantly up-regulated by 5.5 Ϯ 0.36-fold in cells expressing PKD1 (Fig. 4, B and C), suggesting a PKD1-dependent synergistic regulation of Snail1 activity via LOXL3. Next, we investigated how phosphorylation at Ser-11 would impact co-regulation by HDACs. Thus, we initially performed co-localization studies of WT Snail1 and Snail1-S11A with the published co-regulator HDAC1 (22) in HeLa cells. Both Snail1-FLAG and Snail1S11A-FLAG co-localized with the endogenous co-repressor HDAC1 in the nuclei (Fig. 4D). We were further able to demonstrate interaction of Snail1-FLAG with HDAC1 in the nuclei by acceptor photobleach FRET (data not shown).
Binding of HDAC1, HDAC2, and LOXL3 Is Impaired by Snail1S11A, Reducing HDAC Activity-To study the molecular impact on HDAC binding following phosphorylation of Ser-11, we performed co-immunoprecipitation studies with phosphosite mutants. The Snail co-repressors HDAC1 and -2 interact with Snail1 in transcriptional complexes to regulate the expression of target genes (22). LOXL2 and -3 isoforms also act as co-regulators, modifying Snail1-mediated transcriptional regulation by enhancing its stability. The interaction of LOXL2 with Snail1 has been shown to be dependent on the N-terminal part of Snail1, which contains the SNAG domain (amino acids 1-9) adjacent to the Ser-11 phosphorylation site, and this part is also essential for interaction with HDAC transcriptional corepressors (19,28). Thus, we performed co-immunoprecipitation experiments in HeLa cells following co-expression of FLAG-tagged HDACs with Snail1 phosphosite mutants as well as with endogenous HDAC1 and -2 (22). Strikingly, the interaction of both HDAC1 (data not shown) and -2 with Snail1 was decreased upon expression of the Snail1S11A mutant but not upon expression of SnailS11E (Fig. 4E). Fig. 4F depicts the result of three independent co-immunoprecipitation experiments for co-expressed HDAC2. HDAC2 binding was significantly reduced with the S11A mutant and almost returned to the wild type level with S11E. HDAC1 demonstrated the same overall pattern of regulation (data not shown). We also investigated the interaction of endogenous HDACs with wild type Snail1 as well as its S11A and S11E mutant proteins. Fig. 4, G and H, show that binding of endogenous HDAC1 and HDAC2 to Snail1S11A was reduced as compared with wild type Snail1 from 1 to 0.53 times integrated band density for HDAC1 and to 0.58 times for HDAC2, whereas it was increased for the S11E mutant to 1.29 times for HDAC1 and 1.24 times for HDAC2. Additional coimmunoprecipitation experiments with endogenous HDAC1 and -2 also using other tags may be found in supplemental Fig. 2, A-D. Thus, phosphorylation of Snail1 Ser-11 by PKD1 is likely to be required for the stable interaction with its co-repressors (Fig. 4, E-H). In accordance with these data, LOXL3 interaction with Snail1S11A was also decreased as observed in coimmunoprecipitation experiments. Integrated band densities were reduced from 1 for WT to 0.3 times for the Snail1S11A mutant (supplemental Fig. 2E). We next assessed how the phosphorylation-dependent interaction of Snail1 with its co-regulators modulates HDAC transcriptional regulatory activity.
Snail1-dependent HDAC Activity and Regulation of Proliferation Markers-We performed HDAC activity assays measuring WT Snail1-as well as Snail1S11A-associated histone deacetylation to verify results of co-immunoprecipitation FIGURE 4. A, Snail1, LOXL3, and PKD1 are expressed in a subset of pancreatic cancer cell lines, HeLa cells, and stable Panc89 cells expressing GFP vector as well as PKD1-GFP. 200 g of total cell lysates were probed with specific antibodies. B, expression and upstream regulation of the Snail1 co-regulator lysyl oxidaselike proteins 2 and 3 in stable Panc89 cell lines. LOXL3, but not LOXL2, is up-regulated by ectopic PKD1. The graph displays -fold change in regulation relative to respective vector controls. qPCR for LOXL2 and LOXL3 was performed on RNA isolated from stable Panc89 cells expressing GFP and PKD1-GFP. Four independent experiments were quantified in triplicate replicas. Results were normalized to GAPDH and calculated according to the ⌬⌬Ct method. Statistical significance was calculated using one-way ANOVA with Dunnett's multiple comparison post-testing (***, p Ͻ 0.05). C, LOXL3 expression is up-regulated in stable PKD1-GFP Panc89 cells. 250 g of total cell lysates were probed for LOXL3 using specific antibodies. D, regulation of Snail1 activity by phosphorylation at Ser-11. Co-localization of Snail1-FLAG (panels AЈ-CЈ) and Snail1S11A-FLAG (panels DЈ and EЈ) with their endogenous co-repressor HDAC1 in HeLa nuclei is shown. Images depict single confocal sections. The scale bar represents 10 m. E, mutation Snail1S11A impairs interaction of Snail1 with co-expressed FLAG-HDAC2, whereas binding is reconstituted with Snail1S11E. Proteins were probed with respective specific antibodies in Western blots. F, statistical analysis of three independent co-precipitation experiments in E. -Fold change in HDAC2 co-precipitation with Snail1 and mutants was calculated from integrated band densities of Western blots. Significance was calculated using Student's t test. G, co-immunoprecipitation (IP) of endogenous HDAC1 with Snail1-, Snail1S11A-, and Snail1S11E-GFP from HeLa total cell lysates. Endogenous HDAC1 was probed with specific antibodies, and immunoprecipitations were reprobed for Snail1 expression by anti-Snail1 antibody. H, co-immunoprecipitation of endogenous HDAC2 with Snail1-, Snail1S11A-, and Snail1S11E-GFP in HeLa total cell lysates. Endogenous HDAC2 was probed with specific antibodies, and immunoprecipitations were reprobed for Snail1 expression. Error bars in graphs represent S.E. experiments. GFP vector, Snail1-GFP, or Snail1S11A-GFP constructs were ectopically expressed in HeLa cells for 48 h, and crude nuclear extracts were prepared using an HDAC activity assay kit (Cayman Chemical Co.). Equal amounts of extract were used in assays, and results were further normalized to GFP-Snail1 transgene expression present in nuclear lysates. In line with interaction studies, statistical analysis of three independent HDAC assays demonstrated reduced activity in cells expressing the Snail1S11A mutant by 21.37% compared with WT Snail1 (Fig. 5A). Expression of Snail1 transgene controls in crude nuclear lysates is shown in supplemental Fig. 3A. Because Snail1-associated HDAC activity contributes only partially to the total HDAC activity as demonstrated by inhibition with the HDAC inhibitor trichostatin (Fig. 5A), the extent of activity reduction by the S11A mutant is remarkable and also matches the markedly reduced HDAC1/2 and LOXL3 binding (Fig. 4, E-H, and supplemental Fig. 2E). Because we were interested in the role of Snail1 in the control of pancreatic cancer growth, we next investigated whether HDAC activity also translated into expression of marker proteins known to be involved in proliferation (Fig. 5, B and C). Thus, we investigated proliferation markers regulated downstream of Snail1 and PKD1 in Panc1 and the stable Panc89 cell lines. Panc1 cells were transiently transfected with GFP vector, Snail1-GFP, or Snail1S11A-GFP and changes in Cyclin D1 expression levels were observed (29 -31). Cyclin D1 was markedly up-regulated by Snail1-GFP, FIGURE 5. Snail1-dependent histone deacetylase activity and regulation of proliferation markers. A, Snail1S11A reduces Snail1-associated HDAC activity as compared with wild type Snail1. HDAC activity was measured using a fluorometric assay kit. Crude nuclear extracts from 10 ϫ 10 6 HeLa nuclei were normalized for protein expression, and HDAC activity was measured in triplicate wells per condition in 96-well plates (Tecan infinity M1000) for GFP vector, Snail1-GFP, and Snail1S11A-GFP. For Snail1-GFP and Snail1S11A-GFP, results were further normalized to GFP transgene expression levels in crude lysates. The graph depicts the combined statistical analysis of three experiments. Statistical significance was calculated using one-way ANOVA with Bonferroni multiple comparison post-testing. Expression of transgenes in HeLa crude nuclear extracts and loading controls are shown in supplemental Fig. 3C. B, Snail1S11A impairs Snail1-mediated proliferation marker protein expression in Panc1 cells. Panc1 cells were transfected with GFP, Snail1-GFP, and Snail1S11A-GFP. Cyclin D1 and Ajuba markers involved in the regulation of proliferation were probed in 60 g of total cell lysates with specific antibodies. Transgenes were probed with anti-Snail1 antibody. Actin was used as a loading control. C, PKD1 and PKD1KD-GFP regulate proliferation marker protein levels in a pattern similar to that of phosphosite mutants. The expression levels of Cyclin D1 and Ajuba were probed with respective antibodies in 60 g of total cell lysates of stable Panc89 cells. Transgenes were detected with anti-GFP antibody. Tubulin was used as a loading control. D, expression of the Snail target gene Cyclin D2 is prominently up-regulated by ectopic PKD1, but not PKD2 expression. The graph displays fold change in regulation relative to respective vector controls. Quantitative real-time PCR for Cyclin D2 was performed on RNA isolated from stable Panc89 cells expressing GFP, PKD1-GFP, and PKD2-GFP. Four independent experiments were quantified in triplicate replica. Results were normalized to GAPDH and calculated according to the ⌬⌬CT-method. Statistical significance (**, p Ͻ 0.05) was calculated using one-way Anova with Bonferroni multiple comparison post-testing. Error bars in graphs represent S.E.
whereas expression of Snail1S11A reduced Cyclin D1 expression (vector, 1-fold; Snail1-GFP, 2.7-fold; Snail1S11A-GFP, 1.48-fold integrated band density; Fig. 4B). We further tested the effects of the phosphomimetic Snail1S11E mutant on Cyclin D1 expression. Indeed, Cyclin D1 was markedly up-regulated by Snail1S11E (supplemental Fig. 3B). To substantiate our findings, we additionally investigated a second proliferation marker, Ajuba. Interestingly, we found Ajuba to be a downstream target of Snail1. Ajuba is known to regulate cell cycle progression and G 2 /M transition by enhancing Aurora A kinase activity through direct interaction (32)(33)(34). Thus, Ajuba is involved in mitotic checkpoint control (34). Additionally, Aurora A and B kinases are overexpressed in cancer tissues and are potentially tumorigenic (32). Ajuba protein levels were upregulated by WT Snail1, whereas its expression was reduced by Snail1S11A (vector, 1-fold; Snail1-GFP, 2.22-fold; Snail1S11A-GFP, 1.78-fold integrated band density; Fig. 5B). To further validate our results, we assessed the regulation of the same markers in Panc89 cells stably expressing either vector, PKD1-GFP, or kinase-inactive PKD1KD-GFP (Fig. 5C). In line with Fig. 5B, Cyclin D1 was up-regulated 3.9-fold by PKD1-GFP, whereas its expression dropped 2.74-fold upon expression of PKD1KD-GFP. The expression of Ajuba was up-regulated by PKD1-GFP 2.9-fold compared with only 1.6-fold by PKD1KD-GFP. Thus, PKD1-mediated regulation of proliferation markers is similar to that of Snail1 and its phosphosite mutants. However, PKD1KD-GFP was not capable to act fully as a dominant negative construct in these experiments. This may be explained in line with the literature (23) by a prominent localization of PKD1KD-GFP at the trans-Golgi network as evidenced by FIGURE 6. A, PKD1, as opposed to PKD2, enhances anchorage-independent growth in soft agar experiments. We seeded 10,000 cells of stable Panc89 cell lines expressing GFP, PKD1-GFP, PKD1KD-GFP, PKD2-GFP, and PKD2KD-GFP in triplicate wells in 0.5% soft agar and documented assays after 13 days. A, the graph depicts the average number of colonies and S.E. per visual field documented at 10ϫ magnification for five independent experiments. Statistical significance (****, p Ͻ 0.0001) was calculated using a two-tailed unpaired Student's t test. B, panels AЈ-EЈ, examples of soft agar colonies documented for quantification. The scale bar represents 100 m. C, Snail1 expression enhances anchorage-independent growth of Panc1 cells as compared with vector control, whereas Snail1S11A reduces the number of colonies with respect to wild type Snail1. We transiently transfected 50,000 Panc1 cells and subsequently seeded cells in 0.5% soft agar in triplicate wells per assay and in three experiments. Assays were documented after 6 days. The graph depicts the average number of colonies and S.E. per well at 10ϫ magnification. Statistical significance (****, p Ͻ 0.0001) was calculated using one-way ANOVA with Bonferroni multiple comparison post-testing. Representative transgene expression and images of colonies are shown in supplemental Fig. 4, A and B. Error bars in graphs represent S.E. SEPTEMBER 21, 2012 • VOLUME 287 • NUMBER 39 strong co-localization with trans-Golgi network marker TGN46 (supplemental Fig. 3C).
Snail1 Mediates PKD1-induced Growth of Cancer Cells
To further assess whether the regulation of downstream targets by PKDs was indeed isoform-specific, we examined the regulation of another Snail target, Cyclin D2 (18), by qPCR in Panc89 cells expressing PKD1-or PKD2-GFP, respectively. In line with our previous findings, Cyclin D2 expression was upregulated 1329-fold by wild type PKD1, whereas its expression was only up-regulated 90.8-fold by wild type PKD2, only 7% of the effect of PKD1. These data confirm the selective regulation of Cyclin D2 by PKD1 as compared with PKD2 (Fig. 5D).
We next investigated whether these biochemical data would translate into biological readouts. At first, soft agar assays were performed to identify changes in anchorage-independent growth mediated by PKD1 and -2 isoforms or the respective kinase-inactive proteins.
PKD1, but Not PKD2, Enhances Anchorage-independent Growth in Panc89 Cells- Fig. 6A shows the statistical analysis of five independent soft agar experiments performed with Panc89 cells stably expressing GFP vector, PKD1-GFP, kinaseinactive PKD1 (PKD1KD-GFP), PKD2-GFP, or kinase-inactive PKD2 (PKD2KD-GFP), respectively (4,5). In line with our FRET studies and the biochemical data shown above, only wild type PKD1 significantly increased the average number of colonies per visual field by 31.4 times as compared with GFP vector cells. PKD1KD reduced the number of colonies as compared with PKD1-GFP by 3.6 times but still had a minor effect on FIGURE 7. Snail1 is a necessary and sufficient mediator of PKD1-regulated anchorage-independent growth and proliferation in pancreatic cancer cells. Stable Panc89 cells expressing GFP vector and PKD1-GFP were transduced with lentiviruses expressing non-target shRNA (scrambled; Sigma-Aldrich), sh_Snail1 1 (NM_005985.2-136s1c1, Sigma-Aldrich), and sh_Snail1 2 (NM_005985.2-504s1c1, Sigma-Aldrich) and subjected to antibiotic selection. Then we used 10,000 cells of stable cell lines expressing the respective constructs and shRNAs and seeded cells in triplicate wells in 0.5% soft agar. Assays were documented after 10 days at 4ϫ magnification for colony counting. A, the graph depicts the combined average number of colonies per visual field of three experiments with six images at 4ϫ magnification per well and three replicate wells per experiment. B, exemplary images (panels AЈ-FЈ) used for quantification of colony numbers at 4ϫ magnification. The scale bar represents 100 m. Table 1 displays average differences (%) in colony number between conditions. Statistical significance (****, p Ͻ 0.0001) was calculated using one-way ANOVA with Bonferroni multiple comparison post-testing. C, control blots for knockdown efficacy of endogenous Snail1 with sh_Snail1 1 and 2 in stable Panc89 cells. Snail1 expression levels were probed in 60 g of total cell lysates using anti-Snail1 antibody. Tubulin was used as a loading control. Error bars in graphs represent S.E.
TABLE 1 Regulation of anchorage-independent growth by PKD1 and Snail1 shRNAs
Relative differences (%) in colony numbers are indicated by positive and negative values, respectively.
Snail1 Mediates PKD1-induced Growth of Cancer Cells
anchorage-independent growth of Panc89 cells, which correlates well with the data on proliferation marker expression (Fig. 5B). In contrast, PKD2 had no effect on anchorage-independent growth (Fig. 6, A and B, panels DЈ and EЈ). Fig. 6B, panels AЈ-EЈ, show representative colonies documented for quantification. In addition to the significant increase in colony number, colony size was also markedly increased upon expression of PKD1 (Fig. 6B, panel BЈ). Thus, our results again indicate a SEPTEMBER 21, 2012 • VOLUME 287 • NUMBER 39 PKD1 isoform-specific regulation of anchorage-independent proliferation in pancreatic cancer cells. We additionally performed soft agar experiments with transiently transfected Panc1 cells expressing GFP vector, WT Snail1-GFP, and the S11A mutant construct (Fig. 6C). In line with previous experiments performed with PKD1, anchorage-independent growth in Panc1 cells was significantly enhanced by WT Snail1 (3.65 times) as compared with vector control (****, p Ͻ 0.0001) and reduced by 44% upon expression of the S11A mutant compared with WT Snail1 (****, p Ͻ 0.0001) (Fig. 6C). Colonies documented for the respective conditions are depicted in supplemental Fig. 4A (panels AЈ-CЈ), and transgene expression of total cell lysates is shown in supplemental Fig. 4B. Snail1 Is Required to Mediate PKD1-regulated Effects on Anchorage-independent Growth-To demonstrate that PKD1mediated Snail1 phosphorylation is required for PKD1-induced anchorage-independent growth of pancreatic cancer cells, we performed soft agar assays with Panc89 cells stably expressing vector or PKD1-GFP with two different shRNAs against Snail1 as well as a non-targeting scrambled control. Fig. 7A displays the summarized statistical analysis of three soft agar assays with Panc89 cells. PKD1 expression increased anchorage-independent growth as compared with vector cells by 84.6%. In vector cells, knockdown of Snail1 reduced the average number of colonies per visual field by 60.4% for sh_Snail1 1 and 53.6% for sh_Snail1 2. For PKD1-expressing cells, Snail1 knockdown reduced the number of colonies by 68.5% for sh_Snail1 1 and 82.8% for sh_Snail1 2 (Table 1). We also quantified colony size. PKD1 expression enhanced colony size, and this was reduced by knockdown of Snail1 (data not shown). Examples of images used for quantification of colony numbers at 4ϫ magnification are shown in Fig. 7B for all conditions (panels AЈ-FЈ). The respective knockdown controls for Snail1 in the stable cell lines are shown in Fig. 7C. In conclusion, these data indicate that Snail1 as a downstream target of PKD1 is required to regulate anchorage-independent growth of pancreatic cancer cells by Ser-11 phosphorylation.
Snail1 Mediates PKD1-induced Growth of Cancer Cells
PKD1 Enhances Whereas PKD1KD Inhibits Panc89 Tumor Cluster Growth in Three-dimensional BME Culture-To investigate a PKD1-dependent regulation of anchorage-dependent tumor cluster growth and proliferation, we performed threedimensional BME culture using stable Panc89 cells expressing PKD1-and PKD1KD-GFP. Vector, PKD1-GFP, and PKD1KD-GFP cells were seeded in BME and documented after 16 days of growth. Fig. 8A displays representative examples of tumor cell clusters used for the assessment of three-dimensional growth (diameter). In line with the soft agar assays, the average size of tumor cell clusters with stable ectopic expression of PKD1-GFP was significantly increased by 10.1% as compared with GFP vector-expressing cells (***, p Ͻ 0.0005) (Fig. 8, A and B). PKD1KD-GFP significantly reduced the average cluster diameters by 10.3% when compared with vector controls (****, p Ͻ 0.0001) (Fig. 8B), indicating that PKD1 is also involved in the regulation of anchorage-dependent growth of pancreatic tumor cell clusters. Fig. 8, C and D, depict the respective frequency distribution histograms of tumor cluster diameters for PKD1-GFP and PKD1KD compared with GFP control cells. These data demonstrate that PKD1 expression resulted in a higher percentage of larger clusters, whereas PKD1KD-expressing cells formed smaller colonies. To corroborate these data, we performed proliferation assays with HeLa cells to investigate a general regulation of anchorage-dependent proliferation by Snail1 Ser-11 phosphorylation. GFP vector, WT Snail1, and the Snail1S11A mutant were transiently expressed in HeLa cells, and proliferation was quantified by measuring A 550 values of crystal violet-stained cells at time points T0, T24, and T48 h. 48 h after transfection WT Snail1 markedly decreased doubling times from 56.25 to 35.65 h (vector versus Snail1-GFP), enhancing proliferation, whereas expression of Snail1S11A had virtually no effect on the doubling time (52.17 h) (Fig. 8E). Transgene expression is shown in supplemental Fig. 5. Thus, PKD1-dependent phosphorylation of Snail1 at Ser-11 is involved in controlling anchorage-dependent and -independent growth and proliferation in two-dimensional and three-dimensional environments. To further validate our data on the regulation of proliferation by PKD1, we performed lentivirus-mediated knockdown experiments in GFP vector cells followed by three-dimensional BME culture (Fig. 8, F-H). Clusters were documented after 32 days (Fig. 8G). In line with all previous data, knockdown of PKD1 resulted in drastically reduced cluster sizes (diameters) of 38.3% in PKD1 knockdown cells (Fig. 8H). Specific knockdown of PKD1, but not PKD2, was verified by isoform-specific antibodies (Fig. 8F). Frequency distribution histograms show a shift to smaller cluster diameters following knockdown of PKD1 (Fig. 8I). Taken together, these findings indicate that PKD1 enhances proliferation and FIGURE 8. Three-dimensional growth in BME. A, panels AЈ-CЈ, 10,000 single cells of stable Panc89 cell lines expressing GFP, PKD1-GFP, and PKD1KD-GFP were seeded in a BME gel and documented in assays after 16 days. The scale bar represents 100 m. B, PKD1 significantly enhances clusters growth, whereas PKD1KD decreases cluster size. The average diameter of tumor cell clusters was quantified in perpendicular directions for each cluster using spacial calibration of images (for vector, n ϭ 150; for PKD1-GFP, n ϭ 161; and for PKD1KD-GFP, n ϭ 181). The graph depicts average diameters and S.E. of three experiments. C, frequency distribution histograms of structure diameters for vector versus PKD1-GFP. D, frequency distribution histogram of structure diameters for vector versus PKD1KD-GFP. E, Snail1 enhances whereas S11A mutation inhibits proliferation in HeLa cells after 48 h. The combined analysis of three independent proliferation assays was performed with transiently transfected cells expressing vector, Snail1-GFP, and Snail1S11A-GFP. Cells were seeded after 24 h at a density of 5000 cells/well in triplicate replicas in 96-well plates. Cell density was quantified by measuring A 550 of crystal violet-stained cells dissolved in methanol at time points T0, T24, and T48 h. The graph depicts the relative mean intensities for the respective cell lines after 24 and 48 h, respectively. Statistical significance was calculated using an unpaired Student's t test. Doubling times were calculated using linear regression (GraphPad Prism). Representative transgene expression is shown in supplemental Fig. 5. F, Panc89 GFP vector cells were transduced with lentiviruses expressing scrambled control shRNA (Sigma-Aldrich) and sh_PKD1 (NM_002742.x-2978s1c1, Sigma-Aldrich). A PKD1 knockdown was probed using a specific anti-PKD1 antibody in semistable cell lines following selection. Blots were reprobed for PKD2 expression, and Actin was used as a loading control. G, semistable Panc89 vector sh_scramble-and sh_PKD1expressing cells were seeded at 10,000 single cells in BME gel and documented after 32 days. The scale bar represents 100 m. H, knockdown of PKD1 significantly reduces clusters growth (diameter). The average diameter of tumor cell clusters was quantified in perpendicular directions for sh_scramble (n ϭ 45) and sh_PKD1 (n ϭ 84). The graph depicts average diameters and S.E. of three experiments. Numbers in the graph denote -fold change in percent. I, frequency distribution histogram for knockdown of PKD1 versus scrambled shRNA control. Knockdown of PKD1 significantly reduces cluster sizes in the BME matrix. Error bars in graphs represent S.E.
anchorage-dependent growth of tumor cell clusters in threedimensional culture. By contrast, PKD1KD-GFP or knockdown of PKD1 significantly inhibited proliferation, and this was mediated by phosphorylation of Snail1 at Ser-11.
DISCUSSION
PKDs are involved in the regulation of important cellular features such as proliferation (10,11,13,(35)(36)(37), motility, and invasiveness (2-5) of different tumor types. However, specific and detailed functions for distinct PKD isoforms have not been addressed so far. In a previous work, Ochi et al. (38) have proposed a function for PKD1 in the regulation of anchorage-dependent growth. However, the properties of distinct PKD isoforms were not directly compared or addressed by inhibitors that are not isoform-specific. Thus, it is as yet unclear whether PKD isoforms act in a redundant or specific fashion in tumors.
Our findings indicate that PKD1, as opposed to PKD2, regulates the expression of marker proteins involved in a hyperproliferative phenotype such as Cyclins D1 and D2 (29, 31) as well as Ajuba (33,34) via phosphorylation of Snail1 at serine 11 in pancreatic cancer cells. Our data also suggest that phosphorylation at this site is necessary for efficient binding of vital Snail1 co-repressors such as HDAC2 modulating Snail1-dependent HDAC activity. In contrast to Du et al. (16), Snail1 phosphorylation at Ser-11 did not affect nucleocytoplasmic shuttling of the protein. This may be explained by 14-3-3 down-regulation in many tumor cells by different mechanisms (39) including promotor methylation and inhibition downstream of p53 mutations, thereby facilitating cancer formation by many routes (40). Indeed, PKD1KD was not able to induce nuclear localization of primarily cytoplasmic Snail1 in non-transformed, immortalized HEK293T cells (supplemental Fig. 1B).
Here we propose a different mechanism for the regulation of Snail1 function by PKD1 in tumor cells: the phosphorylationdependent binding of co-repressors such as HDAC2 to Snail1. In addition to regulation of HDAC activity, we identified another regulatory mechanism induced by PKD1 that affects Snail1 function: PKD1 is required for up-regulation of LOXL3, which can stabilize the Snail1 protein (Fig. 4, A, B, and C, and supplemental Fig. 2E).
In conclusion, our data demonstrate that PKD1 enhances proliferation (10,36) of pancreatic and other cancer cells, and this regulation is mediated by Snail1 via phosphorylation at Ser-11. Snail1 is therefore required and sufficient for PKD1driven proliferation and anchorage-independent growth of different tumor cells. An overview of PKD1-mediated Snail1 regulation and control of biological effects is depicted in Fig. 9.
Thus, PKD1 expression could be relevant for primary tumors to drive proliferation and initiate epithelial-mesenchymal transition, preparing cells for the dissemination phase. At later stages, however, when cells are invading the surrounding matrix or tumor stroma, loss of PKD1 activity could even be beneficial because loss of PKD1 enables cells to acquire a high motility phenotype via the regulation of Actin-regulatory proteins such as Cortactin and Slingshot1L. This is also further supported by reports showing a reduced expression of PKD1 in a number of invasive tumor cells and tumor tissues (2). | 9,805.6 | 2012-07-12T00:00:00.000 | [
"Biology"
] |
Periodontal Inflammation and Dysbiosis Relate to Microbial Changes in the Gut
Periodontal disease (PerioD) is a chronic inflammatory disease of dysbiotic etiology. Animal models and few human data showed a relationship between oral bacteria and gut dysbiosis. However, the effect of periodontal inflammation and subgingival dysbiosis on the gut is unknown. We hypothesized that periodontal inflammation and its associated subgingival dysbiosis contribute to gut dysbiosis even in subjects free of known gut disorders. We evaluated and compared elderly subjects with Low and High periodontal inflammation (assessed by Periodontal Inflamed Surface Area (PISA)) for stool and subgingival derived bacteria (assayed by 16S rRNA sequencing). The associations between PISA/subgingival dysbiosis and gut dysbiosis and bacteria known to produce short-chain fatty acid (SCFA) were assessed. LEfSe analysis showed that, in Low PISA, species belonging to Lactobacillus, Roseburia, and Ruminococcus taxa and Lactobacillus zeae were enriched, while species belonging to Coprococcus, Clostridiales, and Atopobium were enriched in High PISA. Regression analyses showed that PISA associated with indicators of dysbiosis in the gut mainly reduced abundance of SCFA producing bacteria (Radj = −0.38, p = 0.03). Subgingival bacterial dysbiosis also associated with reduced levels of gut SCFA producing bacteria (Radj = −0.58, p = 0.002). These results suggest that periodontal inflammation and subgingival microbiota contribute to gut bacterial changes.
Introduction
Periodontal disease is a chronic, inflammatory condition present in more than 50% of the population [1].It results from the interaction between subgingival dysbiotic bacteria and the host immune response [2][3][4] leading to local inflammation characterized by tissue infiltration with immune cells and high proinflammatory cytokines such as IL-8, IL-1β, IL-6, and TNFα [5] and systemic inflammation.
There is increasing evidence that the oral microbiome and periodontal inflammation play a significant role in systemic diseases, including gut disorders characterized by gut microbial dysbiosis [6,7].The gut microbiome is the most abundant and diverse microbiome.In the human gut, there are >1000 different bacterial species [8] making up about 2 million genes (the microbiome).The gut with its microbiome is contiguous with the oral cavity, which has the second most abundant microbiome.Therefore, multiple anatomical and physiologic communications exist between the two sites [9].In the gut, dysbiotic states are, in general, associated with a decrease in bacterial diversity (number of different bacterial species) and decrease in beneficial bacteria such as those with anti-inflammatory properties, those producing short-chain fatty acids (SCFA) such as acetate, propionate, and butyrate, or those with an intestinal-barrier-protecting effect [9][10][11].In animal models, gut SCFAproducing bacteria have been shown to be decreased by ligature-induced periodontitis and increased by nonsurgical periodontal treatment, emphasizing the importance of periodontal inflammation in modulating gut SCFA-producing bacteria [12].
Clinical data linking periodontal disease and gut dysbiosis are shown particularly in conditions with pre-existing gut pathology [13][14][15].In its absence, only a few small studies showed differences in the gut microbial composition between subjects with and without periodontal disease [16].
While the mechanism by which oral bacteria and inflammation can contribute to gut disorders is unknown, there is considerable evidence showing that periodontal disease can change gut bacterial composition.Periodontal-derived bacteria and inflammatory cytokines may directly reach the gut or indirectly by getting access to the systemic circulation and then reaching the gut.Recently, we have shown that clinical periodontal inflammation correlated with salivary cytokines, demonstrating a strong local oral inflammation related to periodontal inflammation [17].Therefore, it is conceivable that clinically defined periodontal inflammation may impact gut health.
The present cross-sectional study tested the hypothesis that, in elderly subjects free of gut conditions/diseases, clinical periodontal inflammation and periodontal bacteria are directly associated with gut dysbiosis and inversely associated with gut bacteria known to produce SCFA ("healthy bacteria").
Study Design and Population
This is a cross-sectional study in which the subjects were recruited from an existing cohort and their characteristics were described previously [18,19].Thirty-six (36) subjects that had both clinical periodontal measures and stool samples were included in this study.Among them, 26 subjects also had measures of subgingival microbiota.Inclusion Criteria: All included subjects had reported ≥12 years of education.Exclusion Criteria: Individuals were excluded if they had significant history or medical conditions of stroke, diabetes, uncontrolled hypertension, head trauma, any neurodegenerative disease, and chronic depression.Subjects taking anti-inflammatory medications for chronic conditions (i.e., NSAIDS and anti-TNFα) or antibiotics or having periodontal treatment within 3 months of the periodontal evaluation were also excluded.All dental exams and sample collections were standardized.Subgingival and stool sample collection and processing were conducted independently and therefore blinded from each other.
Dependent Variables
Primary outcomes were derived from the gut microbiota in stool samples.Using Lefse analysis [20], we identified gut bacteria at the species level that differed between the groups with high and low levels of clinical periodontal inflammation: High PISA and Low PISA.Gut pathogenic bacteria were defined as those that were abundant in High PISA, while gut beneficial bacteria were defined as those most abundant in the Low PISA group.Then, we constructed a gut dysbiotic index (Gut-DI) defined as the ratio of the mean of gut pathogenic bacteria to the mean of beneficial bacteria.This approach is modeled after other indexes published in the literature [21,22].
Secondary outcomes: Short-chain fatty acid (SCFA) bacterial index was derived by the cumulative mean of each of butyrate, propionate, and acetate bacterial indexes in the gut microbiome.These SCFAs are most commonly associated with health benefits [23,24].All these indexes were derived by averaging bacterial abundances that are known to produce the respective SCFA as summarized by Akhadar [25].Therefore, butyrate bacterial index was derived from the following bacterial abundances: Ruminococcus_bromii, Anaerostipes_s, Coprococcus_eutactus, Roseburia_s, and Faecalibacterium prausnitzii.Propionate bacterial index was derived from the following bacterial abundances: Akkermansia_muciniphila, Ruminococcus_Other_A, Bacteroides_s, Coprococcus_eutactus, Roseburia_s, and Dialister_s and acetate bacterial index was formed from Bifidobacterium_s, Bacteroides_s, Streptococcus_s, Clostridium_s, Blautia_s, Ruminococcus_s, and Akkermansia muciniphila.
Independent Variables
Independent variables (main exposure) were the clinical measures of periodontal disease: periodontal inflamed surface area (PISA) scores.PISA scores were calculated from periodontal pocket depth (PD) and bleeding on probing (BOP) using the formula from Nesse's publication (http://www.parsprototo.info/(accessed on 5 June 2024)) [26] accessed February, 2024.PISA scores [26] were dichotomized into High (pathogenic) vs. Low PISA (nonpathogenic) groups as we previously described using 450 mm 2 as the threshold [17].Based on this cut-off, 12 subjects were High PISA and 24 subjects were Low PISA.
The secondary independent variable (exposure) was the subgingival dysbiotic index (Subgingival-DI), as published previously by us and others [19].It was defined as the abundance ratio at the genus level of bacteria associated with periodontal disease (Treponema, Porphyromonas, and Tannerella) to healthy bacteria (Rothia and Corynebacterium) [19].Higher numbers indicate a less healthy oral microbiome.
Clinical Evaluations and Sample Collection
Periodontal exam: Subjects received an oral-periodontal examination, as previously described [17].Briefly, this exam encompassed examination of 6 surfaces of each tooth for probing depth, clinical attachment loss (CAL), and bleeding on probing (BOP).Pocket depth measures were assessed at six sites per tooth using a Michigan probe and defined as the linear distance in millimeters from the gingival margin to the base of the periodontal pocket in millimeters.Bleeding on probing (BOP) was assessed at each probing site after the quadrant probing.Demographic (age, gender, and education), systemic factors (comorbidities), oral (brushing, dentist visits, and prophylaxis), and social (smoking) measures were obtained by a standardized examiner-conducted interview at the time of the oral examination [18].Smoking was defined as never smoking vs. current/past smoking.Brushing was classified as brushing once/day vs. >once/day.Prophylaxis was defined by having cleanings each 3 months or at intervals > 3 months.
Subgingival sample collection: Subgingival bacterial samples were collected from the four deepest periodontal pockets, as previously described [27].The samples were pooled into one vial and stored at −80 • C.
Stool was collected as previously published [28].Subjects were provided with a stool collection kit containing detailed written instructions, a collecting hat, a stool collecting kit (ALPCO), and gel ice packs.Stool was collected at home and brought to the NYUCD at the appointment within 24 h.Stool processing and storage was standardized and samples were stored at −80 • C until sequencing.
Microbiome Assessment and Analyses 16S rRNA Amplification and Sequencing
We used the 16SrRNA methodology as previously published in this study [29,30].Briefly, DNA was extracted from the subgingival plaque and stool samples.Using PCR, the V3-V4 region of 16S rRNA gene was amplified, sequenced, and the reads were clustered in operational taxonomic units (OTU) identifying bacterial ranking.We report our analyses at species levels.Alpha diversity was assessed for the observed OTUs and Shannon Diversity Index.
Statistical Methods
Statistical analyses were performed using IBM SPSS (v27, IBM Corp., Armonk, NY, USA).Continuous data are presented as means and standard deviation (SD) and categorical data as percentages.To evaluate biomarker group differences for continuous variables, t-test and Mann-Whitney U (MWU) test were used, whichever was appropriate.For categorical variables, Chi-Square tests were used.Normality was tested by Kolmogorov-Smirnov and log10 transformation was used to normalize the distributions for each microbial index.For microbiome analyses, we used the linear discriminant effect size analysis (LEfSe).LEfSe uses an algorithm that combines statistical with biological significance to reveal biomarker clusters [20].Effect size (LDA = linear discriminative analysis scores expressed in log 10 ) provides an estimation of the magnitude of the observed effect.The effect size (LDA) to estimate the magnitude of the observed effect was computed using the default settings, p < 0.05 and LDA ≥ 2 [20].
Statistical Analysis
Association between Gut-DI and SCFA bacterial indexes and PISA and Subgingival-DI were evaluated using multiple regression analyses adjusted for age.In initial models, we evaluated the association of potential confounders with the bacterial indexes: gender, education, BMI, behaviors (smoking, brushing, and dentist visits), systemic conditions (0 vs. 1 medical conditions).None of these were significant and, therefore, not included in the final models.Age was also not significant; however, due to its reported association with both gut bacterial dysbiosis [31,32] and PISA/Subgingival-DI, it was included in all final models.Given the exploratory nature of this study, an unadjusted p < 0.05 level of significance was used.
Results
The characteristics of our population are shown in Table 1.Our population was relatively homogeneous.Most were white, elderly, and highly educated.Females were more represented.Subjects were relatively healthy with >30% not reporting any medical conditions, while 60% reported ≤ 1 medical condition.Two subjects were current smokers.PISA scores were similar as a function of age, gender, education, BMI, smoking, or the number of systemic conditions.All subjects had CAL ≥ 5 mm and, therefore, they had stage III and IV periodontitis.There were no differences in the gut alpha diversity assessed by observed and Shannon indexes between PISA+ and PISA− groups (p = 0.12 and p = 0.80).
Gut Pathogenic and Beneficial Bacteria Were Differentially Enriched in High/Low PISA
Using LEfSe, we determined the most discriminative features between the 12 High and the 24 Low PISA groups at the species level.As shown in Figure 1, the species from taxa Coprococcus, Atopobium, and Clostridiales were abundant in High PISA, while Lactobacillus zeae, and species from genera Lactobacillus, Roseburia, and Ruminococcus were abundant in the Low PISA (red) group.Subjects (n = 36) that were included because they provided the information needed to complete these analyses were similar to those not included (n = 40) on measures of age (p = 0.91), years of education (p = 0.36), BMI (p = 0.65), PISA (p = 0.76), number of teeth (p = 0.80), and Perio staging diagnosis (p = 0.55).Thus, these subjects appear representative of the larger group.There were no differences in High and Low PISA in gender (p = 0.45), race (p = 0.71), smoking (p = 0.45), brushing (p = 0.55), prophy frequency (p = 0.55), or the presence of ≤1 medical condition (p = 0.81).
There were no differences in the gut alpha diversity assessed by observed and Shannon indexes between PISA+ and PISA− groups (p = 0.12 and p = 0.80).
Gut Pathogenic and Beneficial Bacteria Were Differentially Enriched in High/Low PISA
Using LEfSe, we determined the most discriminative features between the 12 High and the 24 Low PISA groups at the species level.As shown in Figure 1, the species from taxa Coprococcus, Atopobium, and Clostridiales were abundant in High PISA, while Lactobacillus zeae, and species from genera Lactobacillus, Roseburia, and Ruminococcus were abundant in the Low PISA (red) group.
Gut-DI Correlates with Firmicutes-to-Bacteroidetes Ratio
Although there are dissenters [21], Firmicutes/Bacteroidetes (F/B) ratio is accepted as an important index signaling pathogenic intestinal changes/gut dysbiosis [33].Therefore, we tested whether Gut-DI correlated with F/B ratio.As shown in Figure 2, there was a direct correlation between the Gut-DI constructed using Lefse and F/B ratio with R = 0.36, p = 0.04.
Gut-DI Correlates with Firmicutes-to-Bacteroidetes Ratio
Although there are dissenters [21], Firmicutes/Bacteroidetes (F/B) ratio is accepted as an important index signaling pathogenic intestinal changes/gut dysbiosis [33].Therefore, we tested whether Gut-DI correlated with F/B ratio.As shown in Figure 2, there was a direct correlation between the Gut-DI constructed using Lefse and F/B ratio with R = 0.36, p = 0.04.Gut dysbiotic index correlates with Firmicutes-to-Bacteroides ratio.Although there is no consensus definition for gut dysbiosis [21], Firmicutes/Bacteroidetes (F/B) ratio is accepted as an important index of gut dysbiosis [33].We showed that Gut-DI defined in our study correlated with Firmicutes-to-Bacteroides ratio (R = 0.36, p = 0.04).
Gut-DI Inversely Associated with Gut SCFA Bacterial Index
A growing body of evidence suggests that some bacteria are particularly beneficial through SCFA production [25,34].Therefore, we sought to determine if Gut-DI relates to SCFA-producing bacteria.In regression analyses, the gut SCFA bacterial index was predicted by Gut-DI (Radj = −0.43,p = 0.01); Figure 3 shows the relationship between the Gut-DI and SCFA bacterial index: as Gut-DI increased, the abundances of gut SCFA bacteria decreased.Among the three components of the gut SCFA bacterial indexes, the propionate (Radj = −0.34,p = 0.044) and butyrate (Radj = −0.55,p = 0.001) significantly associated with Gut-DI, while the acetate index was approaching significance (Radj = −0.29,p = 0.091).Gut Dysbiotic Index Firmicutes to Bacteroides Ratio R=0.36, p= 0.04 Figure 2. Gut dysbiotic index correlates with Firmicutes-to-Bacteroides ratio.Although there is no consensus definition for gut dysbiosis [21], Firmicutes/Bacteroidetes (F/B) ratio is accepted as an important index of gut dysbiosis [33].We showed that Gut-DI defined in our study correlated with Firmicutes-to-Bacteroides ratio (R = 0.36, p = 0.04).
Gut-DI Inversely Associated with Gut SCFA Bacterial Index
A growing body of evidence suggests that some bacteria are particularly beneficial through SCFA production [25,34].Therefore, we sought to determine if Gut-DI relates to SCFA-producing bacteria.In regression analyses, the gut SCFA bacterial index was predicted by Gut-DI (Radj = −0.43,p = 0.01); Figure 3 shows the relationship between the Gut-DI and SCFA bacterial index: as Gut-DI increased, the abundances of gut SCFA bacteria decreased.Among the three components of the gut SCFA bacterial indexes, the propionate (Radj = −0.34,p = 0.044) and butyrate (Radj = −0.55,p = 0.001) significantly associated with Gut-DI, while the acetate index was approaching significance (Radj = −0.29,p = 0.091).
PISA Inversely Associated with Gut SCFA Bacterial Index
We tested the hypothesis that PISA will also impact the SCFA-producing bacteria.In regression analyses, the gut SCFA bacterial index was predicted by PISA score (Radj = −0.38,p = 0.03).Figure 4 shows the relationship between PISA and the SCFA bacterial index: as PISA scores increased, the abundances of SCFA-producing bacteria decreased.Among the three SCFA-producing bacteria, PISA associated significantly with the propionate (Radj = −0.42,p = 0.01) and acetate (Radj = −0.34,p = 0.045) but not with butyrateproducing bacteria (Radj = −0.14, p = 0.41).
PISA Inversely Associated with Gut SCFA Bacterial Index
We tested the hypothesis that PISA will also impact the SCFA-producing bacteria.In regression analyses, the gut SCFA bacterial index was predicted by PISA score (Radj = −0.38,p = 0.03).Figure 4 shows the relationship between PISA and the SCFA bacterial index: as PISA scores increased, the abundances of SCFA-producing bacteria decreased.Among the three SCFA-producing bacteria, PISA associated significantly with the propionate (Radj = −0.42,p = 0.01) and acetate (Radj = −0.34,p = 0.045) but not with butyrate-producing bacteria (Radj = −0.14, p = 0.41).
Main Findings
In elderly subjects, free of gut disorders, clinical periodontal inflammation was associated with indicators of dysbiosis in the gut.These changes were specifically related to reduced activity in healthy gut bacteria that produce SCFA.Subgingival bacterial dysbiosis was also associated with reduced levels of gut SCFA bacteria.These results suggest that periodontal inflammation and subgingival microbiota contribute to gut dysbiosis and are consistent with animal models showing modulation of gut dysbiosis by induction and
Main Findings
In elderly subjects, free of gut disorders, clinical periodontal inflammation was associated with indicators of dysbiosis in the gut.These changes were specifically related to reduced activity in healthy gut bacteria that produce SCFA.Subgingival bacterial dysbiosis was also associated with reduced levels of gut SCFA bacteria.These results suggest that periodontal inflammation and subgingival microbiota contribute to gut dysbiosis and are consistent with animal models showing modulation of gut dysbiosis by induction and treatment of periodontal disease [12].
Clinical Periodontal Inflammation and Gut Dysbiosis
We tested the hypothesis that clinical periodontal inflammation affects gut bacterial changes.We found that, in subjects with high clinical periodontal inflammation, gut microbiota were characterized by enrichment mainly in bacteria associated with gut pathology, while, in those with low clinical periodontal inflammation, the gut bacteria were enriched in beneficial bacteria.The gut pathogenic bacteria enriched in High PISA such as Clostridia [35] and Atopobium [36] lead to gut inflammation, increase permeability, translocation of bacteria and lipopolysaccharides (LPS) to the systemic circulation [37], and consequent systemic inflammation [38][39][40].Coprococcus species, in contrast, have been associated with health benefits due to their butyrate production.We can speculate that their enrichment in High PISA may be due to specific coprococcus species/subspecies that may have pathogenic capabilities or their enrichment is a reaction to the presence of a high dysbiotic environment.Low PISA was enriched in beneficial bacteria whose effects are contrary to the pathogenic bacteria.For example, Lactobacillus_s_ and Lactobacillus zeae are known for maintaining IBD remission [41].Species from Lactobacillaceae family produce lactic acid as the final product of glucose fermentation, with immune health and pathogenic bacteria inhibitory effects [41].In fact, Lactobacillus is considered probiotic due to its benefits, safety profile, and its production of acetate [42].Other beneficial bacteria such as Rosburia and Ruminococcus are known for SCFA production (see below).
Clinical periodontal inflammation is a characteristic of periodontal disease and can be assessed by PISA scores using current measures of periodontal disease: PD and BOP [26,43].PISA has been found to associate with plasma CRP [44], a marker of systemic inflammation and several systemic conditions [45][46][47], and correlates with salivary cytokines [17].In addition to inflammation, periodontal disease is characterized by periodontal tissue destruction as a pathognomonic feature.Periodontal disease expression depends on the interaction between periodontal bacteria and host immune response.It affects more than 50% of people over 50 [3,4].
There are no studies directly linking clinical periodontal inflammation (assessed by PISA) to the gut.However, studies showed the importance of periodontal disease in gut pathology and is supported by periodontal animal models demonstrating gut dysbiosis, oral bacterial translocation to the gut, gut inflammation (increased CRP, Th1, and Th17 cytokines), increased permeability, systemic inflammation, and distant pathology such as AD pathology [48][49][50].Ectopic colonization of oral bacteria in the intestine leads to TH1 cell induction and inflammation [51].Transplanting saliva from patients with severe periodontitis into mice changed gut microbiota composition with higher abundance of Porphyromonadaceae and Fusobacterium and lower Akkermansia compared to controls (no perio) [13].Reduced periodontal inflammation in animal models reduced gut infiltration with immune cells and production of inflammatory cells, particularly Th1-and Th17-related cytokines [52].
The gut microbiota is highly complex and there is significant variation among individuals.Therefore, to date, there is no gold standards to define healthy gut microbiota or gut dysbiosis [53].However, several studies defined a change in the gut Firmicutesto-Bacteroides ratio as a gut imbalance that was linked to obesity [54,55], metabolic syndrome [56], or autism [57].We find that our Gut-DI correlated with Firmicutes-to-Bacteroides ratio, suggesting that our Gut-DI reflects gut dysbiosis at higher ranking levels.
We hypothesized that clinical periodontal inflammation would also change the abundance of selective gut SCFA-producing bacteria.The inverse relationship between Gut-DI and SCFA bacterial index is not surprising.While dysbiotic index serves as a surrogate for bacterial dysbiosis, dysbiosis or normobiosis are defined by the composition and functional make-up of the whole bacterial community and not by just a few bacteria composing the dysbiotic index.Therefore, as expected, as dysbiotic index increased, the beneficial bacteria decreased.
Subgingival Dysbiosis and Gut Dysbiosis
Our data showed that subgingival dysbiosis inversely associated with gut SCFA bacterial index, further supporting the hypotheses that periodontal disease associated with clinical periodontal inflammation and periodontal bacteria associated with gut microbial changes.
Over 200 bacterial species colonize the subgingival biofilm and, among them, several are enriched in periodontal disease (i.e., Porphyromonas gingivalis (PG) and Treponema denticola), while others (i.e., Rothia and Corynebacterium) are enriched in periodontically healthy subjects [64].One recent study found that subgingival dysbiotic index defined as a ratio of Treponema, Porphyromonas, and Tannerella to healthy bacteria (Rothia and Corynebacterium) [19] associated with periodontal disease and PISA.We used this index in a previous study to show correlations with a systemic effect [19].
The mechanism by which periodontal-disease-associated inflammation/dysbiosis may induce gut dysbiosis is unknown.However, we can speculate that oral bacteria and associated inflammatory molecules directly or indirectly can affect the gut bacterial composition.Approximately ~1 L to 1.5 L of saliva containing ~10 11 oral bacterial cells daily reaches the intestinal tract.While many oral bacteria are destroyed, evidence showed that a significant number reach the gut [65], stimulate the immune system, and, with predisposing influences, can cause pathology [51].The indirect mechanism implies that bacteria and inflammatory cytokines obtain access to the systemic circulation and then reach the gut or influence gut response.
We published previously that two salivary indexes composed of six cytokines correlated with PISA [17].In the present study, salivary cytokine indexes failed to correlate with gut measures and thus failed to support the direct effect of oral cytokines on the gut.
Another possible mechanism is via oral bacteria.It is possible that periodontal bacteria reach the gut directly.Clinical data linking periodontal disease and gut dysbiosis are shown particularly in conditions with pre-existing gut pathology [14,15,[66][67][68][69].We tested whether Subgingival-DI associated with Gut-DI.While, in this small sample, Subgingival-DI failed to correlate with Gut-DI, in regression analyses, Subgingival-DI associated with gut SCFA bacterial index, suggesting that the subgingival periodontal bacteria effect may be on SCFA-producing bacteria.It is possible that SCFA-producing bacteria are more sensitive to subgingival dysbiosis or the study is too small to detect these changes.Gut-DI correlated with SCFA bacterial index, suggesting that SCFA may drive gut dysbiosis.Therefore, we propose a model (Figure 6) in which A. subgingival pathogenic bacteria induce reductions in the gut SCFA-producing bacteria and, therefore, the amount of gut SCFA.Lower gut SCFA, in turn, regulate gut dysbiosis with negative consequences on the gut.B. Periodontal inflammation independently or interactively regulates gut SCFA-producing bacteria in addition to contributing to gut dysbiosis.These results are consistent with other studies.In an animal model, ligature-induced periodontitis-induced gut dysbiosis, intestinal pathology, and changes in bacterial function.Moreover, 4 weeks after, a nonsurgical periodontal treatment partially restored the gut microbiota toward health.Consistent with our study, periodontal treatment increased bacteria with SCFA-producing effects, again suggesting that subgingival bacteria effects are targeted towards SCFA bacteria [12].This model should be tested in larger, longitudinal studies.
gut SCFA bacterial index, suggesting that the subgingival periodontal bacteria effect may be on SCFA-producing bacteria.It is possible that SCFA-producing bacteria are more sensitive to subgingival dysbiosis or the study is too small to detect these changes.Gut-DI correlated with SCFA bacterial index, suggesting that SCFA may drive gut dysbiosis.Therefore, we propose a model (Figure 6) in which A. subgingival pathogenic bacteria induce reductions in the gut SCFA-producing bacteria and, therefore, the amount of gut SCFA.Lower gut SCFA, in turn, regulate gut dysbiosis with negative consequences on the gut.B. Periodontal inflammation independently or interactively regulates gut SCFA-producing bacteria in addition to contributing to gut dysbiosis.These results are consistent with other studies.In an animal model, ligature-induced periodontitis-induced gut dysbiosis, intestinal pathology, and changes in bacterial function.Moreover, 4 weeks after, a nonsurgical periodontal treatment partially restored the gut microbiota toward health.Consistent with our study, periodontal treatment increased bacteria with SCFA-producing effects, again suggesting that subgingival bacteria effects are targeted towards SCFA bacteria [12].This model should be tested in larger, longitudinal studies.Our subjects were free of any inflammatory bowel disease or any other gut disorders.Nevertheless, subjects with high clinical periodontal inflammation had gut bacterial changes.The clinical significance of these gut bacterial changes is unknown.It is possible that the subjects' gut condition is mild, occult and undetected, or unreported.It is also possible that the magnitude of the gut bacterial changes was not severe enough or not accompanied by predisposing factors to manifest as a gut clinical condition.In the absence of a gut clinical condition, the gut dysbiosis may have deleterious systemic effects.Gut dysbiosis is considered a major trigger of systemic inflammation both in animal models and in humans and gut dysbiosis has also been linked to several inflammatory conditions such as obesity [54], metabolic syndrome [70], diabetes [71], cardio-vascular disease [72], and AD [73,74].And still another possibility is that our stool sampling may be an early indicator of gut dysbiosis.
Strengths and Weaknesses
Several strengths characterized our study.Our sample was quite homogeneous, consisting of elderly, well-educated, and relatively healthy individuals.None had gut disorders.Periodontal measures and subgingival and stool collections were standardized and the microbiome assessments were determined blindly to the periodontal assessment.
There are several limitations related to our study that include the design, population characteristics, and sample size.As a cross-sectional study, it shows only a correlation of periodontal inflammation/subgingival bacteria to gut measures and the direction of the association cannot be determined.It is also possible that gut microbiota could affect periodontal inflammation and subgingival dysbiosis [75].The number of subjects was limited.All subjects had stage III and IV periodontitis.Therefore, these results may not apply to the general population.It is to be noticed that subjects included and not included in the study did not differ in their demographic characteristics.An additional limitation is lack of information about the dietary factors that can change the gut microbiome.
Stool sampling was used as an index of gut microbiota and there are limitations in using the stool as an index of gut microbiome.Stool represents the colon microbiota rather than the entire section of the gut [76] and studies found a good representation of the luminal gut [77].In addition, stool sampling is the most established method of characterizing the gut microbiome and studies have shown significant relationships between the stool microbiome and gut/systemic conditions [78].
In conclusion, in this study, we found that PISA scores significantly associated with gut dysbiotic index and these associations were independent of age.Our results support our hypothesis that clinical periodontal inflammation can be used as a correlate of the severity of gut bacterial changes.Our results also showed that PISA and subgingival periodontal bacteria associated with gut SCFA bacterial index, suggesting that oral inflammation/subgingival periodontal bacteria may affect the production of SCFA.
Larger longitudinal studies assessing periodontal-disease-associated inflammation/ bacteria as well as SCFA production would be desirable.However, interventional studies using periodontal treatment as the intervention and gut bacterial changes and SCFA production as outcomes would not only validate our results but would also point towards mechanistic pathways.Treatment of periodontal disease is aimed at reducing bacterial dysbiosis and local/systemic inflammation.This treatment would be expected to prevent changes in the gut bacteria and SCFA.There are several periodontal treatment options, including scaling and root planing, local, systemic antibiotics, antiseptics [79,80], and surgical procedures.Antibiotics/antiseptics could have a direct effect on the gut bacteria and, therefore, these treatments are not recommended when investigating the effect of oral inflammation/dysbiosis on the gut bacteria.Scaling and root planing and supportive periodontal disease therapy with rigorous home care are the gold standard for periodontal disease therapy [43].With this treatment, the supra-and subgingival bacterial biofilm enriched in pathogenic bacteria, bacterial products, and calculus deposits are removed, thereby inducing a subgingival environment characterized by health-associated bacteria [81,82] and reduced local and systemic inflammation.This is a relatively inexpensive, noninvasive procedure and would constitute a model for a drug-free treatment.Its effects on the gut would be due to reduced periodontal inflammation and dysbiosis.
Figure 1 .
Figure 1.Differences in gut bacterial composition between pathogenic (high PISA-p) and nonpathogenic (low PISA-n).Using LEfSe, we determined the most abundant gut bacteria in the 12 High PISA (p) and the 24 Low PISA (n) groups at species level.Of importance, gut bacteria enriched in Low PISA are known as gut beneficial bacteria, while the bacteria associated with the High PISA are linked to gut pathology.
Figure 1 .
Figure 1.Differences in gut bacterial composition between pathogenic (high PISA-p) and nonpathogenic (low PISA-n).Using LEfSe, we determined the most abundant gut bacteria in the 12 High PISA (p) and the 24 Low PISA (n) groups at species level.Of importance, gut bacteria enriched in Low PISA are known as gut beneficial bacteria, while the bacteria associated with the High PISA are linked to gut pathology.
Figure 6 .Figure 6 .
Figure 6.Model of hypothetical pathways from periodontal inflammation and dysbiosis to gut bacterial changes.Periodontal bacteria (subgingival pathogenic bacteria) contribute to reductions in the
Table 1 .
Characteristics of the study population by PISA groups. | 6,764.8 | 2024-06-01T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Biology"
] |
Impact of Financial Liberalization on Banking Sectors Performance from Central and Eastern European Countries
In this paper we analyse the impact of financial liberalization and reforms on the banking performance in 17 countries from CEE for the period 2004–2008 using a two-stage empirical model that involves estimating bank performance in the first stage and assessing its determinants in the second one. From our analysis it results that banks from CEE countries with higher level of liberalization and openness are able to increase cost efficiency and eventually to offer cheaper services to clients. Banks from non-member EU countries are less cost efficient but experienced much higher total productivity growth level, and large sized banks are much more cost efficient than medium and small banks, while small sized banks show the highest growth in terms of productivity.
Introduction
The opening to the outside and the internal structural reforms of the financial sector are two interdependent processes, both having as a purpose the development of a financially competitive and efficient system in order to facilitate economic growth and financial system stability.
In the present days, in the context of recent turmoil on the financial markets, there is a dispute regarding the benefits of financial liberalization. There are opinions that the financial deregulation and the increasing of the process of globalization were the main causes what amplified the recent financial crisis. Many studies evaluate the direct impact of financial deregulation on banking performance, their empirical results are also rather controversial. Some authors, such as [1], [2], [3], [4], [5], show that financial deregulation has a positive impact on banking efficiency and on the productivity of banks, while other authors consider that deregulation has a negative effect on the performance of banks, determining a decrease of technical efficiency [6] or consider that financial liberalization most often leads to financial crises [7].
Combining insights from the liberalization -efficiency and financial openness -stability literatures, we develop a unified framework to assess how regulation, supervision and other institutional factors may affect the performance of banking systems in 17 countries from Central and Eastern Europe for the period 2004-2008. This study seeks to address two key questions. What variables influence the performance of banks from Central and Eastern European countries? Did the financial liberalization and reforms in the banking system have a notable influence on bank performance? Actually, we analyze the impact of financial liberalization and reforms in the banking system as well as the associated changes in the industry structure on the banking performance, measured in terms of cost efficiency and total productivity growth index. To do this, we develop a two-stage empirical model that involves estimating banks' performance in the first stage and assessing its determinants in the second one.
The importance and originality of this paper consist in assessing the CEE banking systems in a period when there were two waves of EU enlargements and the first influences of the recent international financial crises had appeared. Our sample of countries could be split into three categories: EU members, EU candidates and other potential EU candidates. The results of our papers are important in the context of the present financial turmoil; therefore, in the end of the paper, we try to develop some policy recommendations for both policy makers from CEE countries and EU ones. The evidence of our research could also be useful for banks' strategies of internationalization.
Cross-country efficiency studies in the banking industry have attracted a lot of attention. For banks, efficiency implies improved profitability, greater amount of funds channeled in, better prices and services quality for consumers and greater safety in terms of improved capital buffer in absorbing risk [8].
Studies of the impact of deregulation upon efficiency have found different results. Evidences from Taiwan [9], Korea [10], Norway [1], Turkey [11] and Thailand [12] proved improvements in efficiency, while in the case of Spain [13] and US [14], [15] found that deregulation have a negative impact upon efficiency.
Studies focused on the case of developing countries from Central and Eastern European countries explore various issues including the impact of ownership and privatization [16], [17], competition [5], [18] and the bank reforms and regulation [19], [20] on the banks' efficiency. Cross-country efficiency studies have also become more common for CEE banking systems as the success of the economic transition in the 1990s, the progress of privatization and similar development paths fostered by the EU accession process have boosted the interest of researchers in the region [21].
The creation of an effective and solid financial system constituted an important objective of the process of reform and transition from a centralized economy to a market economy in CEE countries. The liberalization of prices, the liberalization of the circulation of goods, services and capital, the deregulation of financial systems, globalization and the mutations on the level of the economic, social and political environment had a significant impact on the development of the CEE banking system [22]. The banking systems in the developing countries suffered ample mutations with the purpose of creating some efficient banking institutions, with a high degree of soundness capable of facilitating economic growth.
Most studies focused on the banking system in Central and Eastern Europe (CEE) are only performed at the level of one state and do not offer comparative information regarding the efficiency and productivity growth of banks in these states. However, in recent years, several papers have published comparative analyses highlighting the impact of banking system reform, the evolution of banking structure, competition and privatization on banks' efficiency (see e.g. [5], [16], [17], [18], [23], [24], [25], [26], [27], [28], [29],).
Fang et al. find that the institutional development, proxied by progress in banking regulatory reforms, privatization and enterprise corporate governance restructuring, has a positive impact on bank efficiency [30].
Brissimis et al. examine the relationship between banking system reform and bank performance -measured in terms of efficiency, total factor productivity growth and net interest marginaccounting for the effects through competition and bank risk-taking [5]. The model is applied to bank panel data from ten newly acceded EU countries. The results indicate that both banking system reform and competition exert a positive impact on the bank efficiency, while the effect of reform on total factor productivity growth is significant only by the end of the reform process.
Pasiouras et al. uses stochastic frontier analysis to provide evidence on the impact of regulatory and supervision framework on bank efficiency based on a dataset consisting of 2853 observations from 615 publicly quoted commercial banks operating in 74 countries during the period 2000-2004 [31]. Their results suggest that banking regulations that enhance market discipline and empower the supervisory power of the authorities increase both cost and profit efficiency of banks. In contrast, stricter capital requirements improve cost efficiency but reduce profit efficiency, while restrictions on bank activities have the opposite effect, reducing cost efficiency but improving profit efficiency.
The rest of the paper is organized as follows: in section 2 we explain the methodology used to measure the impact of financial liberalization on the bank efficiency and productivity growth and we discuss the data and the variable selection. Thereafter, the results of the empirical analysis are presented and discussed in section 3. The main conclusions are drawn in section 4.
Methodology and Data
In this section we discuss the empirical model used to investigate the impact of financial liberalization on bank performance. Then we explain our measures of bank performance: cost efficiency and productivity growth. The discussion of data and control variables follows afterwards.
Estimable Model
The purpose of the estimable model outlined in this section is to capture the effects of financial liberalization, reforms in the banking system and the associated changes in the industry on bank performance. We also include a range of bank-specific variables that have been used in previous empirical studies that examine the drivers of bank performance. The model is specified as: where the subscripts i, j, t denote bank i, country j, and year t; P ijt -performance indicators of the banks; BS jt -banking system specific variables; B it -bank-specific variables; M jt -macroeconomic variables; e ijt -error term.
2.1.1. Measures of banks performance. Bank performance is proxied alternatively by cost efficiency (EFF) and total productivity growth index (TFPCH). These indicators have been used widely in previous empirical literature concerned with the measurement and determinants of the bank performance in developing countries [5], [17], [18], [32]. The analysis of the efficiency and productivity of banks can be performed both by means of parametrical methods and of non-parametrical methods. For a comparison of these methods see [2], [33], [34].
In line with [35] we measure cost efficiency as how close a bank's cost is to what best practice banks cost would be for producing the same output bundle under the same conditions. As costs functions are not directly observable, inefficiencies are measured relative to an efficient cost frontier. When assessing the impact of financial liberalization on banking performance we also use the total productivity growth index what measures the modification of total productivity of the factors between the two periods of time, by calculating the ratio between the distances from each point observed in the respective technology.
In the estimation of the cost efficiency level of the banks in CEE countries we used the SFA Method and applied the model developed by [36]. The cost frontier can be expressed thus: where: y it -outputs vector; p i -prices of inputs vector; b and Qindependent variable coefficients; v it -random error N(0,d 2 v ); u ierror variable that follows a normal-truncated distribution; ttime component.
The cost frontier indicates the minimum cost, c i , which a decisional unit can register in order to produce a quantity of outputs, y i , considering the prices of inputs, p i . The cost efficiency level is given by the ratio between the minimum cost and the cost registered by the decisional unit and it is calculated as: The SFA method assumes that the inefficiency component of the error term is positive and thus the high costs are associated with a high level of inefficiency.
In the order to quantify the total productivity growth we estimated the Malmquist index with the help of the DEA-type linear programming method, a method that was introduced by [37] and developed by [38]. The Malmquist index measures the modification of total productivity of the factors between two periods of time, by calculating the ratio between the distances from each point observed in the respective technology.
Fä re et al. proposed in [37] the following form for the Malmquist index (output oriented), between two periods of time t (basic period) and (t+1) (current period): where D t O x tz1 ,y tz1 À Á represents the distance from the point observed in the period t+1 to the frontier of the technology of period t. M O w1 indicate an increase of the total productivity of factors from one period to another, while M O v1 corresponds to a decline of total productivity of factors.
In the empirical analysis of the mutations on the level of the productivity of banks we have to calculate four distance measures that occur in equation (4) for each pair of adjacent periods of time.
Having at disposal the panel sets of data, we can calculate the distance functions with the help of the DEA method. For the bank ''i'', i = 1, 2, …, N the DEA linear programming problems, under the assumption that the technologies have constant returns to scale, can be written: The linear programming problems must be solved N times, once for each company in the ensemble. The introduction of solutions to the problems in relation (4) allows for the estimation of the Malmquist index of productivity.
2.1.2. Banking system characteristics. Because the purpose of this analysis is to analyze the connection between the performance of banks and the degree of financial liberalization of the banking system, the first set of banking system characteristics considered in the model includes the following variables: Banking reform and interest rate liberalization indicator (BREF), Financial Openness Index (KOPEN), Asset share of state-owned banks (ASSB) and Asset share of foreign-owned banks (ASFB).
The Banking reform and interest rate liberalization indicator is compiled by the EBRD with the primary purpose of assessing the progress of the banking systems of formerly communist countries and quantifies and qualifies the degree of liberalization of the banking industry [5]. This indicator provides a ranking of progress in liberalization and institutional reform of the banking system, on a scale of 1 indicating little progress in reform to 4 representing a level that approximates the institutional standards and norms of an industrialized market economy [18]. In order to assess the level of financial openness we use the Chinn-Ito index that measures the country's degree of capital account openness. The index is based on the binary dummy variables that codify the tabulation of restrictions on cross-border financial transactions reported in the IMF's Annual Report on Exchange Arrangements and Exchange Restrictions [39].
Following previous studies that focus on banks' performance [27], [40], [41], we control for cross-country differences in the national structure and competitive conditions of the banking system, using the following measures: i) Asset share of state-owned banks (ASSB) that are quantified as percentage of asset share of state-owned banks in total assets of banking system, the state includes the federal, regional and municipal levels, as well as the state property fund and the state pension fund (state-owned banks are defined as banks with state ownership exceeding 50 per cent, end-of-year); ii) Asset share of foreign-owned banks (ASFB) that show the share of banks with foreign ownership exceeding 50 per cent in total bank system assets. We use these indicators to assess the impact of state and foreign ownership on performance differences in national banking systems; iii) Number of banks (NB); iv) The percentage share of the three largest banks (CR3), ranked according to assets, in the sum of the assets of all the banks in that banking system; v) Herfindahl-Hirschmann index (HHI) that is calculated as the sum of the squares of all the banks' market shares in terms of total assets.
We measure bank stability using Z-score, which is a very popular indicator in recent literature concerned with the measurement and determinants of soundness and safety of banks [42]. The Z-score is calculated as: ROA is the bank's return on assets, E/A represents the equity to total assets ratio and s(ROA) is the standard deviation of return on assets. A higher Z-score implies a lower probability of insolvency, providing a direct measure of soundness that is superior to analyzing leverage.
The data used to quantify these indicators have been taken from EBRD and ECB reports.
2.1.3. Bank-specific variables. The economic literature pays a great deal of attention to the performance of banks, expressed in terms of efficiency, productivity, competition, concentration, soundness and profitability.
The use of risk indicators in the analysis of bank performance has gained in the past decades a special attention because the control on banks' risks is one of the most important factors the profitability of the bank depends on [43].
Following the empirical literature, we use the Return on Assets (ROA) to reflect the bank's management ability to use the resources the bank disposes of for the purpose of optimizing profit. Bank capital adequacy is measured as the equity to assets ratio, quantified as the value of total equity divided by the value of total assets.
To express the risk profile of the banks we use two different types of risk: credit risk measured as ratio of loan-loss provisions to total loans (LLR_GL) and liquidity risk measured as ratio of liquid assets to total deposits and borrowing funds (LA_TD). Another variable used in the analysis is the bank's size measured as logarithm of total assets (TAL).
The data used in the analysis are taken from the annual reports of the banks and from the Fitch IBCA's BankScope database.
2.1.4. Macroeconomic variables. In line with the previous literature [31], [44], [45], [46], we include a variety of macroeconomic variables in our model. The macroeconomic variables used in our analysis are: GDP growth rate -Growth in real GDP in per cent (GDP_G), Inflation rate -change in annual average retail/consumer price level in per cent (IR), Level of financial intermediation -domestic credit provided by banking system percentage of GDP (FIN_INT), and Interest rate spreadlending rate minus deposit rate percentage (IRS).
In order to quantify the effects of structural reforms, we also use two governance indicators developed by Kaufmann et al. to proxy institutional differences: rule of law (ROL) and regulatory quality (RQ) [47]. Rule of law is an indicator of the extent to which agents have confidence in and abide by the rules of society while regulatory quality is an indicator of the ability of the government to formulate and implement sound policies. These indicators are assessed on a scale of about 22.5 to 2.5 with higher values corresponding to a 'better' regulatory environment.
Improvements in the regulatory quality help banks if it is accompanied by more adequate banking supervision. The quality of the rule of law affects cost efficiency through the effectiveness and predictability of the judiciary. There is a growing literature that points to the importance of institutions for an efficient operation of the financial system. This literature argues that better institutions positively affect bank efficiency (see also [48]). The data used to quantify this indicator have been taken from EBRD, World Bank and ECB reports.
Data
This study seeks to undertake this assessment by examining banking efficiency and productivity growth in 17 countries from Central and Eastern Europe (Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Macedonia, Republic of Moldova, Montenegro, Poland, Romania, Serbia, Slovakia and Slovenia). We omit Belarus and Ukraine from our study because we could not obtain sufficient data. All bank-level data used are obtained from the BankScope database and are reported in Euros. To be included in our sample, a bank has to have a minimum of 3 years of continuous data to obtain reliable efficiency estimates [27]. The selection process yields an unbalanced panel with 236 banks (730 observations) for the 2004-2008 period.
In the literature in the field there is no consensus regarding the inputs and outputs that must be used in the analysis of the efficiency and productivity growth of commercial banks [2]. In our paper, bank inputs and outputs are defined according to the valueadded approach, originally proposed by Berger and Humphrey [49], which suggests using deposits as outputs since they imply the creation of value added. Following [44], [45], we used the following set of inputs and outputs in order to quantify the efficiency and mutations on the level of the productivity of banks: Loans, Other earning assets and Demand deposits -as outputs; Personnel expenses, Fixed assets and Financial capital (sum of total deposits, total money market funding, total other funding and equity) -as inputs. Input prices are obtained as Total personnel expenses over Total assets, Other operating expenses over Fixed assets and Interest expenses over Financial capital. Table 1 and 2 present the mean values for the banking system characteristics, bank-specific variables and macroeconomic variables.
When analyzing the means of determinants of efficiency value we can observe that the degree of financial liberalization of the banking system has continuously increased during the assessed period. Thus the level of the banking reform and interest rate liberalization indicator (BREF), Financial Openness Index (KO-PEN) and asset share of foreign-owned banks (ASFB) increased and the level of asset share of state-owned banks (ASSB) due to the privatization process and the increase of foreign capital (the last two determinants are correlated). The number of banks was relatively stable, the concentration ratio of the first 3 banks continuously grew, but the evolutions of HHI denote a moderate competition towards high competition, being relatively stable. The stability of the entire banking systems, from the perspective of insolvency probability, has increased continuously as Z-score relieves. The explanations could be the process of harmonization with the EU acquis, which implies a better banking regulation framework. We consider that the evolutions of these determinants were influenced by the process of European integrations, because some of the countries assessed are EU members, some of them are EU candidates and others potential EU candidates. The bank-specific variables had different evolutions. Thus we can observe a decrease of ROA in the context of an ample growth of total bank assets. The risk profile of the banks evaluated as following: the ratio of loan-loss provisions to total loans (LLR_GL) and liquidity risk measured as ratio of liquid assets to total deposits
Estimation Approach
The empirical models used in the specialty literature use a twostage procedure: in the first stage the level of cost efficiency and total productivity growth is estimated and in the second stage the regression analysis is applied in which the levels of cost efficiency and total productivity index are dependent variables.
The empirical model specified in the equation is estimated using the panel least square fixed effects methodology. We use the fixed effects model, since we focus on a limited number of countries, for which we want to assess country-specific differences with respect to the relationship between financial liberalization and bank performance. For this purpose, performance scores are regressed on a set of common explanatory variables; a positive coefficient implies efficiency increase whereas a negative coefficient means an association with an efficiency decreases. The empirical model is tested for each of the two measures of banking performance, i.e. cost efficiency and total productivity growth.
The research strategy follows the specific-to-general approach. We start by investigating the relationship among cost efficiency and Banking reform and interest rate liberalization indicator (BREF) and Financial Openness Index (KOPEN). Next, we include all other banking system characteristics, bank-specific variables and macroeconomic variables one by one to test the stability of the main independent variables BREF and KOPEN. A second set of models is estimated using total productivity growth index as dependent variable. Table 3 presents the estimates of the cost efficiency level and total productivity growth index, showing the results by country and year.
Efficiency and Productivity Level
From empirical results we see that the average cost efficiency of banks in Central and Eastern European countries grew in the period analyzed, from an average value of 0.8866 in 2004 to 0.9099 in 2008, but there is significant variation across the banking systems of the Central and Eastern European countries in terms of cost efficiency level. Similar to [30], our results show that the highest level of efficiency is recorded in the banking systems from the Czech Republic and the lowest is recorded in Serbia. The higher increase of total productivity growth index during 2004-2008 was recorded in Bosnia and Herzegovina, Montenegro, Serbia and Republic of Moldova. Only Albania recorded a decrease of total productivity growth index during the analyzed period. Our results are in line with previous results obtained by [18] and [20].
Table no. 4 also shows the average cost efficiency and productivity growth results for banks of different size. Following [50] we classified banks into 3 different categories considering the size of banks: small if it has total assets ,1 000 mil EUR; medium if it has total assets .1000 mil EUR and ,10 000 mil EUR; and large if it has total assets .10 000 mil EUR. We also classified the banking systems in two different categories considering the status of the country: member or non-member of the European Union.
Thus the results show that, on average, banks from a nonmember country are less cost efficient but experienced much higher total productivity growth level during 2004-2008 period. In non-member countries, these productivity gains could be due to technological progress, rather than to an improvement in efficiency. Large sized banks are much more cost efficient than medium and small banks, while small sized banks show the highest growth in terms of productivity. This suggests that small sized banks are able to generate strong profits possibly by operating in the high value added segments of the markets while incurring higher costs at the same time. Table 5 and 6 report the key empirical results of the second stage analysis based on the estimation of Panel OLS models, using cost efficiency and total productivity growth index as the dependent variables. As for the effect of banking system characteristics, we found that a higher level of the Banking reform and interest rate liberalization indicator (BREF) and Financial Openness Index (KOPEN) improves cost efficiency, suggesting that banks in countries with higher level of liberalization and openness are able to increase cost efficiency and finally to offer cheaper services to clients. Our results are in line with [5] for new accepted EU countries, with [27] for transition economies and [31] for 74 countries, but contrary like those of [18], [46] for some CEE countries. Like [27], our results show that a higher share of state-owned banks (ASSB) has a negative impact on the level of banks' cost efficiency. The level of Asset share of foreign-owned banks (ASFB) has no statistically significant impact on the level of banks' cost efficiency. This result contradicts those of [27] that demonstrated that privatised banks with majority foreign ownership are the most efficient and those with domestic ownership are the least and [26] that show that banks with higher foreign bank ownership involvement were associeted with lower inefficiency.
Determinants of Efficiency
The results show that the level of Banking reform and interest rate liberalization indicator (BREF) and Financial Openness Index (KOPEN) have a positive impact on the total productivity growth. The Z-score is positively correlated with total productivity, demonstrating that the total productivity depends on the soundness and safety of banks.
With regard to the impact of structure of banking systems, results show that higher concentration quantified by means of the Herfindahl-Hirschmann index (HHI) improves cost efficiency, while the percentage share of the three largest banks (CR3) has a negative impact on the cost efficiency level. The mean value for these two indicators during the period assessed does not prove significant changes in the banking structure and level of competition. This evidence could suggest that the competition was not one of the most important factors of improving cost efficiency, being in contradiction with the traditional view and previous results [51].
As regards the impact of bank-specific variables, the results show that the level of Return on Assets (ROA) has a statistically significant and negative impact on both cost efficiency and total productivity growth. The level of credit risk measured as the ratio of loan-loss provisions to total loans (LLR_GL) negatively influences cost efficiency.
Turning to the effect of macroeconomic variables, we observe that GDP growth rate had a negative impact on cost efficiency, maybe because under expansive demand conditions, managers are less focused on the expenditure control and therefore become less cost efficient. Another explanation could be that the increase in credit markets involves higher capital cost, an increase in operating expenses and cost with fixed assets. This results are in line with [52], [53].
From another point of view, the decrease of GDP growth rate improves the total productivity of banks. This could be a reason for foreign-owned banks to maintain their exposure on these markets in case of economic decrease, but with the condition of maintaining the soundness and safety of banks. We also found a negative and significant relationship among Inflation rate (IR), Interest rate spread (IRS) and level of Rule of law (ROL) and bank cost efficiency.
Our results show that the level of Financial intermediation has a positive effect on the bank performance, meaning that a low level of financial intermediation hampers banking performance.
Conclusions
From our analysis it results that the Financial liberalization improves cost efficiency of banks from Central and Eastern European countries with higher level of liberalization and openness are able to increase cost efficiency and finally to offer cheaper services to clients. These facts are in compliance with the Single European Market principles and demonstrate that EU new member states, candidate states and potential candidate states banking market mechanisms could achieve their objective of lowering and harmonization of banking services prices. In this case, from a banking policy perspective, we consider that the EU enlargement could continue in Central and Eastern European countries and could add benefits for the EU banking market.
In exchange, the level of Asset share of foreign-owned banks has no statistically significant impact on the level of bank cost efficiency. This could mean that the dominance of foreign banks on the market does not increase cost efficiency, but the best practices that they brought in the banking systems. From the policy perspectives, these results suggest that, in the case of new member countries, foreign-owned banks have no influence on increasing cost efficiency by means of their own activity and dominance on the market, but perhaps by means of their best practices that domestic banks must adopt for competing them.
In what concerns the effect of financial reform on the total productivity growth of banks from CEE countries, the results show that the level of Banking reform and interest rate liberalization indicator has a positive impact on the total productivity growth. Also, the results suggest that the important factors shaping the total productivity are merely the banking system characteristics and bank-specific variables, and the only macroeconomic variable with impact is the GDP growth rate.
Overall, in order to promote efficiency and productivity, monetary authorities from CEE countries should enhance their efforts to continue the reform of the financial services regulatory and supervisory framework. At the same time, banking markets should remain open, encouraging the entry of foreign banks for improving best practices and for increasing the benefit from technological spillovers brought by them. For a sustainable improvement of cost efficiency and total productivity of banks, the focus should be on the improvements of managerial practices, especially in domestic small and medium banks. Policy makers should also be concerned about improving the liquidity level.
Furthermore, our results indicate that policy makers in EU could take into account the follow-up of the process of enlargement in some countries from CEE, because their banking markets have a good potential in adapting the Single European Markets principles. Foreign banks could maintain their exposures or enter the CEE markets because there is a good perspective for total productivity growth and the stability of the banking systems has increased. Table 6. Determinants of total productivity growth. | 7,121.8 | 2013-03-21T00:00:00.000 | [
"Business",
"Economics"
] |
Leading Edges of Economy-Building Science Education
The objective of this economy policy article was to describe innovative edges of science education for quality economy and life in the new postmodern rising era. Science and technology education in the postmodern time will not be counted merely on the basis of practical or purely hypothetical realizations and achievements. The capability to preserve embryonic tendencies in science and technology education will rely on generating the type of scientists and researchers who can capacitate education and creation of more and not less qualified than own.
Innovative Edges of Science-Founded Economy Policies
The objective of this economy policy article was to describe innovative edges of science education for quality economy and life in the new postmodern rising era. Science and technology education in the postmodern time will not be counted merely on the basis of practical or purely hypothetical realizations and achievements. The capability to preserve embryonic tendencies in science and technology education will rely on generating the type of scientists and researchers who can capacitate education and creation of more and not less qualified than own.
Such new generations of science and technology educators and mentors are not simply characterized by teaching and research proficiencies [1,2]. They must be crucial edges whose exclusivities are embraced with merits in growth and education of science-founded economy mentorship concepts. Mentorship is an art whereas schooling is a limited occupation. Schooling is transferring knowledge to learners whereas moral mentorship is constructing, capturing and exchanging insights in science and technology. Schooling teaches learning and education of self, but mentorship creates capacities to train and mentor minds and bodies of else [3,4].
From a global perspective, schooling develops learners that finally graduate whereas mentorship generates pragmatic influencers that move on forever in the learning path until after even they bodily die. Schooling requires giving back the teacher only the materials that were educated whereas mentorship directs minds to create innovative philosophies. Schooling is almost a one-way correspondence, but mentorship is an innovative and creative medium for idea and perspective exchange. Schooling does not tolerate mentees to question teachers and the way they think and teach, whereas mentorship truly welcomes pragmatic learners to challenge mentors' thoughts [5][6][7].
Questions and challenges are the means whereby learners can experience science communication with others and observe critical education of others. Schooling is restricted to habitual times whereas mentorship defines a circadian lifetime commitment [8].
Schoolers are employees whereas mentors serve as employers. Schools employ teachers whereas mentors employ science and technology. Schooling encourages learning whereas mentorship creates mentors capable of building ever evolving education roads. Schoolers tutor science whereas mentors generate innovative science producers. Schooling is an already-known task whereas mentorship is a creative and challenging commitment. The most important results of schooling are science discoveries whereas among the utmost consequences of mentorship are brilliant minds and philosophies that are created within mentors' contemplations towards creating the scientists that fuel ongoing discoveries.
Schooling may expand the existing knowledge somewhat, whereas mentorship does develop scientists who collectively make considerable progress in the innovation of new ground breaking insights. Knowledge is the end but insight is just the inauguration to commence and create novel authorities of contemplation. In a nutshell, schooling is an instant line whereas mentorship is a well-shaped thorough concept of pragmatism that resembles an encircle surrounding a central negligible tip of discoveries. However, the adjacent surrounding area encompasses the morality in creating leading-edge mentors of science education. Certainly, schooling causes knowledge accumulation that conceptually and pragmatically adds nothing to the literature but the complexity, whereas mentorship integrates science into safe and quality economy and life policies.
Conclusions
To sum, schooling complicates science whereas mentorship simplifies understanding of economy and life. Accountable mentorship instead of irresponsible schooling will persist to serve as a crucial cuttingedge science for today's education towards quality economy and life. Such a pragmatic mentorship will immensely help create global moral figures and concepts from scientific discoveries. These perceptions are a crucial beginning to global cooperations in establishing reciprocal understanding and sturdy national-international peace and prosperity. | 945 | 2015-06-30T00:00:00.000 | [
"Education",
"Economics",
"Philosophy"
] |
A Data Processing Middleware Based on SOA for the Internet of Things
. The Internet of Things (IoT) emphasizes on connecting every object around us by leveraging a variety of wireless communication technologies. Heterogeneous data fusion is widely considered to be a promising and urgent challenge in the data processing of the IoT. In this study, we first discuss the development of the concept of the IoT and give a detailed description of the architecture of the IoT. And then we design a middleware platform based on service-oriented architecture (SOA) for integration of multisource heterogeneous information. New research angle regarding flexible heterogeneous information fusion architecture for the IoT is the theme of this paper. Experiments using environmental monitoring sensor data derived from indoor environment are performed for system validation. Through the theoretical analysis and experimental verification, the data processing middleware architecture represents better adaptation to multisensor and multistream application scenarios in the IoT, which improves heterogeneous data utilization value. The data processing middleware based on SOA for the IoT establishes a solid foundation of integration and interaction for diverse networks data among heterogeneous systems in the future, which simplifies the complexity of integration process and improves reusability of components in the system.
Introduction
The concept of the Internet of Things (IoT) was firstly derived by the Automatic Identification (Auto-ID) Labs in the Massachusetts Institute of Technology (MIT) in 1999.The Auto-ID Labs simultaneously propose [1] radio frequency identification (RFID) systems that connect devices and transmit information via radio frequency to the Internet in order to achieve intelligent identification and management.To formalize the concept of the "Internet of Things, " the International Telecommunication Union (ITU) released the report of "ITU Internet reports 2005: the Internet of Things [2]" in the World Summit on Information Society (WSIS) held in Tunis in 2005, in which the IoT characteristics, related technical challenges, and future market opportunities were introduced.
ITU pointed out in the report [2], "We are standing at the edge of new times of communication, information and communication technology (ICT) to achieve the objectives have been developed to meet the communication between people, and things, between things connection.The coming era of ubiquitous Internet of Things make us a new dimension of communication in the world of information and communication technology (shown in Figure 1), any time, any place, connected to anyone, expansion to connect things connected to the Internet of Things." With the rapid development of information and communication technology, just a onefold technology cannot satisfy the complex context-aware application requirements, in which resource information has been subject to outside interference, and people want to be able to obtain realtime and real-world information such as diverse sensory data acquisitions and human-computer interaction data acquisitions and ultimately achieve efficient data acquisitions between people and things, people, and things and things.
The IoT applications integrate with IntelliSense recognition technologies, pervasive computing, and ubiquitous networks, which are called the third wave of the information technology revolution following the development of the information industry in the computer and the Internet.The IoT is an important part of the new generation of information The IoT incorporates RFID, wireless sensor networks, and ubiquitous terminal equipment as the perception foundation, with a variety of wired or wireless communication and integration of the Internet to achieve the perception data transferred and shared.By leveraging cloud computing and highperformance computing technology for real-time information process, management, and organization, we ultimately offer the upper application a variety of feedback decisionmaking processes for closed-loop control of the things.
Consequently, the growing popularity of the IoT will inevitably lead to a new wave of development of various industries, such as smart home, intelligent monitoring, smart grid, and other new concepts of things technology.So far, the IoT has been launched as a variety of demonstration applications in different domains (shown in Figure 1), such as intelligent industry [3], intelligent agriculture [4], intelligent logistics [5], intelligent transportation [6], smart grid [7], environmental protection [8], security protection [9], intelligent medical care [10,11], and smart home [12].
The rest of the paper is organized as follows.In Section 2, the IoT concepts are reviewed.The description of the architecture of the IoT is detailed in Section 3. In Section 4, we propose a middleware framework based on SOA for IoT.We conclude the paper and point out future work in Section 5.
Related Work
The IoT will be a promising facility of future network which has self-configuration ability in global dynamic network based on standard and interoperable communication protocols.In the network, all real and virtual items have specific identification and physical sensory data in order to achieve the goal of information sharing through seamless connection of intelligent interface [13,14].These intelligent interfaces connect and communicate with users, society, and environment context on the basis of the agreed protocols.It is an extension and expansion of the network based on the Internet to achieve intelligent identifying, locating, tracking, monitoring, and managing.
From an alternative perspective beyond the initial concept of the IoT and the definitions of it as abovementioned, the IoT is a network connecting things to things for achieving intelligent identification and management of the items in a broad sense; it can be seen as a fusion of the information space and physical space.Through that way, everything is digitized and networked, which results in realizing an efficient information interactive mode between items, items and people, and people and environment.After that, various diversities of information are merged into social networks and integrated into human society in a higher realm.For realization of information fusion in the IoT, the middleware technology is suitable to be adopted as a concrete solution.
Middleware as computer software provides connection of different software components and applications.It consists of a set of enabling services that allow multiple processes running on one or more machines to interact across a network.Atzori et al. [13] summarized the relationship as three visions of the IoT, that is, things-oriented visions, semantic-oriented visions, and Internet-oriented visions.According to the three characteristics, middleware in the IoT shall be able to address things issues and Internet issues, deal with the semantics gap, such as interoperability across heterogeneous devices, context awareness, and device discovery, manage resources constrained embedded devices and scalability, manage large data volumes and privacy, and cope with semantic data and so forth.
Several studies have been published that have explored ways to design middleware for the IoT.In [15], Römer et al. summarized the functions and the nature of the middleware for wireless sensor network.In [16], Wang et al. have reviewed middleware for WSN and a detailed analysis of the approaches and techniques offered by the middleware to meet the requirements of the WSN has been presented.It also discusses generic components of the middleware and reference model of WSN based middleware.In [17], middleware has been surveyed from adaptability perspective in which Sadjadi and McKinley presents taxonomy for adaptive middleware and their application domains and provides details for one of each middleware category.The context-awareness middleware also has been studied.The survey in [18] is based on the architectural aspects and provides taxonomy of the features of the generic context-aware middleware.A survey reported in [19] evaluates several context-aware architectures based on some relevant criteria from ubiquitous or pervasive computing perspective.In [20], Bandyopadhyay et al. provides a survey of middleware system for the IoT.
The Architecture of the IoT
Heterogeneous information sources are the most important characters of the IoT.In order to achieve interconnection, intercommunication, and interoperability between heterogeneous information, the future architecture of the IoT needs to be open, layered, and scalable [21].The IoT architecture is generally divided into four layers that are perception layer, network layer, middleware layer, and application layer (shown in Figure 2).A major part of perception layer is wireless sensor nodes.A generic sensor node aims to take measurements of physical environment [22].It may be equipped with a variety of devices which can measure various physical attributes such as light, temperature, humidity, barometric pressure, acceleration, acoustics, magnetic field, and carbon dioxide concentration.In addition to the sensors, perception layer also consists of a large amount of information generated equipment, including RFID and positioning systems, and a variety of smart devices, such as smart phones, PDAs, multimedia players, netbooks, and laptops.It can be seen that the diversity of generated information is an emerging and important feature of the IoT.
IPv6 addresses the major defect of the limit on the number of terminal pieces of equipment to access Internet.The main idea of network layer is leveraging the existing Internet as the main dissemination of information by virtue of a variety of wireless accesses.Every wireless access method has its own characteristics and application scenarios.Wi-Fi and other wireless broadband technologies possess broader coverage, faster transmission, reliable high-speed, and lower cost and circumvent the obstacles.The low-speed wireless networks, such as ZigBee, Bluetooth, and infrared low-speed network protocols, are adapted to resource constrained node, which has the characteristics of the low communication radius, low computing power, and low energy consumption.Mobile communication network will become an effective platform for "a comprehensive, anytime, anywhere." Middleware layer tackles the information heterogeneity issues by intelligent interfaces.The functional solutions of middleware layer mainly consist of data storage (database and mass storage technology), heterogeneous data retrieval (search engine), data mining, data security, and privacy protection.
In application layer, traditional Internet has gone through data-centric to people-centric conversion; typical online applications include file transfer, e-mail, the World Wide Web, e-commerce, online gaming, and social networking.In the application of IoT, things or physical world are considered as the center, typical IoT applications covering item tracking, context-aware, intelligent logistics, intelligent transportation, smart grid, and so forth.The IoT application is currently in a period of rapid growth.
The Implementation of Middleware Based on SOA for IoT
The IoT research mainly pays more attention to network layer recently, such as the IoT network coding, identification and anticollision technology.However, data processing infrastructure continues to be overwhelmed by the mass of heterogeneous information from the number of terminals in the IoT.The flexible architecture that is based on SOA for heterogeneous information fusion in the IoT offers the opportunity to employ mitigation measures.It is critical for the ultimate success of the IoT application for better utilization of the integration of a wide range of services from multiple sources and provides more personalized service to businesses or individuals [9].
4.1.Service-Oriented Application Architecture Description.SOA (service-oriented architecture) is a component model and links different functional units (called services) of the application through well-defined interfaces and contracts between these services.Interface is defined by a neutral manner and it should be independent of implementation services, hardware platforms, operating systems, and programming languages.This allows the service to be built in a variety of such systems to interact in a uniform and general way [23].The service is the basis of the SOA; thereby, they can be applied directly and effectively depending on system and interaction of software agents.Typically, business operations running in an SOA comprise a number of different components, which are often in an event-driven or asynchronous fashion that reflects the underlying business process needs [24].In the context of the IoT, original and emerging resources are in the form of services and are opening on Internet.Consequently, the study of SOAbased fusion application technology is of great value [25].
SOA architecture consists of five main parts, depicted as below: (1) Consumer: acquires the information from producers' entities that provide services, such as mobile terminals and web clients.(2) Application: provides application interfaces or different degrees of loosely coupled services, such as mobile applications, web applications, and rich client.(3) Service: the implementation of the entities involved in a specific task, such as data center and enterprise information center.(4) Service Support: SOA specific application background support functions, such as security, management, and semantic analysis.(5) Producer: an entity to provide specific services or functions.
The IoT Middleware Design.
Inspired by the characteristics of data in the IoT, the design of middleware with the service-oriented architecture was employed in this paper, and integration services, compatible with various types of data and the agreement has been divided.Consequently, this paper presents the basic framework of SOA-based IoT applications as shown in Figure 3.In Figure 3, the three-layer structure of the original SOA is broken down into a five-layer system.Service providers (producers) use of various types of environmental sensing technology.Data processing platform is responsible for data processing, data filtering, and data integrity.It provides XML scheme for data unification and metadata consistency and standardization of heterogeneous data processing.Security platform is a security barrier between the service platform and data platform, which is responsible for the safety of the equipment and data.Service layer aims at providing a range of generic interfaces and agency services which are responsible for data parsing in order to coordinate different data formats and are also advantageous for distributed deployment of a variety of databases.The purpose of universal interface is to achieve compatible communication protocols, which are used by different types of users, to perform unified data exchanging with the upper consumers.
The key part of the service layer is to form a bridge between the data processing and the upper application.The service layer also faces different problems that are encountered in the IoT application such as network connection, resource-constrained nodes, and different application platform.Because the underlying device is extremely rich in the IoT, the SOA system to provide network services needs to consider the problem of transmission delay and resource scheduling and network services need to provide a variety of routing or delay tolerant network technology to deal with.SOA systems also need a balanced scheduling algorithm and balanced network resources.Different application platforms require more generic SOA system design patterns; we will first consider the standard between different devices and the upper users between different access platforms.
As it can be seen from Figure 3, the basic framework of SOA application is on the basis of the data stream generated by the perceive network from the physical world, which is with the basic physical properties of the world from the underlying environmental sensing.In SOA architecture, these vast amounts of real context-aware data form the basis of the entire application.
Since heterogeneous data processing is inevitably linked to the IoT middleware architecture based on SOA, a concrete solution is proposed for the metadata integration of heterogeneous data which is as shown in Figure 4.This architecture is divided into three basic processes, which from top to bottom are the client application layer, the data integration layer, and the IoT heterogeneous data sources, as shown in Figure 4.
The client applications include users unified access interface for data manipulation, which can be a specific application or a web browser.
Data integration service layer [26] is the core of the architecture and also the key to heterogeneous data integration.In order to increase the intelligence and scalability of architecture and alleviate the burden on users, we design a structure that contains the upper and lower levels of service.Metadata format vary greatly since which are grounded in heterogeneous sensor sources [27].To circumvent this obstacle, we express various types of data into XML format and set up rules to make operations on the metadata.Consequently, we first converted the heterogeneous data in a unified XML format, on this basis, and then created the underlying data integration services layer.The upper layer services built on top of the lower layer services and the underlying services are developed in an XML document based on the underlying heterogeneous data sources.
Services of lower levels achieve four data access functions: add, modify, query, and delete and the upper service function extracts the same functionality from the underlying service to the same service according to the data then forms the integrated data service function.Application layer call corresponding upper services according to the operational requirements, and then underlying data manipulation is specified by the upper service based on data parameters from the client calls; thus, when the underlying heterogeneous data source changes, we simply update the underlying service and map to the upper applications rather than make any changes to upper layers.Data integration implementation process is completely transparent to the user, which is compatible and interoperable in different systems.
Evaluation.
In order to test the IoT middleware architecture based on SOA, we set up an indoor temperature monitoring system in practical environment.We deploy 30 sensor nodes in three rooms, to monitor the indoor temperatures.After collection of the sensor data, the data is delivered by a wireless multihop network from sensor nodes to a base station which is connected to a cluster.
In experiments, clusters include 4 common PCs that are 2-core 2.8 GHz desktop computer with 2 GB RAM and Ubuntu operating system for the runtime environment.Although they are generic computers, they satisfy the requirements.
In the paper, the sensor nodes are IRIS nodes.IRIS nodes are produced by Crossbow Technology Inc.These sensors are based on the Atmegal1281 microprocessing chip and a RF230 RF chip which are working at 2.4 GHz and supporting the IEEE 8.2.15.4 communication protocol.The nodes have three times radio range and twice the program memory of MICA Motes and outdoor line-of-sight tests have a range as great as 500 meters between nodes without amplification.The IRIS not only has a longer transmission distance but also has ultralow power consumption and a longer battery life.
In the test, equipment used is shown in Figure 5 and the portal of the IoT system for environment monitoring is shown in Figure 6.Some example of the collected sensory data is as shown in Table 1.The monitoring system continues evaluating for three months and collects about five million of the sensor data items.During the test period, we have supplemented and changed several types of the sensors without interrupting the system.When we add the new equipment, some new profiles will be added to the middleware framework while the system is still running.
We can draw some conclusions from the result of the experiment.
First, the SOA-based data processing middleware architecture represents better adaptation to the IoT multisensor and multistream application scenarios, which improve the heterogeneous data reusability and utilization value.
Second, the experiment result demonstrates the decoupling power of middleware which makes it easier to establish a unified heterogeneous information processing platform for a diversity of applications in the IoT.
Third, the distributed deployment of middleware brings better performance optimization and achieves better load balancing in the cluster.
Conclusion
This paper discusses the development of the concept of the IoT and gives a detailed description of the architecture of the IoT.Based on characteristics of the architecture and challenges of information fusion in the IoT, the paper designs a middleware platform based on SOA architecture for the integration of multisource heterogeneous information.After that, we use the SOA data processing middleware to build an environmental monitoring system for validation verifying.
Through theoretical analysis and experimental verification, the SOA pattern-based processing middleware architecture design is better adapted to the IoT multisensor and multistream application scenarios, which improve the sensing data utilization value.The SOA data processing middleware has laid a solid foundation for data integration and interaction between different networking systems, simplifying the complexity of the system integration process and improving the reuse of components in the future.In order to achieve better interaction between the different large-scale IoT applications, the criteria with regard to unified data format are widely considered to be made for coordination of different systems in relevant international organizations, research institutions, and enterprises.
Figure 3 :
Figure 3: The IoT middleware architecture based on SOA.
Figure 4 :
Figure 4: The IoT data integration middleware based on SOA.
Figure 5 :
Figure 5: The equipment used in the demo.
Figure 6 :
Figure 6: The portal of the IoT system for environment monitoring.
Table 1 :
Some examples of the collected sensory data. | 4,572.4 | 2015-08-02T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Constructing Academic Identity Through Critical Argumentation: A Narrative Inquiry of Chinese EFL Doctoral Students’ Experiences
This study aimed to explore the experiences of Chinese EFL doctoral students in constructing their academic identities through critical argumentation in their thesis writing in an English as a Medium of Instruction (EMI) context. Data were collected through semi-structured interviews and document analysis. Narrative analysis was used to examine participants’ thesis writing to gain insights into their experiences of academic identity construction. The results revealed that Chinese EFL doctoral students face challenges in developing a critical voice and authorial position, synthesizing multiple sources, and positioning themselves rhetorically in their writing. Furthermore, the results open possibilities for a broader understanding of academic writing that values international graduate student’s educational background and cultural diversity in target English language discourse communities. While the narrative inquiry study on Chinese EFL doctoral students’ academic identity construction through critical argumentation is insightful, there are several limitations to consider, mainly due to the small sample size of only two female Chinese respondents. Plain Language Summary Constructing Academic Identity through Critical Argumentation This study investigates how Chinese international doctoral students construct their academic identities by using critical argumentation in their thesis writing within an English as a Medium of Instruction (EMI) setting. The researchers collected data through semi-structured interviews and document analysis. In addition, they used narrative analysis to examine the participants’ thesis writing and gain a deeper understanding of their experiences in developing their academic identities. The study’s findings indicated that Chinese EFL doctoral students struggle to establish a critical voice and authorial position, integrate multiple sources, and strategically position themselves in their written work. This means that they struggle to express their opinions and ideas in their writing and use sources effectively to support their arguments. They also find it challenging to position themselves as experts in their field. The results highlight the need for a more comprehensive understanding of academic writing that considers international graduate students’ educational backgrounds and cultural diversity in English language discourse communities. This means that academic writing instruction should be tailored to the needs of students from different cultural and linguistic backgrounds. The study has some limitations, particularly the small sample size, which only includes two female Chinese participants. This means the findings may not be generalizable to Chinese international doctoral students or students from other cultural and linguistic backgrounds.
Introduction
Recently, Malaysia has emerged as an increasingly soughtafter destination for postgraduate students from China, owing to the internationalization of education.As of June 2021, over 20,000 registered Chinese students are studying in Malaysia's public and private higher education institutions (Malaysian Ministry of Foreign Affairs, Malaysia, 2021).Numerous Malaysian universities have implemented EMI policies, given that English is a second language in Malaysia.However, previous research (Singh, 2015(Singh, , 2016) ) has indicated that the crux of the international doctoral scholars' experience lies in dealing with various linguistic, cultural, and institutional challenges, such as the need to cultivate their academic identities through critical argumentation in their academic discourse practices.
Critical argumentation and academic identity are indispensable for intellectual growth and producing original scholarship in a given discipline within higher education and beyond (McKinley, 2015;Xu & Grant, 2020).In higher education institutes where English as a Medium of Instruction (EMI) is employed, such as Malaysia, critical thinking is one of the essential prerequisites and desired competencies.Doctoral students are expected to adopt a well-established English academic discourse to achieve successful academic writing standards and actively participate in the EMI community.This adoption necessitates students to demonstrate critical argumentation in their academic writing and associated skills, such as evaluation and analysis.However, students in EMI programs, particularly those from cultures that practice different critical argumentation values, may require assistance applying the Western critical argumentation style in their English academic writing.Such students often encounter difficulties articulating coherent arguments and organizing their ideas, resulting in misunderstandings and communication failures (Ai, 2017;McKinley, 2017;Wang & Parr, 2021;Wu & Buripakdi, 2021).
In the context of doctoral studies, thesis writing is often perceived as a social learning space for constructing students' academic identity, as the research process encourages them to engage in discussions to resolve differences of opinion beyond the scope of their thesis (Flowerdew & Wang, 2015;French, 2020;Mertkan & Bayrakli, 2018).Scholars develop their social identities textually by presenting themselves in accordance or at odds with societal discourse as they appropriate and portray these identities (Mantai, 2019;Teng, 2019;Pu & Evans, 2018).These scholars suggest that academic identity develops over time and is constructed by the student's educational experiences.Although previous studies (Wolfe, 2011;Zhang, 2017) have established that critical argumentation is essential to higher education, how doctoral students comprehend and experience critical argumentation while writing their doctoral theses and constructing their academic identity still requires further investigation.Only a few studies (Li & Deng, 2019;Pu & Evans, 2018;Teng, 2019) have examined EFL doctoral students' critical argumentation and its impact on their academic identity while writing their theses.Furthermore, while these studies have explored the cultural factors that influence Chinese students' academic writing and critical argumentation skills, few have specifically investigated how their prior educational experiences and cultural values shape their construction of academic identity through critical argumentation.
While there has been some research on academic identity construction among Chinese EFL doctoral students in other contexts, such as their home universities in China or Western universities, there is a significant gap in the literature regarding their academic identity construction in English as a second language setting, such as Malaysia.Additionally, while critical argumentation has been identified as an important aspect of academic writing, there has been limited exploration of its role in shaping Chinese EFL doctoral students' academic identities in this context.Therefore, further investigation is needed to understand how Chinese EFL doctoral students construct their academic identities through critical argumentation in an ESL setting.
This study examined two Chinese EFL doctoral students' experience in employing critical argumentation in writing their thesis and academic identity development.More specifically, the following research question is addressed: What are the experiences of Chinese EFL doctoral students in constructing their academic identities through critical argumentation in thesis writing?
EFL Doctoral Students and Critical Argumentation Skill
According to Walton (2012), critical argumentation is a process of identifying, analyzing, and evaluating arguments with the aim of influencing the thoughts and actions of others.Wingate (2011) posits a three-part process for constructing an argument: analysis and evaluation of content knowledge, development of the writer's position, and a coherent exposition.The first component involves the writer's ability to discern pertinent information from the literature to support their point.The second component requires the writer to articulate a well-considered position, often conveyed through their voice or stance.Finally, the third component necessitates the logical organization of ideas at a structural level, commonly realized through the academic essay or dissertation format.
Critical argumentation in academic writing is closely linked to constructing academic identity in various ways.First, critical argumentation requires the writer to engage with complex ideas and concepts, which can contribute to developing a sense of intellectual ownership.As doctoral students learn to express their own ideas and opinions, they begin to see themselves as knowledgeable and competent individuals in their respective fields.This sense of academic confidence is a crucial aspect of academic identity, as it allows students to establish themselves as active contributors to the academic community.
Scholars (Baptista et al., 2015;Brodin, 2018) argued that critical argumentation and academic identity are highly relevant to doctoral thesis writing in seeking the novelty of disciplinary value.Doctoral thesis writing is viewed as a learning process for research and academic writing that produces a sense of self by experimenting with ideas.In doctoral thesis writing, students' critical argumentation competence is continually shaped and developed by academic interactions (feedback process, peer interaction, and academic networks; Akpur, 2020).As such, doctoral thesis writing is seen as a critical process that constitutes the writers' understanding of who they are, whom they aspire to be, and becoming a community member of the discipline.Therefore, writing a doctoral thesis remains challenging and should be discussed based on critical argumentation and academic identity concepts.
Critical argumentation allows students to establish their position as academic community members by demonstrating their ability to engage with academic discourse.By engaging with the ideas and arguments of other scholars in their field, doctoral students develop a sense of belonging and begin to see themselves as part of a larger academic community.This sense of belonging is also a crucial aspect of academic identity.It helps doctoral students develop a sense of identity as academic scholars connect to others in their field.In addition, critical argumentation allows students to construct their own academic voice and develop a unique perspective on their field of study.Students develop new insights and contribute to new knowledge as they engage with existing knowledge and challenge established ideas.This process of constructing a unique academic voice is essential for developing academic identity.It allows students to establish themselves as experts in their field and set themselves apart from others who may hold similar qualifications.
Factors Affecting Critical Argumentation
Some scholars (Ramanathan & Atkinson, 1999;Vandermensbrugghe, 2004) used cultural stereotypes to explain the inadequacies in international students' critical argumentation.For instance, Chinese EFL students' educational background and cultural factors can influence their lack of critical argumentation in writing their thesis in English.There are several reasons why this may be the case.First, the Chinese education system traditionally strongly emphasizes rote memorization and reproduction of knowledge rather than critical argumentation and independent inquiry (Tian & Low, 2011;Zhang, 2017).As a result, Chinese students may have been exposed to a different level of critical argumentation in their previous academic experiences.Hence, they may struggle to develop these skills when writing their thesis in English.
Moreover, China's cultural values and norms influence how Chinese students approach academic writing.In traditional Chinese culture, there is a strong emphasis on collectivism, harmony, and avoiding conflict (Andrews, 2007;Zhang, 2017).These values may discourage Chinese students from engaging in critical argumentation, as it can be perceived as confrontational and may create tension within the academic community.Additionally, the hierarchical nature of Chinese society may also play a role in inhibiting critical argumentation.In Chinese culture, respect for authority and deference to elders is highly valued.Chinese students' apparent lack of critical argumentation has been attributed to their deference to instructors and scholars, where criticism can be interpreted as disrespectful (Andrews, 2007;Zhang, 2017).Thus, this deference may make Chinese students more hesitant to challenge the ideas of established scholars and academics, an essential aspect of critical argumentation.
Furthermore, the language barrier may also inhibit critical argumentation (Rear, 2017;Manalo & Sheppard, 2016).Writing a thesis in a foreign language can be challenging, and Chinese students may struggle to express their ideas in English as they would in their native language.They might find it challenging to engage in critical argumentation, as doing so might need more advanced linguistic abilities.
In conclusion, Chinese EFL students' educational background and cultural factors may influence their lack of critical argumentation in writing their thesis in English.The traditional Chinese education system, cultural values and norms, hierarchy, and the language barrier can all inhibit the development of critical argumentation skills.Therefore, addressing these factors and supporting Chinese EFL students to develop these skills can be essential to their academic success and professional development.
Theoretical Context
Examining the influence of critical argument on EFL doctoral students' academic identity construction requires a theoretical framework that inspires a relationship between critical argument academic identity.Doctoral students' academic identity development could be understood through Ivanic's (1998) identity-building theory.Ivanic (1998) expanded on the theory to describe how social and cultural elements influence academic writer development, which occurs in written discourse when a writer makes certain linguistic decisions in an effort to influence readers.For EFL students, academic writer identity development coincides with cultural identity development.Clark and Ivanic (1997) reported that academic writers' identities manifested in their writing as various ''selves'' (autobiographical, authorial, or discoursal), which are used according to the writer, task, and sociocultural component (Ivanic, 1998).The autobiographical self is the writer's life story and sense of self, including their background, cultural beliefs, and inclinations.The discoursal self is the image a writer deliberately or unconsciously presents in writing, which is tied to the writer's sense of self, values, beliefs, and power relations in a social environment.The authorial self refers to how writers show themselves as authoritative in writing by presenting their thoughts and beliefs.Possibilities for selfhood refer to the interaction between a writer's social-cultural background, institutional setting, and disciplinary discourse, which can lead to forming writer identities distinct from the autobiographical self.The autobiographical and discursive selves can eventually lead to the creation of an authorial self, and the possibilities for selfhood might impact how we perceive ourselves.These identities are manifested by the writer's attempts to persuade the reader through various forms of argumentation, which occurs in the final step of a writer's identity development.In academic writing, the authorial self is a means of representing ideational and interpersonal meanings.
Critical argumentation influences a writer's academic identity construction that conforms to the academic discourse community value system by defending an authorial position.The concept of critical arguments states that a debate results from an argument in jointly constructing knowledge (Andrews, 2015;Wolfe, 2011).Regarding academic writing, the identity-building theory is based on the notion that writers create an argument with several strategies and formats.Two possible approaches are taking a position based on one's schemata and supporting it with source information.Another approach is to read widely first and establish a position decision based on the evidence.Examples of different forms of critical argumentation are deductive or inductive writing and the use of an autobiographical, authorial, or discursive self (Clark & Ivanic, 1997).Overall, Ivanic's identity-building theory provides a valuable lens to analyze the complex and dynamic process of constructing academic identity among Chinese EFL doctoral students in an EMI context.
Methodology
This study employs a narrative inquiry approach that explores the Chinese EFL doctoral students' engagement with critical argumentation in thesis writing and their academic identity development.According to Connelly and Clandinin (2006, p. 375), narrative inquiry involves examining ''experience as a story'' and is primarily a means of conceptualizing experience.As such, narrative research emphasizes the individual and the idea that a life narrative or biographical account can provide insight into one's life.This method is particularly well-suited to studying academic identity construction because it allows researchers to capture the complex and often personal ways individuals make sense of their academic experiences.
Through narrative inquiry, researchers gather narratives from participants and use them to interpret their experiences of the world, with a focus on the threedimensional space of temporality, sociality, and place.Hence, exploring the participants' academic identity involved examining three dimensions.The first dimension was time, which provided insights into their past experiences and current practices.The participants in this study shared narratives that described their past educational experiences in their home country and their transition to the current academic program in Malaysia.
The second aspect of narrative inquiry focused on personal elements, revealing how the participants' identities as PhD thesis writers evolved and shifted within their experiences and research practice.In this study, interviews focused on participants' personal and social aspects, with questions designed to elicit their feelings and beliefs and their interactions with supervisors' feedback.
Lastly, the institutional and sociocultural environments were considered to investigate the influence of context on the development of their academic writer identity.
Participants
This study involved two female Chinese EFL doctoral students at the Faculty of Education of a public university in Malaysia.The study involved a purposive sample of Chinese EFL doctoral students who have completed at least one semester of coursework and are currently working on their thesis.Purposive sampling helps enroll the participants who may provide credible and rich information on the subjects under consideration (Patton, 2002).
The EFL doctoral students in the education discipline were selected for the study as they had to write their thesis in English.At the university, potential doctoral candidates are required to meet a cut-off International English Language Testing System (IELTS) score of 6.0 and hold a recognized master's degree with a minimum cumulative grade point average (CGPA) of 3.0 to gain entry to the doctoral studies in education.
Volunteers were recruited through targeted invitations sent via WhatsApp and Telegram group chats and online community platforms exclusively for doctoral students at a Malaysian public university's Faculty of Education.Interested individuals were encouraged to contact the researchers to schedule a face-to-face briefing on the informed consent process and interviews.Initially, five doctoral students (two males and three females) agreed to participate, but three withdrew after the first interview.The researchers investigated the reasons for withdrawal and documented them thoroughly.All three (two males and one female) participants cited scheduling constraints as their reason for discontinuing their involvement in the study.They had requested a one-semester postponement of their doctoral studies.In addition, the two male participants had full-time jobs and financial difficulties (self-funded PhD) that made it challenging to continue their studies.
In contrast, the female participant cited her recent marriage commitment.Prospective participants were given a week to review the research information before deciding whether to participate.Volunteers were informed of the study's purpose, data collection processes, time commitment, and their right to privacy and anonymity.The two students included in this study were selected as they met the aforementioned descriptive criteria.Moreover, they volunteered to participate in the study.They were willing to reflect on their experiences writing their doctoral thesis at a Malaysian university and how they progressed through the course 1 year later.The primary premise of sampling is that the informants can provide first-hand accounts of EFL doctoral students' thesis writing.The pseudonyms Ming and Yuwei were used to protect the students' identities.The two students had completed a master's degree by coursework at an Australian and a university in China.Ming has an IELTS score of 7.5, while Yuwei scores band 7.At the time of the study, the students had been in Malaysia for more than 6 months.The two students were enrolled in a qualitative research methodology course that focused heavily on the philosophical underpinnings of this research approach.
The study received ethics approval from the university where it was conducted.In addition, we obtain signed informed consent from both participants and ensure that their identities are protected through the use of pseudonyms.We also ensure that the data collected is kept confidential and used only for this study.
Data Collection
Data were collected through semi-structured interviews and document analysis.The interviews aimed to explore students' experiences using critical argumentation in their academic writing and how it is linked to their academic identities.Each participant took part in two 1hour interviews.The initial interview, which was done at the start of the research, was intended to understand the participants' educational background and academic writing experiences in their native language and EFL.The main topics of the second interview session were how they assumed their academic identity as doctoral students and how they handled critical argumentation in their thesis writing.This second interview also discussed any potential sociocultural and personal factors influencing their academic writing practices.During the interviews, the participants were encouraged to describe and evaluate their own academic writing experiences from their perspectives.
An interview protocol was employed for in-depth interviews based on the existing instruments (Ivanic, 1998;Shang-Butler, 2015).All the interviews were conducted face-to-face and recorded for transcription purposes.The interviews were transcribed immediately, and the researchers reviewed each transcription with written notes from the interview while listening to the recording.
The document analysis was conducted 2 to 4 weeks after the students had completed writing sections of their thesis.Document analysis was used to examine participants' thesis writing in order to gain insights into their critical argumentation and rhetorical positioning.In addition, they discussed their written texts and how they exercised their agency to mediate their academic writing and adapt to disciplinary practices.
Data Analysis
In this study, we adopted a narrative approach as the primary unit of analysis.The data collected from the participants were subjected to this analysis.This involved examining each participant's narrative structure, themes, and discourse to identify patterns and gain insights into their personal experiences of critical argumentation and academic identity formation.An iterative inductive analysis was conducted, whereby each narrative was scrutinized without taking notes or discussing the stories with other researchers.Each researcher independently analyzed each narrative to determine key themes later discussed during a subsequent meeting for a final joint perusal.
The analysis aimed to understand the narratives the Chinese doctoral students shared by focusing on the differences in their experiences rather than simplifying them into one meaning.It is worth noting that narratives represent and interpret personal life experiences, as stated by Clandinin and Connelly (2000).The narrative analysis was particularly fitting for this study, enabling the researchers to identify common thematic elements across the participants' accounts and interpret their intended meaning.Overall, the use of narrative analysis in this study provided a powerful lens through which to explore and understand the experiences of Chinese EFL doctoral students concerning their critical argumentation and academic writer identity development in an EMI context.
Being small-scale and exploratory, we do not suggest that the findings of this study will necessarily be representative of Chinese EFL doctoral students' experiences in Malaysia or other countries.However, the interpretations reached in the study may shed some light on other doctoral EFL students' experiences in a similar context.
Findings
This narrative inquiry aimed to explore the experiences of Chinese EFL doctoral students in constructing their academic identities through critical argumentation in an EMI context.The findings suggest that critical argumentation in doctoral thesis writing plays a vital role in shaping Chinese EFL students' academic identity construction.Three key themes emerged through a narrative analysis of the interview data: developing a critical voice, defending authorial positions and constructing English rhetorical positioning.
Using our narrative approach, we start with two short anecdotes to present Ming and Yuwei's encounters with critical argumentation while working on their theses.Next, we will discuss how their thesis-writing process influenced their academic identity development, exploring the transformation and conflict between their previous interpretations of academic writing and disciplinary requirements.
Ming's Story
During the interview period, Ming, a 29-year-old first-year PhD student, had recently commenced the second study semester.In 2018, Ming completed a Master's degree by coursework in Australia and applied for a PhD program in Malaysia.Ming was admitted as a doctoral student in the summer of 2019.Ming originated from China and considered English a foreign language despite having received training in English academic writing and educational experience in Australia for her Master's degree.In the first interview, Ming expressed a few preconceived beliefs about doctoral thesis writing, acknowledging the need to enhance the quality of writing to meet the doctoral-level criteria.Owing to Ming's exposure to English academic writing and the educational system for the Master's degree in an EMI context, Ming was conscious that doctoral thesis writing required extensive critical thinking.
Ming noted that numerous countries and universities employ EMI-educated academic writing at different levels, constituting a culture of analytical writing.''In Australia, it was so common where we will need to critique the work of others, whether it is discussion-based in class or written work,'' Ming remarked.Ming acknowledged that the situation resembled her experience in Malaysia.In class discussions or written work, critiquing the work of others was also the norm in her doctoral program in Malaysia.However, Ming perceived a greater emphasis on critical thinking in the PhD thesis writing than in her Master's degree coursework.Ming viewed her Master's degree as merely a collection of coursework assignments, whereas the PhD thesis required more in-depth analytical and evaluative skills.Ming added, ''Critical discussions in the classroom will suffice for a Master's degree, but now I need to develop critical argumentation in thesis writing, which I was not required to do previously.''Ming also expressed concerns about her thesis writing, specifically the lack of critical reasoning and positioning in her English thesis writing.
Critical Argumentation: Developing Critical Voice and Defending Authorial Position
During the discussion of her thesis, Ming demonstrated comprehension and agreement with the reading materials presented.Furthermore, she seemed to perceive critical argumentation as the ability to acknowledge and endorse diverse perspectives.Ming appeared to understand and agree with everything she had read when discussing her thesis writing.To her, critical argumentation was related to how she could agree with different views.Ming discussed how she attempted to be critical, which is illustrated below: I think all these professors' view is correct, and I agree with them.However, I really want to make a point in the face of my agreeing to all these professors' work.I know I must write that what they said is correct, and I cannot say that it is not correct.
Ming was unaware of what critical argumentation entails in her writing and felt that she needed to obscure her emerging voice and endeavors to position herself in the text.Moreover, she believed critical argumentation referred to agreeing with the authors cited in her claims.For this reason, Ming's strategy was to avoid confrontation and preserve her view while simultaneously placing herself in a more secure position.She referred to her critical argumentation with the following terms: ''I agree with.''.''This is in agreement with.'', and ''We should.as noted by.''.Her linguistic choice appeared to indicate the tension between her desire to construct an insider's identity in her discipline and her interpretation of the disciplinary requirements.Thus, how Ming constructed meaning revealed how she thought she could agree and how she wanted to argue regulated her critical argumentation.This insight acknowledged EFL students' unfamiliarity with critical argumentation culture.
In the quote mentioned above, Ming noted that she struggled to comment on or criticize other researchers' work as she was shaped by her admiration for the experts' ideas and fear of expressing herself against book and journal authors.Ming elaborated on how she evaluated the literature in the following excerpt: I don't dare to give my opinion or critique the authors who have written and published so much.These are professors and established authors.I feel not qualified to do so as a doctoral student.
Ming felt compelled to follow established authors who demonstrated what should be done.Therefore, in this specific example of meaning-making, Ming experienced challenges manifesting her presence, which is a critical writing voice.Given Ivanic's (1998) identity-building theory, her avoidance of critiquing could have been due to a lack of confidence in positioning her view in relation to established authors or uncertainty about how she could achieve a balance between others and expressing herself.This finding highlighted how critiquing and presenting a position requires a sense of power and control over the reading text.Ming's case indicated that doctoral students might lack the power to develop a critical voice and defend an authorial position in critical discourse within the disciplinary community.Ming explained why she refrained from critically commenting on others' work in the following excerpt: In my culture, Chinese, we are not encouraged to challenge authority.We show our respect for elders and high-ranking people like professors.Professor is very knowledgeable, and I am just a doctoral student.The professor is always right, you know.
Ming revealed that Chinese academic practices emphasized respect for authority and scholarship.Strikingly, Ming used ''we'' when referring to her community.It appeared that Ming adopted her Chinese discoursal tradition or thoughts (way of thinking) either consciously or unconsciously in her thesis writing.Briefly, cultural elements appeared to influence her critical argumentation.She believed it was irrational to be ''critical'' of an authority, such as a book writer, who was more knowledgeable than her.As she found critiquing challenging, Ming resorted to agreeing to multiple views and tended to hide or neutralize her writer's voice.This ultimately undermined her critical voice and authorial position of having an opinion.Reflecting on her thesis writing challenges, Ming demonstrated more complex perceptions of critical argumentation in the following excerpt: Yes, I took a master's [degree], and the professor values critical argument, but we are not taught how to show it in writing.So, it is very difficult for me to know what is that and to comment or evaluate.
The preceding excerpt indicated that Ming's critical argumentation was hampered not due to cultural barriers but because it was not explicitly taught in higher education.It may have been more challenging for Ming as she had not engaged in critical argumentation when pursuing her Master's degree (coursework).They involve a research project and a series of taught modules delivered through lectures and seminars.Moreover, her doctoral studies involved embarking on a new and unfamiliar research topic of which she needed more knowledge.This challenge highlighted how the value of argumentation is embedded in lecturers' expectations, but it needs to be explicitly taught as a component of doctoral students' experiences (McKinley, 2015).In this case, Ming's critical argumentation was confined to retelling, comparing, or contrasting other researchers' work.Consequently, Ming needed help to meet the demands of self-representation to position herself within the academic community, project her writing voice, and adopt an academic identity.
Ming's account was compelling as her academic writer identity development coincided with cultural identity development and higher education discourse practices.The previous excerpt provided valuable perspectives on critical argumentation nature and practice in higher education.The excerpt highlighted how culture could influence writing styles but is not considered a barrier to critical skills acquisition and critical voice development.Ming's unfamiliarity with critical argumentation in thesis writing was probably due to a lack of exposure to critical argumentation structures or experience with developing a clear and coherent argument.Her transformation regarding critical argumentation was complex and multilayered in the attempt to express her critical voice and defend her authorial position.
Yuwei's Story
Yuwei was 27 years old and completed a Master's degree through coursework in China in 2018.In the same year, she applied for a doctoral program in Malaysia and was admitted as a doctoral student in 2019 despite having limited experience in thesis writing.Yuwei mentioned that she only wrote an academic project paper in her Master's degree program, which put her at a disadvantage in the thesis-writing process.During the first interview, Yuwei discussed the evolution of personal beliefs on doctoral thesis writing before and after commencing the PhD program.Before commencing the PhD program, Yuwei regarded her English proficiency as ''good for students like me from China.''However, after commencing the doctoral program in Malaysia, Yuwei considered her ''understanding of critical thinking and academic writing skills'' an obstacle.Yuwei felt inhibited by her linguistic nuances and expression during thesis writing, despite being comfortable with spoken English.She also noted the disparities in English usage between China and Malaysia, where English was more widely employed outside the classroom in Malaysia.
In reflecting on the first year of doctoral studies in Malaysia, Yuwei acknowledged that the experience was relatively difficult.She struggled to cope with the English as a medium of instruction (EMI) context and constantly expressed that ''PhD takes me into unknown space'' and ''feels so different from what I learned all these years.''She described her doctoral studies in Malaysia as dissimilar from her previous education experience in China, with less focus on rote learning and more problem-based in nature.As a result, Yuwei encountered difficulties in synthesizing ideas and evaluating academic arguments, which delayed her thesis writing progress and influenced her adoption of academic identity.Despite these struggles, Yuwei acknowledged the advantages of writing a dissertation in an EMI context, allowing her to situate herself outside her comfort zone, namely her native country and language, during thesis writing.She also regarded her dissertation writing success in Malaysia as a stepping stone in her personal development as an academic writer while increasing her employability.Nevertheless, Yuwei remained constantly concerned about her thesis writing progress and publication as it was part of the PhD graduation requirement of the public university.
Critical Argumentation: Construction of English Rhetoric Positioning and Synthesizing Multiple Sources
In discussing critical argumentation in thesis writing, Yuwei demonstrated a different approach to implementing a critical stance compared to Ming.Yuwei's critical argumentation was linked to her attempts to reflect Chinese rhetorical norms, which is shown in the following excerpt: I prefer to use Professor's name to support my thesis writing.After I write something, I am expected to present famous professors in my field as evidence to prove my point.This will make my writing more believable since there is another superior to support it.
Yuwei's approach to critical argumentation was dependent on what she thought she was expected to do.She stated that she was exposed to using the words of famous people or books as evidence as she was exposed to Chinese expository essay writing.Her reasoning for her approach, which was not apparent on the surface of her thesis writing, was uncovered by exploring how she thought she was (or was not) expected to demonstrate this critical argumentation and how she struggled to do so.
Elaborating on how she evaluated the literature, Yuwei stated, 'I refer to the professor or authorities to advance my argument.it is sort of using the authorities to promote trustworthiness.'Therefore, in this specific example of meaning-making, Yuwei self-positioned as a writer who attempted to provide the perceived desired response according to the voices she considered representative of doctoral thesis writing in previous essay-writing experiences.Therefore, she demonstrated her Chinese expository strategy by relying on authorities in the field to allow her to promote the trustworthiness of her writing and convey her writer's voice less directly to academia.This over-reliance on referencing famous writers to advance an academic argument simultaneously obscured her stance and undermined her efforts to construct the identity of an authoritative and knowledgeable member of her chosen field of study.Yuwei's stance aligns with Andrews's (2007) argument that Confucius's heritage culture places great importance on respect for authority and scholarship.As a result, novice writers tend to avoid presenting their critical arguments before mastering their field.
Yuwei revealed that this habit was influenced by the Chinese collective way of thinking: ''I should think in this way and be taught to use this kind of strategy in Chinese essay writing for years.''This response indicated that her previous writing experiences and exposure were limited to Chinese composition style and framing.The different writing experiences Yuwei acquired in school and undergraduate studies did not prepare her for the fact that doctoral thesis writing style is different and adheres to different literary conventions.More importantly, Yuwei's reliance on her previous writing style could be specifically challenging for her to adopt the expected writing practices in her current academic environment and restrict her approach to critical argumentation.According to Liu and Huang (2021), rhetorical aspects in EFL academic writing are not considered as important in the Chinese context.This may contribute to Yuwei's challenges in adapting to English academic writing conventions.
Yuwei explained that her approach to critical argumentation was rooted in her prior literacy practices (Chinese rhetorical strategy) and acknowledged that she found it challenging to understand the rhetorical differences between Chinese and English in thesis writing.While finding it challenging to compose the target discourse, Yuwei appeared to face barriers in negotiating identities within two selfhoods.She viewed the Chinese and English writer's selfhoods as a conflict between two positions and preferred identities.This conflict resulted in her struggling to engage in critical argumentation and writing her thesis, necessitating an awareness and understanding of English rhetoric.In the second interview, Yuwei expressed concern over critical argumentation in the following excerpt: I know that for PhD thesis writing, I need to present critical arguments, but I have so many questions in mind: will I be in danger if I sound too outspoken?Or will I sound like a problematic writer that disagrees with the other professors?So I don't think I need to refute it.I want to be safe and stable, you know.
Yuwei appeared to be highly aware of the politics of writing and the unequal power relations between her and other authors.This awareness appeared to influence her writing, prompting her restraint in expressing personal opinions, specifically those contrary to consensus or those in positions of authority.Yuwei's attempt revealed that she believed her writing would be favored and more acceptable if she intentionally incorporated the words of authorities in the field.Saying, ''I do not think I need to refute.I want to be safe and stable'' also reflected Yuwei's conscious decision of not wanting to exercise critical argumentation skills and her Chinese discoursal strategy.The preceding excerpt indicated that Yuwei needed to gain more knowledge about critical argumentation or was unaware of the significance of rebuttals in completing argument structure, integrating argument, and counterargument.The following excerpt is Yuwei's reflection on her experience in doctoral thesis writing: I find it hard to bring together the different ideas and my own.I know I need to synthesise, but I had not practised this in English or Chinese before.So in my thesis, I summarise and paraphrase to use the information from several sources.
Yuwei's account indicated that she had minimal experience and prior knowledge and understanding of the rhetorical nature of synthesis writing.Moreover, she highlighted the need to focus on summarizing and paraphrasing the source text when writing her thesis, which suggested that she needed to fully understand the vital role of synthesis and a clear conceptual understanding of synthesizing from a writing perspective.Briefly, Yuwei demonstrated underdeveloped writer-source integration by combining sources and her own ideas.Her lack of experience with synthesis writing and presenting new ideas based on interpretations of other evidence or arguments reflected one of the challenges she faced in representing herself as intertextually knowledgeable and adopting an appropriate academic identity (Chang, 2016;Liu & Huang, 2021).
Yuwei's case highlighted how EFL writers new to critical argumentation in the target language could find it unconsciously challenging to change cultural elements and rhetorical aspects (thinking pattern, audience consideration, and synthesis writing).Chien (2007) argues that traditional Chinese text structures and rhetorical strategies continue to influence the contemporary English writing of Chinese students.Yuwei's experiences reflected the fact that English rhetoric positioning construction presents specific challenges for Mainland Chinese students.In her bachelor's and master's degree research projects, Yuwei was exposed to, directed, and trained in Chinese rhetoric.This limited her understanding of the target English language discourse practices.
In this case, audience consideration in the rhetorical context required considerable readjustments.Specifically, the readjustments posed challenges to Yuwei as a foreign language writer due to the differences between English and Chinese writing.Consequently, this imposed an extra burden on Yuwei as a writer, influenced her thesis writing progress, and affected her identity construction as an academic writer.
Discussion and Conclusion
This study explored the experiences of Chinese EFL doctoral students in constructing their academic identities through critical argumentation in an English-Medium Instruction (EMI) context.The findings revealed that critical argumentation plays a crucial role in shaping the academic identity construction of Chinese EFL students in their doctoral thesis writing.Using Ivanic's (1998) framework of identity building, the doctoral students' narratives illustrate their struggles in negotiating their autobiographical, authorial, and discursive selves in academic writing.Insights into the thesis-writing experiences of both doctorate students in this study serve to illuminate the various factors underlying their academic identity construction, particularly in the development of critical argumentation.These challenges are often rooted in linguistic, cultural, and educational differences between doctoral students' life histories and EMI institutional requirements.
This study highlights the influence of the EFL doctoral students' educational background and cultural values on how they express themselves in their writings.For instance, Ming acknowledged the importance of critical thinking in doctoral thesis writing but struggled to develop her critical voice and defend her authorial position.Cultural influences, such as respect for authority, shaped Ming's reluctance to critique established authors' work.Ming's case highlighted the need for explicit instruction and support in developing critical argumentation skills for EFL students.
In another narrative, Yuwei's difficulties in synthesizing ideas and rhetorical positioning led to her merely reiterating facts or summarizing the main ideas.She relied heavily on referencing famous authors to support her arguments, reflecting her Chinese expository essay writing background.This over-reliance on authorities hindered her ability to construct an authorial voice and rhetorical positioning in her thesis.Yuwei's case emphasized the challenges of adapting to English academic writing conventions and the need to understand the rhetorical differences between Chinese and English.Studies by Chang (2016;Liu & Huang, 2021) have shown cultural differences to play a role in shaping Chinese EFL students' rhetorical positioning.
Overall, the findings of this study shed light on the complex nature of academic identity construction among the two Chinese EFL doctoral students in an EMI context.The participants' experiences reflected the influence of cultural norms, educational backgrounds, and language proficiency on their engagement with critical argumentation.The study emphasizes the importance of providing explicit instruction and support for developing critical argumentation skills in doctoral programs to facilitate the construction of academic identities among EFL students.By addressing these challenges, universities can better prepare EFL doctoral students for successful academic writing and contribute to their development as scholars.
The findings open up possibilities for a broader understanding of academic writing that values international graduate students' educational background and cultural variety in target English language discourse communities.In addition, the study's conclusions could be a platform for other EMI higher institutions to better prepare for international students' academic experiences.International graduate students, for example, can be offered long-term academic support in coping with their academic studies.
Limitations of the Study
While the narrative inquiry study on Chinese EFL doctoral students' academic identity construction through critical argumentation is insightful, there are several limitations to consider, mainly due to the small sample size of only two female Chinese respondents.First, the study's findings may not represent all Chinese EFL doctoral students as the sample size is small and homogeneous.Therefore, the study's results should be generalized with caution.Second, the study's focus on only two female Chinese respondents limits the scope of the study in terms of gender and cultural diversity.Gender and cultural differences may impact the way individuals construct their academic identities, and by only focusing on two female participants, the study may not account for these differences.Lastly, the study's focus on only academic identity construction through critical argumentation may not be comprehensive enough to capture the full scope of factors that influence academic identity construction.Other factors, such as educational background, cultural values, and institutional context, may also shape academic identities.In conclusion, while the narrative inquiry study on Chinese EFL doctoral students' academic identity construction through critical argumentation is insightful, its limitations should be considered when interpreting the results.Future research should address these limitations by using larger and more diverse samples, incorporating multiple perspectives, and considering a broader range of factors that impact academic identity construction.
Furthermore, it would be valuable to explore the experiences of other international doctoral students in other contexts and disciplines to understand further the challenges they face in developing their critical voice and authorial position in their thesis writing.Finally, longitudinal studies that track EFL doctoral students' development over time would be valuable for understanding the long-term effects of interventions and support strategies on their academic writing skills and identity construction. | 9,868.6 | 2023-10-01T00:00:00.000 | [
"Education",
"Linguistics"
] |
One or two things we know about concept drift—a survey on monitoring in evolving environments. Part A: detecting concept drift
The world surrounding us is subject to constant change. These changes, frequently described as concept drift, influence many industrial and technical processes. As they can lead to malfunctions and other anomalous behavior, which may be safety-critical in many scenarios, detecting and analyzing concept drift is crucial. In this study, we provide a literature review focusing on concept drift in unsupervised data streams. While many surveys focus on supervised data streams, so far, there is no work reviewing the unsupervised setting. However, this setting is of particular relevance for monitoring and anomaly detection which are directly applicable to many tasks and challenges in engineering. This survey provides a taxonomy of existing work on unsupervised drift detection. In addition to providing a comprehensive literature review, it offers precise mathematical definitions of the considered problems and contains standardized experiments on parametric artificial datasets allowing for a direct comparison of different detection strategies. Thus, the suitability of different schemes can be analyzed systematically, and guidelines for their usage in real-world scenarios can be provided.
Introduction
The constantly changing world presents challenges for automated systems, for example, those involved in critical infrastructure, manufacturing, and quality control.Reliable functioning of automated processes and monitoring algorithms requires the ability to detect, respond, and adapt to these changes (Ditzler et al., 2015;Reppa et al., 2016;Chen and Boning, 2017;Vrachimis et al., 2022;Gabbar et al., 2023).
Formally, changes in the data-generating distribution are known as concept drift (Gama et al., 2014).These changes can be caused by modifications in the observed process, environment, or data-collecting sensors.Detecting anomalies in the observed process is essential for identifying faulty productions or other types of unwanted errors.Conversely, detecting changes in sensors and the environment is crucial for automated processes to take appropriate actions, such as replacing a faulty sensor or modifying the system processing the collected data to fit a new scenario (Gama et al., 2004(Gama et al., , 2014;;Gonçalves et al., 2014).
Typically, drift is studied in stream setups, where changes in the underlying data distribution necessitate model adaptation or alerting a human operator for corrective action (Ditzler et al., 2015;Lu et al., 2018;Delange et al., 2021).This is closely linked to the evolution of concepts in continual learning, a widespread subject in deep learning where concepts can arise or vanish.Drift extends beyond data streams and appears in time-series data with interdependent observations.Such drift usually manifests itself as trends, and its absence is known as stationarity (Esling and Agon, 2012; Aminikhanghahi and Cook, 2017).
In settings where data are observed over time, such as manufacturing and quality control, data are frequently gathered across multiple locations and subjected to federated learning techniques (Zhang et al., 2021).Instead of consolidating all data on a global server, local processing is implemented, and outcomes are integrated into an overarching model.Similar to stream learning, it is crucial to address differences or drift in data from various locations to build a strong global model (Liu et al., 2020).Furthermore, drift must be taken into account in transfer learning, a deep learning technique (Pan and Yang, 2010) in which the model is pre-trained on a similar task with a more extensive dataset before being fine-tuned on the target task using a limited dataset.Although the main focus of this study is on data streams, the strategies presented herein apply to other tasks.
Processing drifting data streams involves two major tasks: establishing a robust model for predictive tasks, that is, online or stream learning, and monitoring systems for unexpected behavior.In the former, the focus is on a label and its relation to other features, while the latter is concerned with any change indicating unexpected system behaviors or states.Drift detection, therefore, focuses on different goals, in analogy to general learning termed supervised for the former and unsupervised for the latter.This study omits online learning as it has been extensively explored in previous surveys (Ditzler et al., 2015;Losing et al., 2018;Lu et al., 2018) and toolboxes (Bifet et al., 2010;Montiel et al., 2018Montiel et al., , 2021)).
Instead, this study centers on unsupervised drift detection and monitoring situations where drift is anticipated due to sensor usage or sensitivity to environmental changes.Specifically, the focus is on unsupervised drift detection, which is vital for monitoring and comprehending drift phenomena.Some exemplary applications are the detection of drift for security applications (Yang et al., 2021) and the usage of drift detection for the detection of leakages in water distribution networks (Vaquet et al., 2024a,b).In addition, there are techniques for further analyzing drift (Webb et al., 2017(Webb et al., , 2018;;Hinder et al., 2023a), which we will not cover in detail in this study.For the interested reader, we provide an extended version that covers these topics as well as the content of this study (Hinder et al., 2023b).Note that approaches for unsupervised drift detection discussed here differ from those designed for online learning, as discussed by Gemaque et al. (2020).In Section 2.2, we describe the contrast to supervised drift detection in more detail.
Monitoring entails observing a system and offering necessary information to both human operators and automated tasks to ensure proper system functionality.The required information varies depending on the specific task (Goldenberg and Webb, 2019;Verma, 2021).Generally, there are crucial inquiries to answer regarding drift (Lu et al., 2018): The first one pertains to the whether (and when) of drift occurrence, which is addressed through drift detection (Gama et al., 2014).When detecting drift, a precise assessment of its severity, that is, the how much?, is crucial in determining appropriate measures.Drift quantification, estimating the rates of change that trigger alarms, often precedes detection, and although not the main focus, this aspect will be briefly discussed later.
To take accurate action, it is essential to pinpoint drift more precisely (Lu et al., 2018).While detecting and quantifying drift addresses the when by identifying change points and rate of change, drift localization and segmentation (Lu et al., 2018) focus on the where by assigning drift-related information to the data space.For example, identifying anomalous items, specifically drifting data samples, is crucial in monitoring settings.
Addressing the aforementioned issues may not be sufficient in some cases.Systems can experience drift, a malfunction resulting in changes across multiple data points and features.For example, a deteriorating sensor can produce altered measurements.Reliance solely on drift location provides limited insight into the nature of the event.However, it is crucial to provide detailed information about what happened and how it occurred.In many cases, drift explanations (Hinder et al., 2023a) provide relevant information to human operators concerned with monitoring and manual model adaptation.Finding appropriate explanations is crucial since the complexity of the drift may go beyond the information obtained by answering the previously raised questions.
This study is organized as follows: First, we formalize the concept of drift (Section 2.1) and position our work in the context of related research at the intersection of the stream setup, supervised, and unsupervised approaches (Section 2.2).We then turn our attention to drift detection: We begin by formalizing the task (Section 3) and presenting a general scheme implemented by most approaches (Section 4).We then discuss and categorize several detection methods (Section 5) and perform an analysis based on criteria specific to drift and streaming scenarios (Section 6).In the ArXive version (Hinder et al., 2023b), we also cover topics that are closer related to the analysis of concept drift like drift localization and drift explanation.
Concept drift-defining the setup
In this section, we first formally define drift.Then, we explore various setups for dealing with drift before delving into a detailed examination of the body of work covering drift detection approaches in the later sections.
. A formal description of concept drift
In classical batch machine learning, one assumes that the distribution remains constant during training, testing, and application.We denote this time-invariant data-generating distribution by D and consider a sample of size n is a collection of n i.i.d.random variables X 1 , . . ., X n ∼ D.
However, real-world applications, particularly stream learning, often violate the assumption of time-invariant distributions.To ./frai. .address this formally, we introduce time into our considerations, allowing each data point to follow a potentially distinct distribution X i ∼ D t i linked to the observation time t i .Given the rarity of observing two samples simultaneously, that is, t i = t j for all i = j, it is common to use D i instead of D t i for simplicity (Gama et al., 2014).This setup aligns with the classical scenario if all X i share the same distribution, that is, D i = D j for all i, j.Concept drift takes place when this assumption is violated, that is, D i = D j for some i, j (Gama et al., 2014).
As argued by Hinder et al. (2020), this definition of concept drift depends on the chosen sample and not the underlying process.This makes drift a non-statistical problem, as one sample may have concept drift while another does not, even though they were generated by the same process within the same time period, but with different sampling frequencies.To address this issue, Hinder et al. (2020) suggest incorporating the statistical properties of time.This is done by using a model of time, denoted as T, instead of a simple index set.The framework assumes a distribution P T on T that characterizes the likelihood of observing a data point at time t, together with a collection of distributions D t for all t ∈ T, even though only a finite number of time points are observed in practical terms.The combination of P T and D t forms a distribution process (in the literature, this is also referred to as drift process).
Definition 1.Let T = [0, 1] and X = R d .A (post-hoc) distribution process (D t , P T ) from the time domain T to the data space X is a probability measure P T on T together with a Markov kernel D t from T to X, that is, for all t ∈ T, D t is a probability measure on X and for all measurable A ⊂ X the map t → D t (A) is measurable.We will just write D t instead of (D t , P T ) if this does not lead to confusion.
Distribution processes are formal models for data streams, which consist of independent observations with the only restriction that simultaneous observations follow the same distribution.This differs from a time series or stochastic process which are randomly sampled functions from time to data where observations can depend on each other, but each time point has only one definite value.Although both describe data and time interdependencies, and observed data can usually be modeled in both setups, their interpretation and areas of application differ significantly (Hinder et al., 2024).For instance, measuring the temperature of an object over time is a time series, yielding a single value per time.Conversely, a stream of ballots qualifies as a distribution process because the distribution is more interesting than an individual vote.
Two particularly relevant types of distributions can be derived from a distribution process: First, by appending a time-stamp to each sample from its arrival, the data follow what we call the holistic distribution D. Second, by aggregating all samples observed within a specific time window W ⊂ T, the data conform to the mean distribution D W during W. Formally, these distributions are defined as follows: A distribution process provides the benefit of data sampling.In contrast, a sample-based arrangement does not allow the creation of a new sample from old ones.Two techniques exist for generating new data from a distribution process.One method involves obtaining i.i.d.samples from the holistic distribution D. These time-stamped data points (X, T) are commonly obtained by first randomly selecting an observation time (T ∼ P T ) and then drawing X from the distribution D t with the assumption that Building on the aforementioned definition, we define drift as a property of a data-generating process, not just a sample drawn from it.To account for the statistical nature, a slight adaptation is necessary.We assert that D t exhibits drift if there is a nonzero probability of obtaining a sample with drift.In other words, a sample X 1 , X 2 , . . .will have indices i and j where with a probability that is greater than zero.The number of samples does not impact this, due to measure-theoretical considerations, enabling the examination of only two samples for this definition.Definition 3. Let (D t , P T ) be a distribution process.We say that D t has drift iff Here, P 2 T denotes the product measure of P T with itself, that is, the measure on T 2 = T × T that is uniquely determined by It may be questioned how far this is distinct from having s and t in T, where D t = D s .This formally is due to P T null sets, that is, it is possible that different distribution only occurs at a single point in time, such that we are unable to observe any samples from the other distribution, making it impossible to detect the drift.Therefore, it is a quirk of the formal model rather than a reflection of the actual process.
As mentioned before, we can also use different choices for T. While T = [0, 1] might be the best model for clock-time, T = {1, . . ., n} can be used to model different computational nodes, etc. (Hinder et al., 2023c).In particular, if T is at most countable, then drift is equal to the existing of s, t ∈ T with different distributions D t = D s (Hinder et al., 2020).
Both existence and uniqueness of D are assured by the Fubini-Tonelli theorem.
Frontiers in Artificial Intelligence
frontiersin.org There are different yet equivalent formalizations of drift (Hinder et al., 2020).These involve situations where there is a non-equality to a standard distribution (P T [D t = P] > 0 for all distributions P on X), non-equality to the mean distribution (P T [D t = D T ] > 0), and distinct distributions for two separate time windows (D W = D W ′ for some W, W ′ ⊂ T).However, a very important way to phrase drift is to express it as the dependence between data X and time T.
Theorem 1.Let (D t , P T ) be a distribution process from T to X and let (X, T) ∼ D be distributed according to the holistic distribution.Then, D t has drift if and only if T ⊥ ⊥ X are not statistically independent, that is, there exist W ⊂ T and A ⊂ X such that This concept was pivotal in shaping the development of new methods, for example, it was used to reduce the problem of drift detection to independence X ⊥ ⊥ T testing without the necessity of using two windows (Hinder et al., 2020); it was used to describe the location of drift through temporal homogeneity using conditional independence X ⊥ ⊥ T | L(X) where L are the homogeneous components (Hinder et al., 2021a(Hinder et al., , 2022a)); explaining drift was reduced to the explanation of models that estimate X → T (Hinder et al., 2023a); the position of anomalies in critical infrastructure was identified as those features X i that have a particularly strong correlation with time T (Vaquet et al., 2024a,b).
. Concept drift in supervised and unsupervised setups In the previous section, we defined drift in the context of data generation.Typically, drift is classified based on its temporal qualities.An abrupt drift refers to a sudden change in distribution at a specific time referred to as change point, while changes gradually occurring over an interval signify gradual drift.During a changing period in incremental drift, samples are drawn from both distributions with varying probabilities.Recurring drift refers to the reappearance of past distributions, usually due to seasonality.Some authors use alternative nomenclatures, for example, abrupt drift is sometimes referred to as "concept shift," and gradual or incremental drift as "concept drift."However, unless specified, we will refer to all those notions simply as "drift." Moreover, drift is further categorized based on the modifications made to data and label space distributions.In a data stream of labeled pairs (X, Y) within X × Y, where Y represents the label, changes in the conditional distribution D t (Y | X) are referred to as real drift, while changes within the marginal D t (X) are known as virtual drift or occasionally data drift.
From a statistical perspective, drift in the marginal distribution of X and time T and the joint distribution of (X, Y) and time T can be modeled within a common framework despite different interpretations.Real drift can equivalently be described as the conditional statistical dependence of Y and T, given X, that is, Y ⊥ ⊥ T | X (Hinder et al., 2023d).
Analogous to general machine learning tasks, drift detection can be considered in the supervised settings, that is, those that model-guided
FIGURE
Display of the drift analysis categorization according to the goal and the applied strategy.
are concerned with conditional distributions usually with respect to a label or target, and unsupervised tasks, that is, those that are concerned with the joint or marginal distributions.While in supervised settings both real and virtual drift might be present, in unsupervised settings only virtual drift has to be considered.
Dealing with drifting data streams involves two key objectives: maintaining an accurate learning model despite drift (model adaption) and accurately detecting and characterizing drift in the data distribution (monitoring).In supervised settings, the emphasis is on analyzing model losses and assessing the model's ability to perform prediction tasks (prediction loss-based).In unsupervised settings, more attention is given to the data distribution or data reconstruction (distribution-based).These goals align with two overarching approaches of model adaption and monitoring, resulting in the categorization illustrated in Figure 1.
In supervised environments, model adaptation is typically attained through loss-based tactics, in which updates are guided by the model's capacity to execute tasks.By considering reconstruction losses, such detection strategies can also be used for the unsupervised setup.Many studies examine this supervised strategy (Ditzler et al., 2015;Losing et al., 2018;Lu et al., 2018).However, the connection between model loss, model adaption, and actual drift is rather vague and heavily reliant on the selected model class, the specific properties of the drift, and the setup (Hinder et al., 2023c,d).Therefore, employing loss-based approaches for drift detection in monitoring setups is typically unsuitable.
Unsupervised distribution-based techniques are available for both model fitting and monitoring.We focus on those unsupervised drift detection methods for monitoring tasks, which we discuss further in the following sections.Notably, there is currently no comprehensive survey of drift analysis specifically tailored to the monitoring task, although surveys such as the one by Gemaque et al. (2020) have covered unsupervised drift detection for model adaptation.In addition, Aminikhanghahi and Cook (2017) explore unsupervised change point detection, which is a related problem within the domain of time-series data but is beyond the scope of this discussion.
Drift detection-setup and challenges
As discussed before, the first important question when monitoring a data stream is whether (and when) a drift occurs.The task of determining whether or not there is drift during a time period is called drift detection.A method designed to perform that task is referred to as drift detector.Surprisingly, most surveys do not provide a formal mathematical definition of drift detectors, so we provide a formalization, first.
One can consider drift detectors as a kind of statistical analysis tool that aims to differentiate between the null hypothesis "for all time points t and s we have D t = D s " and the alternative "we may find time points t and s with D t = D s ."More formally, a drift detector is a map or algorithm that, when provided with a data sample S drawn from the stream, tells us whether or not there is drift.
We can formalize that such a drift detection model is accurate or valid, respectively, in the following way: (a) the algorithm will always make the right decision if we just provide enough data, or (b) we can control the chance of false positives independent of the stream.This leads to the following definitions: Definition 4. A drift detector is a decision algorithm on data-timepairs of any sample size n, that is, a (sequence of) measurable maps A drift detector A is surely drift-detecting if it raises correct alarms in the asymptotic setting, that is, for every distribution process D t and every δ > 0 there exists a number N such that for all n > N we have Notice that the definition is not uniform across multiple streams (or drifts if the method is local in time), that is, for some streams it suffices to have 100 samples to correctly identify drift, for others 10,000 are not enough because the effect is too small.This is not a shortcoming of drift detection but a common scheme for all statistical tests.To cope with that problem we have to take the two kinds of errors into account: A type I error occurs if there is no drift but we detect one (false alarm), and a type II error occurs if there is drift but we do not detect it.As discussed above, avoiding type II errors is not feasible.In addition, as the effect of very mild drifts is usually less severe, missing one might as well be less problematic in practice.Thus, we focus on controlling the type I error.
Controlling the number of false alarms can be stated as follows: Once we provide a certain number of samples, the chance of a false alarm falls below a certain threshold.That number of samples must not depend on the data stream we consider.As this is also fulfilled for the trivial drift detector that never raises any alarms and thus never detects drift, we require that the chance of detecting drift in case there actually is some to be larger than this threshold provided enough data from the stream is available.Here, the amount of required data is stream-specific as discussed above.If a drift detector fulfills these properties at least for some streams, we say that it is valid.If this holds for all streams, then we call the drift detector universally valid.Formally: Definition 5. A drift detector A is valid on a family of distribution processes D, if it correctly identifies drift in the majority of cases: We say that A is universally valid if it is valid for all possible streams, that is, D is the set of all distribution processes.Notice that validity does not imply that A makes the right decision even if we make use of larger and larger sample sizes.For a concrete case, it makes no statement about the correctness of the output except that it is more likely to predict drift if there actually is drift.This probability, however, holds across all streams independent of the severity of the drift.Thus, for monitoring, we need a drift detector that is universally valid and surely driftdetecting.
One is frequently additionally interested in the time point of the drift.This problem is usually addressed indirectly: If drift is observed in a certain time window, the algorithm will raise an alarm which is then considered as the time point of drift.
A general scheme for drift detection
As discussed before, the goal of drift detection is to investigate whether or not the underlying distribution changes.As visualized in Figure 2, drift detection is usually applied in a streaming setting where a stream of data points is arriving over time.At time t, a sample S(t) containing some data points which are observed during W(t) and thus are generated by D W(t) becomes available.On an algorithmic level, existing drift detectors can be described according to the four-staged scheme visualized in Figure 2 following the ideas of Lu et al. (2018).
In this section, we discuss some of the most prominent choices for the stages 1-4 of this drift detection scheme.
. Stage : acquisition of data input: data stream output: window(s) of data samples, for example, one reference window and one containing the most recent samples As a first step, a strategy for selecting which data points are used for further analysis needs to be selected.Depending on the strategy used (we will discuss those in Section 5) either one or two windows of the data are selected.Most approaches rely on sliding windows (Lu et al., 2018).As visualized in Figure 3, there are four main categories which differ in how the reference window is updated, for example, fixed until an event, growing, or sliding along the stream or implicit as a summary statistic using a model.We refer to Lu et al. (2018) for a more detailed description.There also exist approaches using preprocessing such as a deep latent space embedding (Vaquet et al., 2021). .Stage : building a descriptor input: window(s) of data samples output: possibly smoothed descriptor of window(s) The goal of the second stage is to provide a possibly smoothed descriptor of the data distribution in the window obtained in stage 1.
Possible descriptors are grid-or tree-based binnings, neighbor-, model-, and kernel-based approaches: Binnings can be considered as one of the simplest strategies.The input space is split into bins, and the number of samples per bin is counted.The bins can be obtained as a grid or by using a decision tree.Decision trees can be constructed randomly, according to a fixed splitting rule (Dasu et al., 2006), or using a criterion that takes temporal structure into account (Hinder et al., 2022b) which can result in better performance.
One can also use a machine learning model's compression capabilities by training the model.This way, the data are stored implicitly (Dwork, 2006;Shalev-Shwartz and Ben-David, 2014;Haim et al., 2022).A query is then used to access the data.Common strategies are discussed in Section 5.
Other versatile, robust, and non-parametric families of methods are offered by a large variety of neighborhood-or kernelbased approaches (Gretton et al., 2006;Harchaoui and Cappé, 2007;Pérez-Cruz, 2009;Liu et al., 2017).In those cases, the information is encoded via (dis-)similarity matrices like the adjacency matrix or kernel matrix.
. Ensemble and hierarchical approaches
Some authors suggest combining multiple drift detectors (Lu et al., 2018).They are usually arranged in an ensemble, for example, by combining multiple p-values after stage 4 into a single one, or hierarchical, for example, by combining a computationally inexpensive but imprecise detector with a precise but computationally expensive validation.Although those approaches differ on a technical level, they do not from a theoretical perspective, as the suggested framework is sufficiently general.
Categories of drift detectors
So far, we formally defined the properties a drift detection algorithm should fulfill and described on an algorithmic level how different approaches can be implemented.In this section, we focus on concrete approaches.We propose a categorization according to the main strategies of the approaches, relying either on an analysis of two samples, meta-statistics, or a block-based strategy.We present methods organized according to the taxonomy in Figure 4.An overview of the approaches considered in this survey is presented in Table 1.
. Two-sample analysis based
The most common type of drift detector exploits that drift is defined as the difference between two time points which can be tested for by statistical two-sample tests.To perform such a test, we split our sample S(t) into two samples S − (t) and S + (t) and then apply the test to those.The construction of the descriptor, distance measure, and normalization (stages 2-4) are then left to the used testing scheme.In addition to classical statistical tests, there also exist more modern approaches that make use of advanced machine learning techniques.
As stated above, to apply this scheme, we need to split the obtained sample into two sub-samples which are then used for the test.This step is crucial as an unsuited split can have a profound impact on the result.In severe cases, choosing an unsuited split can make the drift vanish and thus undetectable as we consider time averages of the windows.However, there exist theoretical works that suggest that the averaging out does not pose a fundamental problem (Hinder et al., 2021b).
From a more algorithmic perspective, there are essentially three ways the testing procedure is approached.Loss-based and virtual classifier-based approaches rely on machine learning techniques,
FIGURE
Taxonomy of drift detection approaches discussed in this study.Methods marked in gray are rely on model performance.
while statistical test-based approaches rely on statistical tools.We will discuss those in the following.
. . Loss-based approaches
A large family of loss-based approaches uses machine learning models to evaluate the similarity of newly arriving samples to already received ones.Such models are typically unsupervised or applied without relying on external labels, thus differing from the supervised approach discussed in Section 2.2.However, being reliant on model performance, they face similar pitfalls as prediction loss-based methods (Hinder et al., 2023c).Here, we find it necessary to discuss them due to their widespread popularity.In this case, the reference window (stage 1) is implicitly stored in a machine learning model which is also used as a data descriptor (stage 2).The dissimilarity is usually given by the model loss.It is further analyzed using drift detectors which are commonly used in the supervised setup (Basseville and Nikiforov, 1993;Gama et al., 2004;Baena-Garcıa et al., 2006;Bifet and Gavaldà, 2007;Frias-Blanco et al., 2014) and serve as a normalization (stages 3 and 4).
Several candidates are implementing this strategy.One of the most common model choices are auto-encoders which compress and reconstruct the data (Rabanser et al., 2019).Other popular model choices are models like 1-class SVMs or Isolation Forests.Originating from anomaly detection, they provide an anomaly score that estimates how anomalous a data point is.Finally, density estimators, which are designed to estimate the likelihood of observing a sample, can be applied to detect drift.Here, the idea is that a sample from a new concept is assumed to be unlikely to be observed in the old concept, resulting in a low occurrence probability, high reconstruction error, or anomaly score.Thus, a change in the mean score indicates drift (Yamanishi and Takeuchi, 2002;Kawahara and Sugiyama, 2009).
These methods are quite popular as they are closely connected to supervised drift detection, but they also face similar issues.On a theoretical level, Hinder et al. (2023c,d) showed that for many important models, one can construct streams where the drift is not correctly detected because it is irrelevant to the decision boundary learned by the model class.This claim was further substantiated by empirical evaluations (Hinder et al., 2023c,d;Vaquet et al., 2024a).Thus, such approaches are unsuited for discovery tasks or the monitoring setup.We will therefore only focus very shortly on them.
. . Virtual-classifier-based
A different approach using machine learning models is based on the idea of virtual classifiers (Kifer et al., 2004;Hido et al., 2008): If a classifier performs better than random guessing, then the class distributions must be different.
This idea can be employed for drift detection as follows (see Figure 5 for an illustration): Store all samples explicitly in two windows (stage 1).Define labels according to reference or current sample, that is, label x ∈ S − (t) as y = −1, x ∈ S + (t) as y = 1.Use that to train a model (stage 2).The test score then serves as a drift score (stage 3) which is commonly a normalized score (stage 4).
In practice, the usage of k-fold evaluation is advised for optimal data usage (Hido et al., 2008;Gözüaçık et al., 2019).Furthermore, statistical learning theory offers guarantees that can be used to derive p-values (Kifer et al., 2004;Dries and Rückert, 2009) which, however, are usually rather loose.The used model class is crucial in terms of which drift can be detected and how much data are necessary (Hinder et al., 2022b).It was also shown that for valid split points, many learning models yield surely drift-detecting algorithms and suggested that the resulting algorithms are also universally valid.Furthermore, the chance of choosing an invalid split point is essentially zero (Hinder et al., 2021b).As a candidate of this class, we consider D3 (Gözüaçık et al., 2019).
. . Statistical-test-based
So far we considered intuitive ad hoc approaches.More theorydriven approaches can be derived by considering drift detection as a two-sample testing problem for which formal justification usually exists.
Classical statistical tests commonly focus on one-dimensional data.The Kolmogorov-Smirnov (KS) test might be the most prominent classical two-sample test (see Figure 6 for an illustration): The test requires two samples (stage 1).It then computes the empirical cumulative distribution function (CDF) (stage 2): The test statistic is given by the maximal distance of the two CDFs (stage 3).
Under H 0 the distribution of d does not depend on the data distribution (Massey, 1951) and we can compute the p-value analytically, serving as a normalized scale (stage 4).All steps can be computed incrementally (Dos Reis et al., 2016).
Applying the test dimension-wise and then taking the minimum extends the method to multiple dimensions (see Algorithm 1).This does not take drift in the correlation into account.It was suggested to use random projection to cope with this problem (Rabanser et al., 2019;Hinder et al., 2022b) which, however, might not work well in practice (Hinder and Hammer, 2023).
The kernel two-sample test (Gretton et al., 2006;Rabanser et al., 2019) is another important candidate.It is based on the Maximum Mean Discrepancy (MMD) which is similar to virtual classifiers: In contrast to virtual classifiers, the MMD is computed implicitly using kernel methods.For samples X 1 , . . ., X m ∼ P, X m+1 , . . ., X m+n ∼ Q and a kernel k, we have the estimate MMD b = w ⊤ Kw where K ij = k(X i , X j ) is the kernel matrix and w = ( 1 m , . . ., 1 m , − 1 n , . . ., − 1 n ) ⊤ a weight vector.Using the kernel two-sample test for drift detection, we again use raw data (stage 1) coming from an arbitrary space.The descriptor is given by the kernel matrix K (stage 2) and the score by the MMD (stage 3).For normalization, permutation, or bootstrap testing schemes can be used.Another approach is to use a Pearson curve that is fitted using higher moments.Several more approaches follow similar lines or arguments based on various descriptors or metrics (Rosenbaum, 2005;Harchaoui and Cappé, 2007;Harchaoui et al., 2008Harchaoui et al., , 2009;;Chen and Zhang, 2015;Bu et al., 2016Bu et al., , 2017)).
As we make use of statistical tests which are valid the drift detector is valid as well (under the same assumptions; see Section 6).Choosing a valid split point is critical but likely from a theoretical point of view (Hinder et al., 2021b(Hinder et al., , 2022b)).
The aforementioned approaches have two main problems: (1) the split point relative to the change point has a huge influence on performance and (2) we face multi-testing problems, that is, the chance of a false positive increase for more tests.Both problems can be addressed by making use of meta-statistics.
. Meta-statistic based So far we have been dealing with two-sample approaches.In a sense, those are the simplest approaches as they consider every time point in the stream separately.This leads to issues such as the multiple testing problem, sub-optimal sensitivity, and high computational complexity.Meta-statistic approaches try to deal with some of these issues by not considering each estimate separately but rather combining the values of several estimates to get better results.To the best of our knowledge, there are only very few algorithms that fall into this category.We will describe two algorithms in detail.
while Not at end of stream S do 4: x ← GE TNE X TSA M P L E(S) 5: x ← P O P(W 2 ) ⊲ Move sample from current to reference window 8: p ← 1 15: 16: for i ∈ {1, . . ., d} do 17: end while 28: end procedure
. . AdWin
AdWin (Bifet and Gavaldà, 2007) stands for ADaptive WINdowing and is one of the most popular algorithms in supervised drift detection.It takes individual scores like model losses or p-values as input to estimate the actual change point (see Figure 7).The values are stored in a single growing or sliding window S(t) (stage 1).Then for every time point s ∈ W(t), the maximal (variance normalized) difference of means is used as a score (stage 2 and 3): For Bernoulli random variables, corresponding to right and wrongly classified, a p-value for the H 0 hypothesis "classification performance only increases" is computed (stage 4).In case of rejection, the moment of drift is the moment of largest discrepancy.Efficient, incremental implementations of this scheme exist.Yet, the connection between model loss and drift is rather vague (Hinder et al., 2023c,d) so it is questionable whether the method is surely drift-detecting or valid.
. . ShapeDD
The Shape Drift Detector (ShapeDD; Hinder et al., 2021b) is another meta-statistic-based drift detector.In contrast to AdWin, it focuses on the discrepancy of two consecutive time windows, a quantity referred to as drift magnitude (Webb et al., 2017): Several choices of distances d are allowed making the method widely applicable.Here, we will focus on MMD.The core idea is that in the case of drift, σ not only takes on values larger than 0 but it has a characteristic shape that depends on model parameters only and thus can be detected more robustly (see Figure 8).
Algorithmically, the MMD is computed on two consecutive sliding windows (stages 1-3).Then, the shape function is computed by taking the convolution of σ with a weight function w which is given by w(t) = −1/l for −2l ≤ t < −l, w(t) = 1/l for −l ≤ t < 0 and w(t) = 0, otherwise.The points where the shape function changes sign from positive to negative are candidate change points which can then be checked using the usual MMD test (stage 4).All steps can be computed efficiently in an incremental manner.As a consequence of the shape match, most potential split points are not considered in the first place and the candidate points are usually far apart.This reduces the average computational complexity of the method and the chance of encountering false alerts due to multitesting while also preventing finding the same drift event twice.Furthermore, as the candidate points coincide with the change points up to a known shift, ShapeDD also provides the precise change point (Hinder et al., 2021b) which increases the statistical power of the validation step.This is in contrast to most other two-window approaches.Together with the validity of the kernel two-sample test, this shows that the method is valid and surely driftdetecting for all distribution processes with abrupt drifts that are sufficiently far apart.
However, the characteristic shape is, in fact, an artifact that results from the way the sampling procedure interacts with a single drift event.Thus, it is no longer present if we consider a different windowing scheme (stage 1), several drift events in close succession, or gradual drift.One way to solve the latter issue is to make use of even more advanced meta-statistics that analyze the entire data block at once.
. Block-based
In contrast to all other drift detectors considered so far, blockbased methods do not assume a split of the data into two windows at any point.Instead, they take an entire data segment into account and analyze it at once.
. . Independence-test-based
Dynamic Adaptive Window Independence Drift Detection (DAWIDD; Hinder et al., 2020) is derived from the formulation of concept drift as statistical dependence of data X and time T and thus resolves drift detection as a test for statistical independence.Here, we will make use of the HSIC test (Gretton et al., 2007) which is a kernel method similar to MMD.However, instead of searching for a map that discriminates the two datasets, it searches for a pair of maps that align well, that is, sup f : T→R,g : X→R cov(f (T), g(X)) where f and g are found using kernel methods.The test requires a single collection of data points and thus a sliding window (stage 1).If available, the real observation time points can be used; otherwise, it was suggested using the sample id, that is, sample X i was observed at time T i = i.Using HSIC we compute the kernel matrix of data K X and time K T as descriptor (stage 2).The HSIC statistic is then a measure of the dependence of data X and time T and is estimated by trace(K X HK T H), where H = I − n −1 11 ⊤ is the kernel-centering matrix (stage 3).Similar to MMD, the HSIC can be normalized using higher moments which allow fitting a Gamma distribution (Gretton et al., 2007) or a permutation test approach (stage 4).Due to better performance, we make use of the latter.Notice that if the actual observation time is not available, we can use the same time kernel matrix K T and thus precompute HK T H as well as the permutated versions resulting in a drastic reduction in computation time.
DAWIDD makes the fewest assumptions on the data or the drift.This allows for detecting more general drifts but comes at the cost of needing more data-a usual complexity-convergence trade-off.As DAWIDD is again a statistical test, it is also universally valid and surely drift-detecting.
. . Clustering-based
Clustering offers another block-based approach that structurally falls between independence-test-based and twosample-test-based approaches.Such methods cluster time points into intervals such that the corresponding data points also form clusters.For the HSIC test, one considers kernalized correlation which can be thought of as fuzzy cluster assignments.In contrast, in clustering, each data point is assigned to a single cluster, which, however, is not predefined by the windows as in the two-sample case.Using a distributional variance measure V, such algorithms solve the following optimization problem for a predefined number n: where T = (t 0 , t n ] and w is a weighting function. An instantiation of this approach was proposed by Harchaoui and Cappé (2007) using kernel-variance V(P) = sup f H ≤1 var X∼P (f (X)) which can be estimated by n −1 trace(K X H) (Arlot et al., 2019).The clustering problem can then be solved using dynamic programming.The resulting algorithm is commonly called Kernel Change-point Detection (KCpD).Later on, Arlot et al. (2019) introduced a heuristic to estimate the number of change points n using model selection, that is, separating two clusters decreases the objective significantly while splitting one cluster does not.
From a more algorithmic point of view, KCpD searches for blocks along the main diagonal of the kernel matrix so that the mean value of the entries inside the blocks is maximized.The number of blocks is then chosen such that more blocks no longer increase that value significantly.Other algorithms implement similar ideas, for example, Keogh et al. (2001).
Since KCpD is a mainly heuristic method, it is hard to make any statement about its limiting behavior.However, the statistic of the 1-split point case is very similar to the one considered by Hinder et al. (2022b).Furthermore, it is well known that in many cases, kernel estimates have uniform convergence rates.It is thus reasonable that one can derive universally valid surely drift-detecting methods that make use of the same ideas.
FIGURE
Illustration of used datasets (default parameters, original size).Concepts are color-coded (Before drift: blue, after drift: green).
. . Model-based
In addition to the classical kernels which are predefined and not dataset-specific, we can also construct new kernels using machine learning models.In Hinder et al. (2022b), Random Forests with a modified loss function that is designed for conditional density estimation, so-called Moment Trees (Hinder et al., 2021c), are used to construct such kernels.To do so, the model is trained to predict the time of observation T from the observation X.The resulting kernels show drastic improvements in drift detection tasks (Hinder et al., 2022b).We can also apply this procedure directly to obtain model-based block-based approaches that can be thought of as an extension of the virtual-classifier-based twowindow approaches to continuous time by removing the time discretization.The relation between the resulting approaches to DAWIDD is then very similar to the relation of MMD to the model-based two-window approaches like D3.
Analysis of strategies
So far, we categorized different drift detection schemes and described them according to the four stages discussed in Section 4. In this section, we will consider the different strategies on a more practical level and investigate experimentally in which scenarios which drift detection method is most suitable.For this purpose, we identified four main parameters that describe the data stream and the drift we aim to detect: We investigate the role of the drift strength, the influence of drift in correlating features, the data dimensionality, and the number of drift events.To cover the strategies described in Section 5, we select one representative technique per category.As these approaches are structurally similar, from a theoretical viewpoint they carry the same advantages and shortcomings.We will present and discuss our findings in the remainder of this section.
. Experimental setup . . Datasets
For our experiments, we consider three 2-dimensional, synthetic datasets with differently structured abrupt drift (see Figure 9).We use modifications of these datasets to evaluate the properties of the discussed drift detection methods.
1. Uniformly sampled from the unit square, drift is introduced by a shift along the diagonal.Intensity is shift length; noise in additional dimensions is uniform.2. Data sampled from a Gaussian (normal) distribution with correlated features, drift flips the sign of the correlation.Intensity is correlation strength; noise in additional dimensions is Gaussian.3. Data sampled from two overlapping uniform squares, drift rotates by 90 o .Intensity is inverse to the size of overlap; noise in additional dimensions is uniform.
Using these base datasets, we generate data streams consisting of 750 samples with drift times randomly picked between t = 100 and t = 650 by varying the following parameters: • Intensity, default is 0.125.
• Number of drift events, default is 1.
• Number of dimensions by adding non-drifting/noise dimensions, default is 5, that is, 3 noise dimensions.
. . Methods
We make use of D3 (used model: Logistic Regression, Random Forest), KS, MMD, ShapeDD, KCpD, and DAWIDD.For MMD, ShapeDD, KCpD, and DAWIDD, we used the RBF kernel and 2,500 permutations.This way we cover every major type and sub-type (see Section 5).
For KCpD, we use the extension proposed in Arlot et al. ( 2019) and choose the smallest α-value to detect a drift as a score.All other methods provide a native score.
The stream is split into chunks/windows of 150 and 250 samples with 100 samples overlapping.Two-sample (split point is midpoint) and block-based approaches are applied to each chunk.Meta-statistic approaches are applied to the stream; then, the chunk-wise minimum of the score is taken.
We use the implementation provided by Jones and Harchaoui ( ) and
FIGURE
Drift detection performance for various intensities, number of dimensions, and number of drift events.
. . Evaluation
We run each setup 500 times.The performance is evaluated using the ROC-AUC which measures how well the obtained scores separate the drifting and non-drifting setups.The ROC-AUC is 1 if the largest score without drift is smaller than the smallest score with, it is 0.5 if the alignment is random.Thus, the ROC-AUC provides a scale-invariant upper bound on the performance of every concrete threshold.Furthermore, the ROC-AUC is not affected by class imbalance and thus a particularly good choice as the number of chunks with and without drift is not the same for most setups.
. . Drift intensity
We evaluate the detectors' capability to detect very small drifts.From a theoretical perspective, we expect that smaller drifts are harder to detect.However, the notion of small here depends on the used detector, for example, the model for D3 or the kernel for MMD, DAWIDD, and KCpD, as well as potential preprocessing (Rabanser et al., 2019;Vaquet et al., 2021;Hinder et al., 2022b;Hinder and Hammer, 2023).
Our results are visualized in Figure 10.As expected, all methods improve their detection capabilities with increasing drift strengths.ShapeDD performs particularly well.Since it makes use of MMD to test for drift, this implies that the metaheuristic is quite important.Also for D3, we observe the predicted interplay of model and dataset: For simple datasets, Logistic Regression performs better, and for more complex datasets we need a more advanced model.We will discuss both points later on in more detail.DAWIDD and KCpD also perform quite well, but KCpD requires larger intensities.The global variant of KCpD outperforms all online algorithms closely matched by ShapeDD.Thus, we suggest incorporating as much domain knowledge into the choice or construction of the descriptor as possible.Furthermore, we recommend the usage of meta-statistic or blockbased methods.
. . Drift in correlating features
Drift can affect the correlation or dependency of several features only, in which case it cannot be detected in the marginal features.We captured this phenomenon in the Gauss and two-overlap datasets.In these cases, KS shows performance close to random chance and D3 with Random Forests (an axis-aligned model) shows issues that cannot be observed for the kernel-based methods.
We thus advise only using methods that make heavy use of feature-wise analysis if drift in the correlations only is either less relevant or very unlikely.If this is not an option, ensemble-based drift detectors that combine feature-wise and non-feature-wise approaches may provide an appropriate solution.
. . Number of drift events
The number of drift events per time is another relevant aspect in practice.Usually, this number per window is assumed to be comparably small which need not be true in practice.Figure 10 shows the results for different numbers of change points, alternating between two distributions.All drift detectors suffer in this case, which is particularly interesting for global KCpD.
We thus advise making use of block-based drift detectors if several drift events are to be expected.In particular, we suggest not to make use of meta-statistic-based methods unless they can explicitly deal with the setup.
. . High dimensional data streams
In practice, data are frequently high dimensional with drift only affecting a few features which may cause issues.In Figure 10, we present the results for runs on different numbers of dimensions.Observe that all methods suffer heavily from high dimensionality.For the kernel-based methods, this can be explained by the choice of the RBF kernel, also explaining why global KCpD performs quite poorly.In the case of D3 with Random Forest, this result is somewhat surprising due to the inherent feature selection of tree-based methods.Yet, on Gauss where trees have a harder time exploiting the structure, the method suffers the most.
We further analyzed the behavior in the case of the uniform dataset with a single drift in the middle and 250 samples (see Figure 11).As can be seen, for D3 and MMD, the drift becomes harder to detect while KS suffers from the multi-testing problem, that is, drift-like behavior emerging by random chance.
Thus, we advise choosing appropriate preprocessing techniques or descriptors to select or construct suited features.Furthermore, Frontiers in Artificial Intelligence frontiersin.org in case of high dimensionality with high cost in case of false alarms, one should refrain from using drift detectors that operate feature-wise. .
. E ect of split point
Meta-statistics and two-window-based methods differ in that the former optimizes the used split point.We study this effect using the uniform dataset with 250 samples, either with optimal or with random split points (see Figure 11).We observe a significant increase in performance which is also more reliable in case of a correct split point.This fits the considerations in Hinder et al. (2021b).We thus advise the user to investigate options to preselect a good candidate split point, either through prior knowledge or by choosing an appropriate algorithm.
. . D model choices
For D3, the metric is implicitly given by the model making it interesting to study.We consider D3 with different models: Logistic Regression (log.reg.),Random Forests (RF), Extra Tree Forests (ET), and k-Nearest Neighbor classifier (k-NN; see Figure 12).
The performance is impacted by the model and its interplay with the dataset, for example, k-NN is best on Gauss but worst on uniform.Yet, similar models behave alike, for example, ET and RF.Interestingly, feature selection cannot be observed or is ineffective.
Thus, models pose a way to integrate prior knowledge into the detection.This result matches the observations of Hinder et al. (2022b) where the authors argued that the descriptor (stage 2) is more important than the metric (stage 3) derived from it.
. . Loss-based approaches
Finally, we also considered outlier-and density/loss-based approaches (Pedregosa et al., 2011): one-class-SVM (SVM; RBF kernel), Local Outlier Factor (LOF; k = 10), Isolation Forests (IF), Kernel-Density Estimate (KD; RBF kernel), and Gaussian Mixture Model [GMM; mixture components ≤ 10 cross-validation (CV) or Dirichlet prior (Bayes)].We use either the outlier score or the sample probability as the drift score.We use the same datasets as before.Here, we use the first 100 samples for training, and the remainder is used for evaluation (see Figure 13).Due to poor performance, we increased the default intensity to 0.5.Otherwise, the results are similar to the other drift detectors.
We thus found additional empirical evidence for the results of Hinder et al. (2023d) which challenge the suitability of lossbased approaches for drift detection from a formal mathematical perspective and therefore suggest the reader not to make use of loss-based approaches.
Guidelines and conclusion
In this study, we provided definitions of drift and drift detection and discussed the relevance of unsupervised drift detection in the motoring setting.Furthermore, we categorized state-of-the-art approaches and analyzed them based on a general, four-staged scheme (Lu et al., 2018).Table 1 and Figure 4 provided a condensed summary of the proposed taxonomy and summarize how different methods are implemented according to the common staged scheme as visualized in Figure 2.
In addition, we analyzed the different underlying strategies on simple data sets to showcase the effects of various parameters reducing the effect of other dataset-specific parameters.From these experiments, we can derive the following guidelines for the selection and usage of drift detection schemes in monitoring settings: • A main finding is that as much domain knowledge as possible should be incorporated when designing drift detection schemes.This concerns selecting appropriate preprocessing techniques, constructing and engineering suitable features, and choosing fitting descriptors in stages 1 and 2 for the process.• Over all experiments, we found that it is advisable to use metaor block-based methods.• Choosing good split points is crucial for obtaining good detection capabilities.• A feature-wise analysis should only be performed if it is expected that the drift does not inflict itself in correlations.
Otherwise, relying on multi-variant techniques seems to be the better solution.• When working with high dimensional data, one should avoid using dimension-wise methodologies, especially if false alarms are costly in the considered application.It might be beneficial to consider feature selection approaches.• If multiple drifts are expected, applying block-based detectors is particularly suitable.• Finally, but maybe most importantly loss-based strategies should be avoided when the target of the drift detection is monitoring for anomalous behavior.
Note that our datasets are comparably simple for the sake of controlling a number of parameters.While one might argue that the generality and universality of our findings are of course limited, we think that these controlled experiments provide a first set of guidelines that are valuable as a starting point for developing reliable monitoring pipelines.In particular, we were able to confirm the theoretical considerations of Hinder et al. (2023d) in our experiments.
This study is the first part of a series of studies in which we also cover topics that are closer related to the analysis of concept drift like drift localization and drift explanation.The full series can be found on ArXive (Hinder et al., 2023b).
FIGURE
FIGUREVisualization of drift detection on a data stream: data point x i was observed at time t i .Given a data stream, for each time window W(t), a distribution D W(t) generates a sample S(t).In this case,W(t) = [t − l, t]has a length l and thus S(t) = {x k | t k ∈ W(t)} = {x i , . . ., x i+n }.A drift detection algorithm estimates whether or not S(t) contains drift by performing a four-stage detection scheme.Illustrated drift detector uses two sliding windows (stage ), histogram descriptor (stage ), total variance norm (stage ), and permutation-based normalization (stage ).
FIGURE
FIGUREIllustration of reference window types.Area in brackets refers to reference window W(t), W(s) for time point t < s.Border of W(t) is marked in dark blue, border of W(s) in light green, and overlapping borders in gray.Here, h is a learning model that implicitly stores the data by learning it.
b
Headers stand for Drift Detection (DD), Drift Pinpointing in time (DP), Drift Localization (DL), and Drift Explanation (DE).Stage 1 current window is a sliding window in all cases.Type refers to the type of normalization strategy used: Statistical Test (ST), model Loss-Based (LB), virtual classifier/Model-Based (MB), and CLustering heuristic-based (CL).a Depends on model but low requirements (Hinder et al., 2022b).b Used by Hinder et al. (2023a) as basis.c Depends on model, dataset, and training(Hinder et al., 2023c,d).dFordistant abrupt drifts.
FIGURE
FIGUREect of total number of dimensions or choice of split point for various drift intensities and drift detectors.Graphic shows median (line), % − %-quantile (inner area), min − max-quantile (outer area), and outliers (circles).
FIGURE
FIGUREDrift detection performance for various models used by D .
FIGURE
FIGUREDrift detection performance of various model/loss-based approaches.Experiments use di erent scale for intensity than previous experiments.
Another frequently employed method is generating i.i.d.samples from D W within a specified time window W. Importantly, observations within a time window W based on D perfectly replicate the distribution described by D W .Both methods are formal procedures for obtaining data over time.
TABLE Overview of unsupervised drift analysis methods from the literature.
classifier-based drift detection. .Collect data (moment of arrival is color-coded: dark blue to green), .Mark all samples arrived before a certain time as class -(cross) and after as class + (plus), .Train model to distinguish class -and + , and .Evaluate model, if performance is better than random chance then there is drift.
FIGUREVisualization of virtualFIGUREVisualization of Kolmogorov-Smirnov test for drift detection. .Collect data (two windows; S − (t) blue and S + (t) green), .Feature-wise CDF, .Compute largest di erence (red line) between CDFs ( Ft− of S − (t) and Ft+ of S − (t)) of feature-wise before and after distribution, and .Use analytic H distribution to obtain p-value. | 13,627.6 | 2024-06-19T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Application of Holograms in WDM Components for Optical Fiber Systems
Coarse Wavelength Division Multiplexing (CWDM) technologies are being widely deployed internationally in metropolitan and access networks due to the increased demand for delivering more bandwidth to the subscriber, created by the need of enhanced services, (Koonen, 2006). For metro, and mainly for access networks applications, an increment in capacity may be achieved with a cost-effective multiplexing technology without the need for the high channel counts and closely spaced wavelengths typically used in long haul networks. A channel space of 20 nm, as proposed in the G. 694.2 ITU Rec., can be used relaxing the processing tolerances and potentially lowering the cost of components. CWDM technology reaches those requirements and it has been proposed for these applications. It is in this context that holographic optical devices have a potential use. This chapter describes the theory, design, and experimental results of a generic multipurpose device that can operate as a tunable wavelength filter, wavelength multiplexer and wavelength router. This device could be especially useful in optical network applications based on both Coarse and Dense Wavelength Division Multiplexing technology (CWDM/DWDM). The enabling component is a Ferro-electric Liquid Crystal (FLC) Spatial Light Modulator (SLM) in which dynamic holograms are implemented in real time. As a consequence, the device will be able to carry out different functions according to the hologram recorded on the SLM. The great advantage of this device is polarization insensitivity in the region of operation, allowing low cross-talk and simple handling. As hologram management is the basis for this device, some topics in the Computer Generated Hologram (CGH) design process are commented on and general guidelines are also considered. Laboratory experiments have demonstrated the capability of a phase FLC-SLM, with the great advantage of polarization insensitivity operation, to diffract the incident light according its wavelength and hologram patterns, for the use in the former applications. Two typical applications of this technology are described: the first one is a design of an equalized holographic Reconfigurable Optical Add-Drop Multiplexer ( ROADM), where this device can address several wavelengths at the input to different output fibers, according to the holograms stored in a SLM (Spatial Light Modulator), all the outputs being equalized in power; the second one is dealing with the design of an holographic router with loss compensation and wavelength conversion whose main application is in Metro networks in the interconnection nodes. This device uses a SOA (Semiconductor Optical Amplifier), in the non linear region, to do the wavelength conversion and, in addition, to supply the gain in order to compensate for the intrinsic losses of the holographic device.
Introduction
Coarse Wavelength Division Multiplexing (CWDM) technologies are being widely deployed internationally in metropolitan and access networks due to the increased demand for delivering more bandwidth to the subscriber, created by the need of enhanced services, (Koonen, 2006).For metro, and mainly for access networks applications, an increment in capacity may be achieved with a cost-effective multiplexing technology without the need for the high channel counts and closely spaced wavelengths typically used in long haul networks.A channel space of 20 nm, as proposed in the G. 694.2 ITU Rec., can be used relaxing the processing tolerances and potentially lowering the cost of components.CWDM technology reaches those requirements and it has been proposed for these applications.It is in this context that holographic optical devices have a potential use.This chapter describes the theory, design, and experimental results of a generic multipurpose device that can operate as a tunable wavelength filter, wavelength multiplexer and wavelength router.This device could be especially useful in optical network applications based on both Coarse and Dense Wavelength Division Multiplexing technology (CWDM/DWDM).The enabling component is a Ferro-electric Liquid Crystal (FLC) Spatial Light Modulator (SLM) in which dynamic holograms are implemented in real time.As a consequence, the device will be able to carry out different functions according to the hologram recorded on the SLM.The great advantage of this device is polarization insensitivity in the region of operation, allowing low cross-talk and simple handling.As hologram management is the basis for this device, some topics in the Computer Generated Hologram (CGH) design process are commented on and general guidelines are also considered.Laboratory experiments have demonstrated the capability of a phase FLC-SLM, with the great advantage of polarization insensitivity operation, to diffract the incident light according its wavelength and hologram patterns, for the use in the former applications.Two typical applications of this technology are described: the first one is a design of an equalized holographic Reconfigurable Optical Add-Drop Multiplexer ( ROADM), where this device can address several wavelengths at the input to different output fibers, according to the holograms stored in a SLM (Spatial Light Modulator), all the outputs being equalized in power; the second one is dealing with the design of an holographic router with loss compensation and wavelength conversion whose main application is in Metro networks in the interconnection nodes.This device uses a SOA (Semiconductor Optical Amplifier), in the non linear region, to do the wavelength conversion and, in addition, to supply the gain in order to compensate for the intrinsic losses of the holographic device.
Operating principle
The working principle of a holographic device design is based on the wavelength dispersion produced in a diffraction grating element (Agrawal, 2002).When a polychromatic light reaches a diffraction grating, there is an angular dispersion (diffraction) according to the incident light wavelength.Equation (1) expresses the relationship between the diffraction angle and the wavelength of the incident light : (1) by considering the incident light perpendicular to the grating, Ф is the diffracted light angle, m is the diffraction order and d the grating spatial period.
The light diffracted, in a far field approximation, follows the Fourier transform distribution and the intensity for the different diffraction orders, m, is proportional to sinc 2 (Фd/ ); the separation between diffraction orders is given by R/d, where R is the distance between the binary transmissive diffraction grating and the Fourier plane (Kashnow, 1973).
Most diffraction grating elements are not practically useful for changing the spatial period or the wavelength.A way to allow these variations is, by using a Spatial Light Modulator (SML), to implement on it a Computer Generated Hologram (CGH).The pixelated structure of the SLM produces the effect of a two-dimensional diffraction grating when the device is illuminated with a coherent light.In the SLM every ferro-electric liquid crystal (FLC) pixel can be electro-optically configured to provide a phase modulation to the incident light.Therefore, by managing the hologram on the SLM and its spatial period a programmable diffraction grating is obtained.
In optical fiber communications, wavelengths around 0.8 -1.6 µm are used.Thus, an SLM pixel pitch close to these wavelength values is required.Unfortunately, current commercial SLMs do not have enough resolution.Therefore, to solve this limitation, a fixed diffraction grating with a low spatial period, together with the SLM giving a high resolution filter, is used (Parker et al., 1998
2 and 4-phases holograms
Different types of holograms can be used (Horche & Alarcón, 2004) in the SLM.In order to optimize losses, phase holograms are preferred instead of amplitude holograms due to its intrinsic 3 dB of loss and 4-phase holograms are used instead of 2-phase (binary) holograms because of its greater efficiency (40.5% 81%), which is proportional to sinc 2 (π/M), where M is the number of phases.Table 1 summarizes the relationships between phase and contrast for 2 and 4 phase holograms.
Computer generated hologram design
Taking into account the former considerations and by implementing a hologram on the SLM where its spatial period can be modified in real time, we obtain a programmable diffraction grating.
The relationship between the hologram and its Fourier Transform function are: In order to implement the CGH, holograms are calculated by using a program based on a variation of the widely adopted simulated annealing optimization algorithm (Dames, Dowling et al., 1991), (Broomfield, Neil et al., 1992) whose cost function to minimize the calculation error is: where I i 2 is the calculated spot intensity for the diffraction order i; A i 2 is its defined intensity and A 2 is the average intensity for the diffraction target spots; t is the number of process calculations.
There are three steps in a CGH design process: 1. Target definition: the target is the diffraction pattern that is to be obtained from the SLM.Depending on the use: filter, switch or others, this target is usually an array or a matrix of spots.This is the input for the program.2. Fourier transform calculation: the program calculates the inverse Fourier transform (F.T.) -1 of the target.The optimization algorithm compares the FT of the hologram with the defined target improving the efficiency at each calculation time.Hologram pixels are flipped between the amplitude values 0, (or phase 0, π) to reduce an error function, (2), specifying the difference between the desired target in the Fourier plane and the reconstruction obtained from the current state of the hologram, improving the efficiency at each calculation (Efficiency defined as: η = Σ m orders diffracted light /total incident light).3. Finally, CGH implementation in an optical substrate, using a photographic film or SLM.The CGH designed for this work is a black & white bars pattern implemented onto a Spatial Light Modulator, where there are only two possible states: "1" for white (total transparency or π phase shift) and "0" for black (total darkness or 0 phase shift).Fig. 3 shows the original diffraction target (a), an array of spots with different light intensities (non uniform, as in Fig. 3a), and three consecutive holograms (b, c, d), calculated by the program carrying out the inverse FT according to the algorithm efficiency.A 45% efficiency is an initial calculation value and close to 90% efficiency is practically the best result in the optimization process.During the calculation of the hologram, the program can find out different holograms which match the diffraction target.It is possible to change, dynamically, the initial conditions (original diffraction target and efficiency, optimization process parameters), to change the direction for the optimization process allowing the algorithm to escape from local minima and reach the correct hologram.Computer calculations are very sensitive to the geometrical distribution of the original diffraction target.A very slight misalignment on it (centre: x = 0, y = 0) can produce a hologram completely different from the correct one.This effect is shown in Fig. 4 when the original array of spots (Fig. 4a) is shifted by 30% of spot separation δ (Fig. 4b), along the vertical axis y; the calculated target (Fig. 4c) is an array of spots "duplicated" and "shifted" instead of a singular one.
To avoid small misalignments, along the x axis, of the output fibers array positions , with impact on the efficiency, we can optimize the hologram pattern, introducing an offset in the bar positions to correct them (Crossland et al., 2000) An offset of 5% of the hologram period would impact the efficiency up to a 40%.
For the operation of holographic devices after the generation of holograms, it is necessary to configure with them the SLM.To perform the switching operation a closed control between the holographic component (SLM) and the computer is needed to assign the correspondent hologram from a local database.This procedure is represented in Fig. 5, where a switching control acts over the PC-SLMs interface. H
Dynamic holographic device design
In order to design a holographic optical device a "4f" structure is chosen using a transmissive SLM and fixed grating.Fig. 6 illustrates the device used in the present work.
The previously calculated CGH (black and white bars) is loaded onto the SLM via a PCbased interface.The SLM-FLC and fixed grating are illuminated by light coming from a singlemode optical fiber collimated by means of a lens.A second lens produces the replicated array of spots explained above on the back focal plane of the lens.
In our experiments, we are interested only in the array of spots corresponding to the first order of diffraction.Therefore, the output optical fibers array is placed in the back focal plane of the lens at a certain angle in order to optimize the coupling.Because of the small size of the singlemode fiber radius, it acts as a spatial light filter.Output fibers F 1 ,..,F 10 , must be located at the Fourier lens plane in order to receive the maximum light intensity of the diffracted beams.The relationship between the system diffraction angles (Parker et al., 1998) is in agreement with the expression: where x is the distance of the output optical fiber from the optical axis, f is the focal length of the lens, d is the spatial period of the fixed grating and H is the hologram spatial period, which relationship with D, the size of the pixel, and N, the number of pixels in one dimension of the SLM is given in (4): where n is the integer number of black & white bar pairs and depends on the type of hologram (pattern).For small angles, equation ( 3) can be simplified as follows: 1 1 When the other holographic device parameters are fixed, only depends on n, as can be shown from (5).According to fixed or variable values for n or/and x, different applications for our device can be considered giving an idea of the device's versatility (see Table 2).
In the following sections, we design a generic multipurpose device based on the experimental scheme explained above that can operate as a tunable wavelength filter, wavelength multiplexer and wavelength router, by simply modifying in real time the CGH loaded on the SLM.The PC-based interface used to load the CGH on to the SLM also serves to calculate the different patterns needed.The electronic interface allows an automatic program to be developed for loading different patterns when they are needed.In all cases, the central wavelength channel, 0 , is obtained for n = N/4 in (5), and the operating wavelength range Δ f is given by: The -3dB passband width, BW, for each wavelength channel tuned, is limited by the output fiber characteristics and the wavelengths coupled inside the core diameter φ core .Taking this into account, and from (3), the following expression relates the bandwidth BW for every wavelength channel tuned in the filter and the focal distance f of the lens according the optical power coupled into the output optical fiber (Parker et al., 1998): In order to obtain minimum losses, the collimated light through the SLM has to illuminate the maximum quantity of pixels.As its intensity distribution has a Gaussian profile, it is sufficient that 1/e 2 beam bandwidth illuminates the SLM aperture.According to optical Gaussian laws, the following condition is reached: For commercial FLC-SLMs, available pixel size D is > 5 µm and the number of pixels, N, usually is from 250 to 1000.From expressions ( 5) and ( 7), it is possible to calculate the x value and max and min for the operating range of tuning.
Tunable holographic filter application
In order to design a tunable holographic filter with a -3dB passband width, BW, of 1 nm (125 GHz), for each wavelength channel tuned, we take d = 3.5 µm for the spatial period of the fixed grating.To use the same device for CWDM/DWDM, a SLM with a value of N = 720 and D = 7 µm for the spatial period, is chosen.The output singlemode fibers used in our device have a core diameter, φ core , of 9 µm.Then, from (7), f must be greater than 23.9 mm.As a practical value we assume f = 25 mm.
Wavelength response
For CWDM applications the holographic filter has a tuning range of Δ f =1591 -1311 = 280 nm with a -3dB passband of 1 nm.In Fig. 7 the transmission response is shown, according to (Parker et al., 1998).For wavelengths very close to the centre, the shape is Gaussian ( < 0 +/-1,5 nm); from these wavelengths the shape is like a Bessel function and the zero convergence is slower ( > 20 dB for > 0 +/-1,5 nm ; > 40 dB for > 0 +/-5 nm).
Table 5 shows, in case of CWDM systems, the different values of n and corresponding central wavelengths separated by 40 nm, from 1311 to 1591 nm.In this case, an adjacent channel isolation > 50 dB is achieved and the complete filter tuning range is covered according to the type of hologram.
266
This feature allows the possibility of a multiple pass band filter in the same optical fiber, but with an increment of losses penalty according to the expression 10 log C (dB), where C is the number of simultaneously tuned channels (Parker et al., 1998).In this case, C = 4 and therefore the increment of losses in the device is: Δ losses = 10 log 4 = 6 dB.
Loss estimation
There are three different sources of loss in this holographic device: a. SLM losses An FLC-SLM works as a dynamic full π-binary phase fixed grating hologram with a diffraction efficiency η = 36.5% (4.38 dB) for the first diffractive order (m =1) and a FLC switching angle of 45º.
Another cause of losses is the insertion of the hologram, for a phase SLM, as a result of the light polarization plane and FLC switching angle different to 45º (theoretical optimal angle); at least another 2 dB are lost, assuming a good alignment of the collimated input light and the FLC pixels.b.Fixed grating losses The diffraction efficiency for a fixed grating, binary π-phase, is η = 36.5% for the first diffractive order (m=1).That means a loss of 4.38 dB.c.Fiber/lens coupling efficiency A fiber/lens coupling efficiency of 50% is a good approximation; therefore another 3 dB of losses have to be added (2 dB, with very good alignment).
Losses can be improved using multiple-phase or blazed gratings; in this case the efficiency can reach η ≈ 80 -90% and the losses decrease to 1.5 dB (Ahderom, Raisi et al., 2002).
WDM (wavelength division multiplexing) application
We can use this device as a 1x M demultiplexer, where M is the number of output fibers.For this, a fixed value of n is used and the output fibers are located in certain x positions.Output fibers (9/125 µm) must be placed in agreement with the diffracted angles Ф, according to input wavelengths and they have to be separated at least Δx = 125 µm.From (5), we can calculate the Δx taking the value of center to center wavelength channel separation, Δ , into account: In order to design a compatible device with the frequency grid provided in ITU-T G.694.1/G.694.2Rec. for CWDM/DWDM systems, a 1x4 demultiplexer (M = 4) for DWDM1 with Δx = 161 µm and a 1x8 demultiplexer (M = 8) for CWDM with Δx = 321 µm, can be implemented.
Table 6 summarizes the fiber positions in order to demultiplex the wavelengths used in the CWDM/DWDM systems.A CWDM system uses F 1 , F 2 , F 3 , F 4 , F 5 , F 6 , F 8 and F 10 and a DWDM uses F 7 , F 8 , F 9 , and F 10 output fibers (see Fig. 6).It is necessary to emphasize that a better performance as demultiplexer could be implemented if only this function is required.
For example, we could design a demultiplexer device with channel separation smaller than 50 GHz (Parker, Cohen et al., 1997).However, the novel idea is to design a compatible CWDM/DWDM device able to carry out different functions.
Wavelength routing application
Maintaining output fibers in the same place as shown in Table 6, if n value (type of hologram) is properly varied, a certain wavelength coming from the input fiber can be routed to any one of the output fibers.As an example, Table 7 highlights the n values for routing 0 = 1431nm (CWDM) and 0 = 1551 (DWDM) towards an output fiber; these values have been calculated from (10), considering the variation of n according to Δx: For Δx = 161 µm, Δn was calculated by using (10) resulting in Δn = 21 and for Δx = 321 µm, Δn is 45.Therefore, the device is a 1x8 router in case of CWDM and a 1x4 router for DWDM systems.It is necessary to highlight that the positions of the fibers are compatible with all applications and that the crosstalk resulting from high-order diffraction beams (m = 2, 3,.) are outside of the locations of the output array fibers (ΔФ = 4º), (Horche, 2004).
Basic experimental results
In this section two complementary experiments have been made.The first one is related to diffraction patterns measurements for different bars holograms and the second one to a SLM characterization for holographic filters, demultiplexers and routers use with reference to the devices whose design and characteristics have been described in the previous sections.Due to the unavailability of components in the laboratory with the characteristics previously described, the experimental optical bench is somewhat different from the appropriate one, but, the measurements obtained are in agreement with the calculations.
In order to carry out the measurements, the experimental lab bench showed in Fig. 8 was used; it is in agreement with the structure of Fig. 6, but, in this case, we used a reflective SLM instead of a transmissive one; therefore, it is necessary to include a polarizing beam splitter in order to direct the reflected beam to the lens.Due to the "spatial invariability" of the Fourier transform, it is not necessary to illuminate the entire SLM active surface to reproduce the diffraction pattern; taking this into account we can select, by a diaphragm aperture, the SLM zone where the incident light is focused.The characteristics of a commercial binary phase SLM are shown in Fig. 8(c).
As optical sources, a green He-Ne laser and a tunable Argon laser with g = 528.7 nm (green) and b = 462.6 nm (blue) have been used.These wavelengths have been selected because they belong to the visible spectrum and the correct alignment of the system is easier, a critical factor in the experiment.In this case, a detector-array (6.3 × 4.7 mm) of a CCD camera is placed at the "focal plane", as an image sensor, to analyze the results.A single personal computer, PC, is used to generate the CGHs for the design process described previously, and they are loaded onto the SLM by changing its pixels state; the diffracted patterns were stored in the same PC, where they could be observed and processed.To recalculate the new output fibers position, the distance for the diffraction order (x) is derived from (5) without the fixed grating: where H, defined in (4), is the hologram spatial period, but now, the maximum value for n = N/2 = 128, the size of the pixel, D =15 m and the number of pixels in one dimension of the SLM, N = 256, have been taken into account.
SLM characterization for wavelength routers
When we implement different holograms, according to n values, in the SLM and an incident light wavelength is illuminating the pixels, different diffracted angles are obtained; by placing an array of fibers at the output, in the focal plane, a -router is implemented.
To test the capability of a commercial SLM as a part of a dynamic holographic router, the holographic setup shown in Fig. 8 (a) was implemented; a photo of the experimental optical bench is shown in Fig. 8(b).For this test the optical source was an He-Ne laser at = 528.7 nm (green wavelength) and the lens focal length, 8 cm.In order to route the green wavelength to the F 4 output fiber, according to Fig. 9, it is necessary to load a CGH-(A) with a spatial period H corresponding to n = 64 in the SLM and for routing the same wavelength to the F 8 output fiber, a CGH-(B) with n = 128 was calculated and implemented onto the SLM.
SLM characterization for filters and demultiplexers
Other measurements, to test the capability of a commercial SLM as a part of a dynamic holographic device, have been done with the holographic setup shown in Fig. 8 (Alarcón, 2004).According to Fig. 9, if a CGH black & white bars type hologram with n = 128 is loaded onto the programmable SLM, a blue wavelength channel will reach the F 7 output fiber and a green wavelength channel will reach the F 8 output fiber.
The diffracted light spots distance, calculated from (9) without a fixed grating, is: (12).Therefore, in this way, we can build an optical 1x2 demultiplexer.The central light spot is due to the zero diffraction order m = 0, with the maximum light intensity diffracted (x = 0); it can be reduced with a SLM with better performance by impacting on the total insertion losses reduction.
The temporal response of the system was also measured.The SLM optical switching time was estimated to be roughly 250 s, as the sum of the electric storage and FLC material response times (Alarcón, 2004).We also noticed a damped response when low-frequency switching is carried out; this is probably due to relaxation of the FLC molecules.
Design of equalized holographic ROADMs for application in CWDM metro networks
These type of ROADMs are designed for application in CWDM (Coarse Wavelength Division Multiplexing) networks, where the distance between the different wavelengths allow the use of DML (Direct Modulation Lasers) without cooling, reducing the cost and the tolerances of the network components.Application in METRO networks and its interconnection with some PON (Passive Optical Network), as a part of the access to the subscriber, is reviewed.Different technologies have been proposed for the implementation of ROADMs (Ma & Kuo, 2003), (Homa & Bala, 2008).Each of them has its own advantages and drawbacks.The main characteristic of holographic ROADMs is the easy way of changing the tuning and power level of the signal at the output fibers by the dynamic implementation of different holograms on the SLM according to the requirements of the network management.
Holographic ROADM structure
The working principle of the dynamic holographic device is based on the wavelength dispersion produced in a diffraction component (grating, spatial light modulator) as explained in Section 3. We use for this application a phase reflective spatial light modulator (SLM) and a fixed transmissive diffraction grating to select the corresponding output wavelength from an set of channels in the input, as shown in Fig 11 .The active element of the SLM is a Ferroelectric Liquid Crystal (FLC) with a low switching time (less than 50 s) that allows a real time operation.The role of the fixed diffraction grating is to provide more wavelength tuning range and greater total diffraction angle.
Fig. 11. Reflective holographic router
One of the reasons because we have chosen this type of "2f-folded"implementation, , is the reduced size of the device in comparison with the other possible structure, "linear-4f", where the length of the optical axis is four times the focal distance of the lens used.Its working operation has been described in Section 3.
Holographic ROADM design 8.2.1 Dynamic wavelength tuning
At the input of the router there are different wavelengths 1 , 2 ,….n according to some ITU Rec.For the design of this holographic router, these wavelengths are in agreement with the G.694 Rec. for use in CWDM systems.The range of wavelengths is from 1271 nm to 1611 nm with 20 nm as separation between channels; 4, 4+4, 8, 12 and 16 groups of channels are specified distributed along the complete range.
In a holographic router the tuning of this wavelength range is achieved by changing the spatial period of the hologram ND/n, where n is the number of pairs of bars (2-phases) or number of four bars (4-phases), N is the number of pixels and D the size of the SLM pixels.
The expression which allows the selection of the output wavelength , according to the physical parameters and structure of the device, is (Martin Minguez & Horche, 2007): where x is the distance from the optical axis to the output fiber, f is the focal length of the lens, d is the spatial period of the fixed diffraction grating and M is the number of phases.Fig. 12 shows some tuned wavelengths according to different values of n, for a typical holographic device.
Fig. 12.Four different tuned wavelengths at the output of the holographic router As we have commented in Section 4.1, for wavelengths close to the central value, the filter response is very similar to the Gauss filter; for wavelengths far from the central value, the filter response is similar to a 3 nd order Bessel filter with less out band attenuation.Both of them have a linear phase characteristic, which means a constant group delay.These simulations are in agreement with experimental measurements shown in (Parker et al.., 1998).
Holographic device losses
The losses produced in this holographic router, as we have commented before, are due to the following causes: diffraction loss: the total light diffraction is composed of the transmissive diffraction in the grating (twice) and the reflexive diffraction in the SLM.Using a 4-phases SLM and a grating with 1 st order intensity efficiency of about 80%, the total losses are 10 x log (3x.0.8) ≈ 3 dB.intrinsic SLM loss: it is due to the liquid crystal (LC) switching angle different from optimal and the coverage of SLM aperture (1/e 2 of ND x ND).A typical value is 2 dB.fibre/lens coupling: by considering 90% efficiency, 1 dB is added.In total, with an optimized holographic device, a loss about 6 dB has to be taken into account.
Channel power equalization
Power equalization at the all output channels is necessary to compensate the different response of the network components and distances for the used channel wavelengths.To reach it and to compensate for the holographic device losses, a gain component, such as a Semiconductor Optical Amplifier (SOA), has to be employed.The total equalization takes into account the gain-wavelength variation of this amplifier, ΔG A , whose typical response is drawn in Fig. 13, (the maximum gain, G A is about 20~25 dB).The target is to have at the output fibers a net loss of 0 dB (GT), according to the equation: where ΔAt is the total attenuation range for channels to be equalized at the input of the device; L HR is the intrinsic holographic router losses (≈ 6 dB) and the term ΔL HR =10 x log (number of channels) has taken into account the additional loss due to the mixed holograms utilized for equalizing all the input channels.This point will be explained in detail in the following paragraphs.Fig. 14 shows the structure of an Equalized Holographic ROADM (EH-ROADM) for 4 input channels with full routing of them to the 4 output fibers.A way to obtain at the output fibers tuned wavelengths with different relative attenuation between them is to control the losses due to the SLM aperture, as pointed out in Fig. 15.
The minimum losses due to the SLM aperture are obtained when the incident light, with a Gaussian distribution, fills the complete surface ND x ND of the SLM.Therefore, the losses are proportional to the quantity of SLM aperture illuminated by the collimated light coming from the lens, as in Fig. 15.A practical way to reach the former condition is by changing the size of the hologram according to the number of active pixels.
Mixed hologram operation
The EH-ROADM is able to select at the output fibers any combination of wavelengths at the input fiber, from all input wavelengths in just one output fiber to each input wavelength at the corresponding output fiber, including all intermediate cases.This operation mode is done by the selection in the SLM of a mixed hologram composed of all individual holograms corresponding to each input wavelength.Fig. 16 shows an example for three input wavelengths and its holograms, formed, in this case, by black and white bars (2-phases).For every input wavelength (channel) a hologram is assigned, where n i (spatial period) produces the pass-band filter for the channel and N i takes into account the number of active pixels to reach the correct attenuation, At channel , to equalize the output signals.This mixed hologram produces the additional loss in the holographic router, 10 x log (number of channels).A more fitted equalization can be obtained by monitoring the outputs with a feed-back loop to adjust the size of the holograms according to the wanted output signal level.
Design calculations
Having chosen the SLM, the focal length of the lens, f, to illuminate with collimated light the complete active surface ND x ND of the SLM (see Fig. 15), is related to the number of pixels N and their size D according to expression (8): where Φ core is the input fiber core diameter and 0 the central wavelength in the operation region.The 3 dB pass-band filter bandwidth of the device, BW, is (Parker et al., 1998) 3/2 2 0 2 if the condition 8D >> d is reached, where d is the fixed grating spatial period.For our calculations, we have a reflective 4-phases SLM with N = 1024 and D = 8 m (ND = 8.192 mm).Then, the focal distance for the lens is 37.655 mm and the BW ≥ 1.52 nm (190 GHz), d being = 6.5 m, the spatial period of a 4-phases transmissive diffraction grating and Φ = 9 m the core diameter of a singlemode fiber.By changing d we can adjust the BW of the holographic filter.
In the expression (13) the selected wavelength of operation is calculated.The value of n is varied from n = 0 (for maximum wavelength) and n = N/4 (for minimum wavelength); the central wavelength 0 is obtained when n = N/8.For the design of a 1x4 router working in the upper band of the CWDM grid, 1471-1611 nm, we take 0 = 1541 nm.In this case min = 1407 nm and max = 1693 nm; these values cover the entire CWDM upper band.The distance from the optical axis to the output fiber, (see Fig. 11), where the 1 st order of the total diffraction is produced, is x = 9.808 mm and the total diffraction angle φ ≈ 14.6º.From ITU G.694 Rec., all CWDM channels are spread Δ = 20 nm to allow Direct Modulated Lasers (DML) wavelength variation with temperature and filter tolerance; therefore, an ΔΛ = (8-1).Δ = 1611-1471 = 140 nm range assumes ΔX = 1260 m, according to the relation: In an equalized holographic router, the directing of the input wavelengths to the output fibers is done by the choice of three parameters: n ij for wavelength tuning, N i for power equalization and Δx j for placing the output optical fibers.Subscript i is related to the number of input wavelengths and subscript j related with the number of output fibers.Haqving fixed the separation between fibers, in our case Δx = 180 m, we obtain the corresponding value of n ij from ( 13), according to the input wavelength(s) and output fiber(s) considered.
As we are managing different sets of n ij values, all of them have to be different in order to avoid cross-talk between wavelengths on different output fibers.
Table 8 shows the holograms (n ij ) and number of active pixels (N i ) for a 4-channels grid according to the ITU G.695 Rec.For instance, in Fig. 11, a mixed hologram 113+95+78+61 addresses the 4 input wavelengths ( 3 + 4 + 5 + 6 ) to the output fibre 3; a mixed hologram 113+121+128+135 addresses 3 to fibre 3, 4 to fibre 4, 5 to fibre 5 and 6 to fibre 6.In each case, every i has the corresponding N i range to assure the power equalization at the output.Table 9 is a summary of the losses in the device (SOA+EH-ROADM) according to the different input channels, whose variation in wavelength is in agreement with Fig. 15.In this case, there is a net gain of 10 dB to compensate for the power variation due to different paths of the input channels along the network.The N i range, 256÷1024, in
CWDM METRO networks application
The use of tunable holographic devices in Access and Metro networks, like demultiplexers or routers has been studied in different papers (Koonen, 2006), (Martin Minguez & Horche, 2010).In Fig. 17 an application for the equalized holographic ROADM is represented.
Design of an holographic router with λ conversion and losses compensation
Fig. 18 shows a device composed of a Semiconductor Optical Amplifier (SOA) and a holographic wavelength router.The SOA performs the wavelength conversion by a non linear operation using the Cross Gain Modulation (XGM) method.An incident wavelength, i, modulated by a digital signal is combined with the wavelength CWj generated by a tunable laser (CW) into the SOA.At the amplifier output, according to different CWj wavelengths, CWj signals are obtained modulated with the digital signal from the incident i wavelength.These CWj signals are also amplified and inverted.The holographic wavelength router, depending on the input signal, CWj, , and the generated hologram (nij) stored in the SLM, addresses this signal to the assigned output.As has been stated, this technology has the drawback of high insertion losses (less than 10 dB, using an optimized device).In order to solve this problem, by combining a SOA with the holographic router, this insertion loss is compensated with the amplifier gain in the saturation zone of operation.A parameter to control in the SOA operation, is related to the amplified spontaneous emission (ASE) because of the impact on the signal distortion.Fig. 19 shows the simulation of this device, composed of three different blocks: a CW tunable laser, a wavelength conversion semiconductor optical amplifier and a wavelength holographic router.In Fig. 20, the response of the Wavelength Conversion and Routing Holographic Device (WCR-HD) is represented for a 2.5 Gb/s input signal, i = 1540 nm, which is converted to an output signal, o = 1520 nm, where the losses of the holographic router are compensated by the gain of the SOA.
Conclusion
In this chapter the design of a singular device for use both in CWDM/DWDM systems has been studied.Applications such as, tunable optical filters, demultiplexers and wavelength routers, using holographic SLM technology, have been reviewed taking into account ITU-T G.694.1 and G.694.2 Recs.for central wavelengths allocation.Application of Computed Generated Hologram design (CGH) to CWDM/DWDM systems has been studied and some comments about this hologram generation technique and its results have been made in order to highlight the phases of the process implementation and the issues related to the diffraction target misalignment and the use of 2-4 phase holograms, etc.
The novel idea in this work is the design of a compatible CWDM/DWDM device able to carry out different multiplexing functions.As we commented before, a better device performance as a tunable filter, demultiplexer or router could be implemented if only one of these functions is required.The design of equalized holographic ROADM devices for applications in CWDM optical networks has been developed.By using a mixed hologram, corresponding to the combination of several input wavelengths, the tuning of a broad range of wavelengths has been obtained allowing the full routing of several channels from the input fiber to the outputs.As it is possible to change the active pixels in the SLM for each hologram, in order to maintain a fixed output power level, channel equalization has been reached.Intrinsic losses of the device have been optimized using 4-phases holograms whose diffraction efficiency, for the 1 st order, is twice that of binary holograms.Also, the ROADM size has been minimizing by using a "2f-folded" instead of a "linear-4f" for the optical structure.To reduce the total insertion losses of the holographic device a SOA has been added increasing the input power range for equalization.An example of use of these ROADM devices in CWDM Metro and Access Networks (PONs) has been reviewed.Another example of application is dealing with the design of a holographic router with losses compensation and wavelength conversion, whose main application is in the interconnection nodes of Metro networks.This device uses a SOA (Semiconductor Optical Amplifier), in the non-linear region, to do the wavelength conversion and, in addition, to supply the gain in order to compensate for the intrinsic losses of the holographic router.
Other applications in Metro networks like path protection between nodes or switch matrix for ring networks interconnection could be implemented showing the versatility of these devices (Tibuleac & Filer, 2010).Laboratory experiments testing the capability of a phase FLC-SLM to be used in these devices have been carried out and results show that, for different types of holograms, the possibility of distributing several wavelengths depends on the diffracted angle and, therefore, enabling the building of filters, demultiplexers or wavelength routers.
Fig. 2 .
Fig. 1. two/four-phases bars holograms Fig 1 shows a bars hologram for 2 and 4-phases and their diffraction target in a far field approach.As we can see, the main difference in the holograms is the grey bars in the 4phases holograms; in this case there is a white bar, a black bar and two different grey bars for addressing the 4-phases (π/4, 3π/4, -3π/4, -π/4); with regard to the diffraction target.Another characteristic is the loss of the symmetry for the diffraction orders.
Fig. 9 .
Fig. 8. (a) Optical bench diagram; (b) experimental optical bench and c) binary phase SLM characteristics 12) where: Δ = 66.1 nm; f = 8 cm; n = 128; N = 256 and D = 15 m.In Fig.10 light spots captured by CCD camera, from the CGH with n = 128 (black and white bars) are shown.In this case the tunable Argon laser with the blue and green colors has been used.The experimental diffracted light spot distances were x blue = 1233.6µm (F 7 ) and x green = 1409.8(F 8 ) µm, separated by ∆x = 176.25 µm according to Fig. 9, in good agreement with
Fig. 15 .
Fig. 15.Losses in the incident light due to different ND x ND hologram apertures
Fig. 17 .
Fig. 17.Application of an EH_ROADM in a CWDM METRO networkA double ring CWDM METRO topology is used to connect this primary access network, through an Optical Line Termination (OLT), with some Fiber to the Office (FTTO) or Fiber to the Home (FTTH) networks with Passive Optical Network (PON) structure; on the other side, a connection to a DWDM METRO network, by an OXC (Optical Cross Connect) with conversion, is provided.The target is to address the wavelengths of the double ring network, 1 , 2 , 3 and 4 to four different PONs with the possibility of wavelength reallocation.
Fig. 18 .
Fig. 18.Device composed of an optical converter and a holographic router
Table 1 .
Relationship between phases and contrast
Table 2 .
Different device applications
Table 3
summarizes the filter figures for CWDM systems applications where channels are allocated between min = 1290 nm and max = 1590 nm, with central wavelength 0 = 1431 nm, and for DWDM systems ( min = 1530 nm and max = 1590 nm, 0 = 1551 nm).
Table 9 .
SOA gain, EH_ROADM losses and total net gain | 10,022.6 | 2011-11-09T00:00:00.000 | [
"Engineering",
"Physics"
] |
Variation of Oxygenation Conditions on a Hydrocarbonoclastic Microbial Community Reveals Alcanivorax and Cycloclasticus Ecotypes
Deciphering the ecology of marine obligate hydrocarbonoclastic bacteria (MOHCB) is of crucial importance for understanding their success in occupying distinct niches in hydrocarbon-contaminated marine environments after oil spills. In marine coastal sediments, MOHCB are particularly subjected to extreme fluctuating conditions due to redox oscillations several times a day as a result of mechanical (tide, waves and currents) and biological (bioturbation) reworking of the sediment. The adaptation of MOHCB to the redox oscillations was investigated by an experimental ecology approach, subjecting a hydrocarbon-degrading microbial community to contrasting oxygenation regimes including permanent anoxic conditions, anoxic/oxic oscillations and permanent oxic conditions. The most ubiquitous MOHCB, Alcanivorax and Cycloclasticus, showed different behaviors, especially under anoxic/oxic oscillation conditions, which were more favorable for Alcanivorax than for Cycloclasticus. The micro-diversity of 16S rRNA gene transcripts from these genera revealed specific ecotypes for different oxygenation conditions and their dynamics. It is likely that such ecotypes allow the colonization of distinct ecological niches that may explain the success of Alcanivorax and Cycloclasticus in hydrocarbon-contaminated coastal sediments during oil-spills.
Members of these ubiquitous genera occupy distinct trophic niches, where usually the aliphatic hydrocarbon degrader Alcanivorax blooms first, followed by the (poly-)aromatic hydrocarbon degrader Cycloclasticus, as polyaromatic hydrocarbons (PAH) are less amenable to degradation (Head et al., 2006).
Despite our improved knowledge of the ecology of MOHCB, their behavior when confronted with fluctuations in environmental parameters is far from understood. Coastal marine sediments are subjected to fluctuating oxygenation and thus redox conditions from tidal cycles, diurnal cycles (and thus photosynthetic oxygenation), and macrofaunal burrowing activities that in turn affect microbial degradation processes (McKew et al., 2013;Cravo-Laureau and Duran, 2014;Duran et al., 2015a,b). The time span of these oxygen intrusions often varies from few minutes to several hours (Wakeham and Canuel, 2006;Militon et al., 2015Militon et al., , 2016. Previous studies demonstrated that anoxic/oxic oscillations promote organic matter biodegradation (Abril et al., 2010), and more particularly hydrocarbon degradation Vitte et al., 2011Vitte et al., , 2013. However, the microbial processes underlying the biodegradation of hydrocarbons within the anoxic/oxic transitional zone is not well understood Cravo-Laureau and Duran, 2014), especially for MOHCB subjected to such extreme fluctuating conditions.
The interaction of microorganisms with their environment is a key question in microbial ecology. Microorganisms have developed several metabolic and behavioral strategies to survive under different environmental conditions (Shade et al., 2012), which include physiological versatility and plasticity (Dolla et al., 2006;Fourcans et al., 2008;Evans and Hofmann, 2012) and dormancy (Lennon and Jones, 2011). Alternatively, the adaptation could involve ecotypes, subpopulations with specialized adaptations to microenvironments (Hunt et al., 2008;Coleman and Chisholm, 2010). The ecotype formation is an adaptive process allowing subpopulations to occupy an ecological niche, a first step in the speciation process (Wiedenbeck and Cohan, 2011). An impressive example is provided by the multiple ecotypes of the cyanobacterium Prochlorococcus, which was found in different abundances according to environmental factors such as light, temperature, and phosphate and nitrate contents (Martiny et al., 2009). Such ecotype diversification supports the success of Prochlorococcus as a dominant phototroph in oligotrophic seawater (Martiny et al., 2006(Martiny et al., , 2009. Ecotypes are recognized as phylogenetic clusters occupying a specific habitat. The existence of ecotypes has be shown by the habitat specificity of 16S rRNA sequences, as demonstrated for Synechococcus adapted to different temperatures (Melendrez et al., 2011) and for the PAH-degrading Alteromonas adapted to different depth of seawater (Math et al., 2012). Based on the oligotyping approach, a computational method that reveals the micro-diversity within sequences clustering in a single OTU (Eren et al., 2013), Kleindienst et al. (2016) proposed ecotypes for the hydrocarbon-degraders Cycloclasticus, Colwellia and Oceanospirillaceae adapted to hydrocarbon gradients.
In order to test the hypothesis that the success of MOHCB, particularly the most ubiquitous Alcanivorax and Cycloclasticus, to colonize distinct niches is based on ecotype diversification, an experimental ecology approach that offers the possibility to test ecological hypotheses under controlled conditions (Cravo-Laureau and Duran, 2014) was implemented. A microbial hydrocarbon-degrading community was maintained in bioreactors and exposed to different oxygenation regimes including permanent anoxic, anoxic/oxic oscillation and permanent oxic conditions. The bacterial diversity dynamic was assessed by16S rRNA gene transcript sequences (using high throughput sequencing technology) and the micro-diversity of Alcanivorax-and Cycloclasticus-related sequences was examined. First, oligotypes were defined and correlated with the redox conditions, suggesting the presence of subpopulations. Then, single nucleotide difference analysis confirmed the presence of cohesive subpopulations for the different oxygenation conditions representing specific ecotypes. The distribution of the ecotypes throughout the incubations explained the dynamics of Alcanivorax and Cycloclasticus during the anoxic/oxic oscillations.
Bioreactors System and Experimental Set Up
The slurry (1.8 L) was distributed in nine bioreactors (with a working volume of 2 l) to apply the three following conditions in triplicate: anoxic/oxic oscillation, permanent oxic and permanent anoxic conditions as detailed (Terrisse et al., 2015). The incubations were carried out for 15 days in batch conditions, with stirring (250 rpm, Stuart SS20), in the dark and at room temperature (ranging from 17 to 24 • C, InPro6800 sensors, Mettler Toledo International Inc.). Bioreactors were maintained for 5 days, under oxic or anoxic conditions (for the permanent anoxic and the anoxic/oxic oscillation conditions) for a microbial community stabilization period, before addition of crude oil and starting anoxic/oxic oscillations for the oscillating conditions. At day 5, Russian export blend crude oil (REBCO) was added with a concentration of 21.2 ± 5.7 mg/g of dry weight sediments. The REBCO is an Ural type crude oil, distilled at 110 • C to eliminate the more volatile hydrocarbon compounds. This oil contained 59.9% of saturated and 24.8% of aromatic hydrocarbons, 10.2% of resins and 5.1% of asphaltenes. In order to be close as possible to conditions prevailing in the environment, where oxygen pulses often occur at timescales of minutes to hours (Wakeham and Canuel, 2006;Militon et al., 2015Militon et al., , 2016, anoxic/oxic oscillations consisted of an alternation of anoxic periods with two 1-day periods of aeration performed at days 7 and 10. Aeration periods were produced by injection of filtered air (Acro R 37 TF Vent Device with a 0.2 µm PTFE Membrane, Life Sciences) using air pumps (flow rate of 70 L/h, Rena air 50) into the gas and aqueous phases, generating bubbling in the latter. Permanent oxic conditions were produced in the same way. Periods of anoxic conditions were achieved by stopping aeration, sealing, closing the system and creating a slight overpressure with nitrogen gas. The same process was used to achieve the permanent anoxic conditions. Dissolved oxygen, temperature (InPro6800 sensors, Mettler Toledo International Inc.), pH and redox potential (InPro 4260i/SG/225 sensors, Mettler Toledo International Inc.) were measured twice a day.
Sample Collection
Samples were collected with a sterile syringe (TERUMO Corporation), connected to bioreactors by a Norprene tube, for chemical and biological analyses. The sampling system was purged before each sampling. 10 mL samples were collected for chemical analysis from each reactor along the incubation at 5.4 (i.e., 10 h after oil addition), 7, 8, 10, 11, and 15 days of incubation. Sampling was performed before switching condition. Samples were stored in amber glass bottles with polytetrafluoroethylene stoppers (WHEATON) at −20 • C. For molecular analyses 1.5 mL of slurry was collected and immediately mixed with 190 µL of RNA stabilization solution (the RNA stabilization buffer was prepared as follow: 5 mL of phenol were mixed with 5 mL of 1 M Na-acetate buffer pH 5.5, then the two phases were separated by centrifugation at 4000 × g for 3 min. The phenolic phase was finally added to 95 mL of pure ethanol) to preserve rRNA transcripts integrity. Samples with RNA stabilization solution were homogenized and centrifuged at 10000 × g for 5 min at 4 • C (Jouan MR 1812). The supernatant was then removed and tubes containing pellets were introduced immediately in liquid nitrogen and stored at −80 • C. Samples were collected in triplicate from each reactor at days 0, 5 (before oil addition), 5.4 (10 h after oil addition), 7, 8, 10, 11, and 15.
Aqueous samples were analyzed after addition of internal standards (perdeuterated PAHs), extracted according to the Stir Bar Sorptive Extraction protocol and analyzed by SBSE-GC/MS as previously described (Stauffert et al., 2013).
Chemstation software was used for determining concentrations of hydrocarbons. The total petroleum hydrocarbons and the target molecules of n-alkanes and PAHs were quantified relatively to the perdeuterated eicosane and PAHs (internal standards) using calibration curves of REBCO crude oil (from 0.05 to 5 mg/mL), of n-alkanes (TRPH Standard from 0.5 to 50 µg/mL, Ultra Scientific, Florida) and of PAHs (CUS-9306 from 0.25 to 25 µg/mL, LGS Standards, France), respectively.
Total DNA and RNA Co-extraction
Genomic DNA and RNA transcripts were co-extracted using the commercial RNA PowerSoil R Total RNA Isolation Kit (MoBio Laboratories). Extractions were performed according to the manufacturer's instructions with a slight modification: step 4 was amended by suspending nucleic acids in 100 µL of SR5 buffer (supplied in the kit) after precipitation with ethanol 70% v/v. The separation and purification of DNA and RNA were performed with the commercial Allprep DNA/RNA Mini Kit (QIAGEN) following the manufacturer's recommendations. RNA and DNA were separately eluted in 100 µL of free DNase/RNase sterilized MilliQ water. RNA extracts were treated with Turbo DNA free kit (Ambion, Applied Biosystems) according to the manufacturer's instructions to ensure total elimination of DNA. DNA total elimination from RNA extracts was checked by negative 16S rDNA PCR amplification on the extracts. The quality and the size of DNA and RNA obtained were verified by electrophoresis on a 1% w/v agarose gel in Tris-Borate-EDTA buffer. RNA quality and concentration were also investigated by micro-capillary electrophoresis using RNA 6000 Nano LabChips kit and an Agilent 2100 Bioanalyzer (Agilent Technologies). RNA Integrity Numbers (RIN) between 7 and 8.5 were obtained for the RNA extracts, indicating good quality. The DNA and RNA extracts were aliquoted and stored at −80 • C.
Reverse Transcription of RNA
The synthesis of a complementary DNA strand (cDNA) from a RNA template strand was performed by reverse transcriptase as previously described (Stauffert et al., 2014b). Equipment and solutions used were certified RNase and DNase free or previously treated with DEPC (diethylpyrocarbonate). The reaction (final volume of 20 µL) was carried out with 10-60 ng of RNA per sample, adding 40 U of RNase OUT (Invitrogen by Life Technology), 0.5 mM of dNTPs, 10 ng/µL of random hexamers (Roche), 0.01 M DTT, 1 × enzyme buffer and 200 U of the Moloney murine leukemia virus reverse transcriptase (M-MLV, Invitrogen by Life Technology), following suppliers' advice. The cDNA products were then used directly in a PCR reaction or stored at −80 • C.
Quantitative-PCR of 16SrDNA on DNA and cDNA DNA and cDNA were amplified using primers 338F (5 -CTCCTAC GGGAGGCAGCAGT-5 ) and 518R (5 -GTATTACCGCGGCTGCTG-3 ) targeting bacterial 16S rRNA as previously described (Giloteaux et al., 2010). LightCycler R 480 SYBR GREEN I Master was used to prepare 10 µL reactions containing 5 µL of 2 × MasterMix, 0.4 µM of each primer, 2.5 µL of 1/100 diluted DNA or cDNA extracts. Quantitative Polymerase Chain Reaction (Q-PCR) was performed with the Roche LightCycler 480 Real Time PCR system. The cycling program was as follows: 95 • C for 5 min followed by 40 cycles with denaturation step at 95 • C for 15 s, hybridization step at 60 • C for 15 s and elongation step at 72 • C for 20 s. At the end of the program, an increase of the temperature from 64 • C to 97 • C provided the melting curves, informing on the amplification quality. The LightCycler 480 Software was used to analyze fluorescence. 16S rRNA copy numbers per µL from DNA or cDNA amplifications were quantified relatively to calibration curves as previously described (Paissé et al., 2012).
Sequencing and Data Analysis
High-throughput sequencing analysis, targeting bacterial 16S rRNA, was performed from 0.1 µg cDNA at days 0 (mix from 3 extract samples for day 0), 5, 5.4, 7 (under oxic and anoxic conditions), and 8, 10, 11, 15. Amplicon 454-pyrosequencing was performed by the Molecular Research DNA laboratory (MR DNA, Shallowater, TX, United States) following process originally described by Dowd et al. (2008). The 16S universal bacterial primers 27Fmod (5 -AGRGTTTGATCMTGGCTCAG-3 ) and 519Rmodbio (5 -GTNTTACNGCGGCKGCTG-3 ) were used, targeting V1-V3 variable regions. A single-step PCR using HotStarTaq Plus Master Mix Kit (Qiagen, Valencia, CA, United States) was used with the following conditions: 94 • C for 3 min, followed by 28 cycles of 94 • C for 30 s, 53 • C for 40 s and 72 • C for 1 min, after which a final elongation step at 72 • C for 5 min was performed. The different PCR products were mixed in equal concentrations before purification using Agencourt Ampure beads (Agencourt Bioscience Corporation, Beverly, MA, United States). Roche 454 FLX titanium instruments and reagents were used to perform the sequencing following manufacturer's guidelines.
The open source software QIIME (Quantitative Insights Into Microbial Ecology) was used for sequence read analysis of the bacterial 16S rRNA gene sequence (Caporaso et al., 2010). From the raw sequencing output, data cleaning was initially performed (Quince et al., 2009) by denoising of the data and by eliminating chimera and sequences with length under 450 bp, with mistakes in primers sequences or with homopolymers. Sequences reads were aligned on a sequence length of 400 bp. The data OTU picking was carried out according to the Usearch method (Edgar, 2010) defining reference sequences at the similarity threshold of 0.97. The taxonomy assignment was performed comparing reference sequences to a reference database of known 16S rRNA genes, the Ribosomal Database Project (RDP) database. Results of the number of times an OTU was found in each sample were tabulated and the taxonomic predictions were added for each OTU. The OTU abundances for each sample were rarefied to the same number corresponding to the minimum read number observed for a sample (3,102 sequences). This normalization of the OTU abundance data per sample was performed with the package vegan (Oksanen et al., 2013; Version: 2.0-10) with R software. The complete dataset was deposited in the NCBI Sequence Read Archive (SRA) database and is available under the Bioproject ID PRJNA383383.
Statistical Analyses
The effects of the incubation conditions (anoxic/oxic oscillation, permanent oxic or permanent anoxic conditions), time and interactions among these factors, on the variability of gene expression rates, biodegradation ratios, coverage values and univariate diversity indexes (richness and Shannon's diversity estimated with R and Mothur software), were tested by two multifactorial analysis of variance. A linear mixed model was performed with the effects of the incubation condition and the time (and their interactions) as fixed factors and a random factor to take into account the effect of repeated measurements over time, corresponding to the non-independent nature of the various samples in bioreactors (lme function in R package nlme). A second linear model was conducted by testing the same factors as fixed factors but without taking into account the random factor (lm function in R). To test the effect of the random factor, analysis of variance (Kunin et al., 2008) has shown no significant difference between these two models, the linear model without repeated measurement was chosen. The logarithmic transformation of the numeric variable was sometimes necessary in order that the residuals follow a normal distribution. ANOVA and Tukey's HSD tests (Honest Significant Difference) were carried out following the linear models to compare between each other the different modalities of the various factors and highlight significant differences. Permutational multivariate analysis of variance (PerMANOVA) tested significance among incubation conditions and time as fixed factors and the random factor to take into account the effect of repeated measurements over time, with 16S rRNA sequencing data (based on dissimilarity matrix of the Bray-Curtis distances). Bacterial community composition (16S rRNA sequencing data) under the different incubation conditions was analyzed using non-metric multidimensional scaling (nMDS) implemented in Primer 6 (version 6.1.16). For nMDS ordinations, the Bray-Curtis distance was used to generate dissimilarity matrices. Confidence ellipses were based on cluster analyses (Primer 6 software; version 6.1.16). Schematic representations of OTU distribution according to incubation conditions and time were drawn with ade4 package in R software (Dray and Dufour, 2007).
Micro-Diversity Analyses
Alcanivorax and Cycloclasticus were perfect candidates for microdiversity analysis, since they were largely present within our samples playing a key role in oil-degrading microbial assemblage. First, subpopulations were examined by oligotyping (Eren et al., 2013). Oligotyping analysis generates oligotypes by systematically identifying nucleotide positions that represent information-rich variation among closely related sequences. The identification of nucleotide positions of interest was performed using the Shannon entropy (Shannon, 1948). A total of 53,842 and 26,120 sequences were extracted respectively from the dominant OTUs (represented by more than 100 sequences) of Alcanivorax ). An alignment was performed against the SILVA database (release 128) and sequences were trimmed to a consistent start and end position. Using the Shannon entropy analysis, a total of 35 and six information-rich positions contributing to the Alcanivorax and Cycloclasticus oligotypes were identified. To reduce the noise, only oligotypes that occurred in more than 1% of the reads for at least one sample in which the most abundant unique sequence represented more than 0.01% of all reads, were retained. The analysis of the distribution of oligotypes was conducted from the raw counts data without normalization, because the identified oligotypes presented low diversity in each sample. Then, in order to resolve in depth the distribution of each genus within the different oxygenation conditions, a new clustering of Alcanivorax and Cycloclasticus was performed using the swarm algorithm (single-linkage clustering method) implemented in Qiime (Caporaso et al., 2010;version 1.9.1), with a local clustering threshold of 1 (d = 1). Thus, clusters of sequence were obtained differing by one nucleotide defining swarm OTUs that correspond to ecotypes. Only swarm OTUs composed of more than 100 sequences were retained. The distribution of these swarm OTUs was represented in heatmaps carried out with DECIPHER (R package; Wright, 2016) using the log-transformed data in order to deal with the skewed and wide distribution of the raw data. Since swarm OTUs represented cohesive subpopulations specifically correlated with an oxygenation condition they were defined as ecotypes.
Redox Status and Hydrocarbon Degradation
The redox and oxygen saturation, followed during the 15 days bioreactor-incubation of the hydrocarbon-degrading bacterial community, indicated that the hydrocarbon-degrading microbial community was effectively subjected to three oxygenation regimes, namely permanent oxic, permanent anoxic and anoxic/oxic oscillation conditions (Supplementary Figure 1). Under anoxic/oxic oscillation condition, the oxygenation periods at days 7 and 10 were characterized by a liquid phase with oxygen saturation above 50% and redox potentials higher 15 mV while the anoxic periods were characterized by a liquid phase with oxygen saturation at 0% and redox potentials bellow −200 mV as observed for the permanent anoxic condition. Such redox oscillations modify the availability of oxidants and reductants, which in turn influences microbial community structure (Lipson et al., 2015) and metabolic activities (DeAngelis et al., 2010), supporting increased microbial activity (Fenchel and Finlay, 2008). Previous studies demonstrated that anoxic/oxic oscillations promote organic matter biodegradation (Abril et al., 2010), and more particularly hydrocarbon degradation Vitte et al., 2011Vitte et al., , 2013. In our study the n-C 17 /pristane ratio (McKenna and Kallio, 1971) and phenanthrene/dimethylphenanthrene ratio (Michel and Hayes, 1999), used as indexes reporting the biodegradation for n-alkane and PAH respectively, indicated that the most efficient biodegradation for n-alkanes was under anoxic/oxic oscillation condition while similar efficiency was observed for PAH biodegradation under permanent oxic and anoxic/oxic oscillation conditions at the end of incubation (Supplementary Figure 2). It was notable that n-alkanes were progressively depleted under anoxic/oxic oscillation conditions while PAH biodegradation was more efficient during the aeration periods. This observation suggests co-stimulation of aerobic and anaerobic metabolism for the biodegradation of n-alkanes as proposed for organic matter degradation (Abril et al., 2010) while PAH biodegradation involved exclusively aerobic metabolisms. The latter implies that PAH-degrading microorganisms developed strategies for their maintenance under anoxic conditions. Such an assumption was supported in our study by a significant correlation (Pearson correlation index r = 0.904, p-value < 0.05) between PAH biodegradation index (phenanthrene/dimethylphenanthrene ratio) and expression rate (rRNA transcripts copies number/rRNA genes copies number) of 16S rRNA gene (Supplementary Figure 3) indicating growth stimulation of PAH-degrading microorganisms after oxygen input. It is thus likely that PAH-degrading microorganisms were in dormancy during the anoxic periods of the anoxic/oxic oscillation condition. Dormancy strategies play crucial role in microbial community stability in fluctuating environmental conditions, particularly by providing seed bank maintaining metabolic diversity through various mechanisms including persistent-subpopulation ecotypes, niche complementation and functional redundancy (Lennon and Jones, 2011;Shade et al., 2012). Dormancy strategies have been evidenced for marine microbial communities in coastal areas (Campbell et al., 2011;Hugoni et al., 2013). However, in a previous study we demonstrated that rhd gene transcripts, encoding ring hydroxylating dioxygenase involved in the first step of PAH biodegradation, were continuously produced during anoxic/oxic oscillation conditions including the anoxic periods (Vitte et al., 2013) suggesting a metabolic activity of some PAHdegrading populations even in absence of oxygen. It is likely that some of the hydrocarbon-degrading populations may adopt a dormancy strategy while another part would remain metabolically active. Determining the dynamic of the hydrocarbon-degrading populations during fluctuating redox conditions at a fine taxonomic scale may help to unveil the microbial processes involved in the adaptation to fluctuating redox conditions.
Influence of Oxygenation Regimes on the Overall Bacterial Community Organization
In order to determine the mechanisms underlying the behavior of hydrocarbon-degrading bacterial communities under different oxygenation regimes, bacterial community compositions were characterized at different incubation times for the three oxygenation conditions by high throughput sequencing of 16S rRNA gene transcripts, thus assessing active bacterial communities. After trimming and rarefication 3,102 sequences were obtained per sample corresponding between 177 and 600 OTU 97 s at the species level (97 % similarity threshold) per sample. The characteristics of the revealed diversity are presented in Supplementary Table 1. Good's coverage were above 0.90 indicating that the number of sequences per sample was sufficient to describe the accessible diversity. The α-diversity indexes [Richness (R) and Shannon (H)] varied significantly with the condition (PerMANOVA, p-value < 0.001), the time (PerMANOVA, p-value < 0.005 and <0.05 respectively) and time variation was different between conditions (PerMANOVA, p-value < 0.001). Furthermore, R and H were higher for permanent anoxic and permanent oxic conditions (PerMANOVA, R > 400 OTUs; p-value < 0.001; H > 3.6 and 4.0 respectively) than for anoxic/oxic oscillating conditions after the first period of aeration (R < 250 OTUs; H < 2.5). These observations corroborated the effect of oxygen regime on bacterial community organization, which was further supporter by nMDS analysis showing three main clusters (i) anoxic/oxic oscillating communities, (ii) permanent oxic communities and (iii) anoxic/oxic oscillating + permanent anoxic communities (Figure 1). The Venn diagram comparing bacterial communities revealed that 1,736 OTU 97 s were shared between the three conditions, which represent 46, 49, and 56% of total OTU 97 s for the permanent anoxic, permanent oxic and anoxic/oxic oscillating conditions respectively (Figure 2), suggesting that a large proportion of microorganisms have the capacity to tolerate the different oxygenation regimes. It is notable that these OTU 97 s represented above 70% of the sequences retrieved in each condition corresponding thus to the most abundant OTU 97 s. Among these shared OTU 97 s, the most abundant were related to the MOHCB genera Alcanivorax (37%) and Cycloclasticus (28%), some species of which related species have been described as aerobic and microaerobic (Dyksterhouse et al., 1995;Yakimov et al., 1998;Lai et al., 2013). Regarding the specific OTU 97 s for each condition, low abundant OTU 97 s that may play an important role in the microbial assemblage organization and functioning were found. Among the less abundant OTU 97 s (Supplementary Figure 4), sequences related to genera know to present hydrocarbon degradation capacities were found, such as Thalassolituus (Yakimov et al., 2007), Marinobacter (Duran, 2010), Marinobacterium (Sherry et al., 2013), Pseudomonas (Paisse et al., 2011) and Desulfobacterium (Paissé et al., 2008). Interestingly, the specific sequences were also related to Cycloclasticus and Alcanivorax OTU 97 s, which dominated the less abundant OTU 97 s in each condition, except in the permanent anoxic condition where specific Alcanivorax OTU 97 s were observed at low abundance (<50 sequences). It is important to note that under permanent anoxic conditions the specific OTU 97 s represented 2% of the sequences corresponding thus to rare OTU 97 s (Figure 2), which were dominated by Cycloclasticus and Ilyobacter. Members related to Ilyobacter have been found specialized in the degradation of hydroaromatic compounds under anoxic conditions (Brune et al., 2002). Both Alcanivorax and Cycloclasticus are widely distributed, detected in various marine environments including surface water, hydrothermal vents, deep sea water bodies, coastal and mudflat sediments (Bordenave et al., 2004a(Bordenave et al., , 2008McKew et al., 2007;Yakimov et al., 2007;Staley, 2010;Paisse et al., 2011;Coulon et al., 2012). It is not surprising to observe these microorganisms together because they use different hydrocarbon substrates as carbon and energy sources (McKew et al., 2007;Coulon et al., 2012). More intriguing is their presence in anoxic conditions, particularly for Cycloclasticus, species of which related have been described as aerobic bacteria. However, Cycloclasticus species have been detected in bio-irrigated coastal marine sediments (Montgomery et al., 2008) and isolated from marine polychaete burrows in an intertidal mudflat (Chung and King, 2001), suggesting that related members of this genus have the capacities to withstand fluctuating environmental conditions. Additionally, Cycloclasticus relatives have been found active in methane-enriched microcosms (Sauter et al., 2012) and environments (Redmond and Valentine, 2012). Recently, the metabolic versatility of Cycloclasticus in the degradation of hydrocarbons was demonstrated (Rubin-Blum et al., 2017) and Cycloclasticus ecotypes, specific for the deep sea methane-rich hydrocarbon plume arose with the Deepwater Horizon oil spill, have been proposed (Kleindienst et al., 2016) based on the oligotyping approach (Eren et al., 2013).
Influence of Oxygenation Regimes on Alcanivorax and Cycloclasticus Populations
The distribution of Alcanivorax and Cycloclasticus OTU 97 s within the different oxygenation conditions was examined in order to determine whether specific OTU 97 s explain their survival under the distinct conditions. Alcanivorax and Cycloclasticus were represented by six and eleven OTU 97 s respectively, the relative abundance of which varied between the different conditions during the incubation period (Figure 3). It is important to note that the distribution of Alcanivorax and Cycloclasticus OTU 97 s did not show difference between biological replicates (PerMANOVA p-value > 0.05). Alcanivorax and Cycloclasticus genera adopted different strategies to face fluctuating conditions: Alcanivorax taking advantage of brief favorable condition (oxygenation) for growth under fluctuating conditions whereas Cycloclasticus requiring stable conditions to attain its maximun abundance (Figure 3).
Alcanivorax was the most abundant genus under anoxic/oxic conditions despite the fact that relatives were not detected in the metabolically active bacterial communities at the beginning of the incubations. They were below the detection limit under the permanent anoxic incubation. Alcanivorax related species have been described as nitrate reducing bacteria (Yakimov et al., 1998), but here the redox potential was obviously too low to allow their development. Two OTU 97 s (ID.A37 and ID.A2593) were found abundant under the permanent oxic condition and poorly represented under the other conditions ( Figure 3A). These OTU 97 s were closely related to Alcanivorax sp. NBRC 102021 (acc. no. AB681668) isolated from seawater for ID.A2593 and Alcanivorax sp. OM-2 (acc. no. AB053128) isolated from oiled marine sediments for ID.A37. Interestingly, four Alcanivorax OTU 97 s (ID.A0, ID.A9793, ID.A8753 and ID.A485), although they were either not detected or in very low abundance under the permanent oxic condition, were found with high relative abundances following periods of aeration under the anoxic/oxic oscillating condition (Figure 3A). This observation suggested that an episodic presence of oxygen was favorable for Alcanivorax growth. Episodic intrusion of oxygen in anoxic zones has been shown to support aerobic metabolism in typical anoxic environments (Ulloa et al., 2012). It is likely that such Alcanivorax related OTU 97 s lack the capacity to outcompete strict aerobes in the oxic condition and adopt a fast growing strategy (r-strategist) when oxygen becomes available in the anoxic condition, which is a typical lifestyle in unstable and unpredictable environments (Andrews and Harris, 1986). These OTU 97 s were closely related to a sequence retrieved in desalination plant (IDA484: clone UV-RV-025, acc. no. HQ326427) and a sequence detected in deep-sea petroleum contaminated sediment (ID.A0, ID.A9793, ID.A8753: clone TVG01-83, acc. no. KF545057). The presence of different Alcanivorax related OTU 97 s is in accordance with previous reports showing the functional redundancy of hydrocarbon degradation within the Alcanivorax genus . Alcanivorax species have been shown to exploit distinct hydrocarbon substrates as carbon and energy sources following different physiological strategies and presenting different susceptibilities to hydrostatic pressure . Such metabolic diversity may explain Relative abundances of the specific OTU 97 s for each condition. The analysis was performed at the genus level applying a threshold similarity of 97% for OTU identification (OTU 97 s). The analysis includes all samples covering the whole incubation period. The analysis is based on biological triplicates. The other * group combine the genera related to rare OTUs that are represented by less than 1% of total sequences per sample. The relative abundances of the OTU 97 s belonging to the "other group" are presented in Supplementary Figure 4. niche differentiation as observed for Alcanivorax phylotypes, phylotype SK2 occupying floating biofilms while phylotype OM-2 living in the sediment (Coulon et al., 2012). In our study, the dissimilar behavior shown by OTU 97 s ID.A9793 and ID.A8783 between the two oxygenated phases suggested the presence of subpopulations. Cycloclasticus, which was also among the most abundant genus, was represented by OTU 97 s showing different behavior (Figure 3B). Eight OTU 97 s (ID.C9747, ID.C3, ID.C8389, ID.C4039, ID.C7228, ID.C1181, ID.C178, ID.C2805) were detected under all conditions, but they were most abundant under permanent oxic condition. These OTU 97 s were related to Cycloclasticus pugetii 15BN12L-10 (acc. no. KF470997) isolated from Artic Ocean deep-sea hydrocarbon-contaminated sediment (ID.C9747, ID.C3 and ID.C8389), sequences of clones (acc. nos. AM882527, EU438147 and KJ094255) detected in petroleumcontaminated coastal marine sediments (ID.C4039, ID.C178 and ID.C2805 respectively), sequences of clones (acc. nos. FJ981469 and FJ980914) obtained from deep-sea hydrothermal plume (ID.C7228 and ID.C1181 respectively). Two OTU 97 s (ID.C5592 and ID.C7005) were detected under both permanent anoxic and permanent oxic conditions while the OTU 97 s ID.C9052 was not detected under the permanent anoxic condition. These OTU 97 s were closely related to Cycloclasticus pugetii 15BN12L-10 (ID.C7005: acc. no. KF470997), extracellular symbiont BG-C1 of Benthomodiolus (ID.C5592: acc. no. AB679348) and a sequence obtained in petroleum-contaminated coastal sediment (ID.C9052: acc. no. FM242294). Cycloclasticus was more sensitive to anoxic/oxic oscillations than Alcanivorax (Figure 3). Although this condition was unfavorable for its optimal growth, Cycloclasticus showed almost constant low abundances with only slight increases during the oxygenated phases, behavior that is more typical of K-strategist with slow growth. Such a strategy may explain the metabolic activity of some PAH-degrading populations that were shown previously to express rhd gene transcripts even in absence of oxygen during anoxic/oxic oscillations (Vitte et al., 2013). Because Cycloclasticus species been described as obligate aerobes (Staley, 2010), the presence of a single OTU 97 in all conditions may FIGURE 3 | Relative abundances of Alcanivorax OTU 97 s (A) and Cycloclasticus OTU 97 s (B) according to oxygenation regimes. The heatmap was performed at the species level applying a threshold similarity at 97% for OTU identification (OTU 97 s). The OTU 97 s abundances were normalized by rarefication to the lowest read count of a sample. Phylogenetic trees are shown on the left (the bars represent 0.8% estimated sequence divergence). ID sequences and the highest BLAST hits are indicated on the right. Color legend shows OTU 97 s abundance. The analysis is based on biological triplicates. * indicates oxygenation period under anoxic/oxic oscillation condition.
suggest the existence of distinct ecotypes, which correspond to a subpopulation that has acquired the genetic capacity to inhabit a slightly different ecological niche (Konstantinidis and Tiedje, 2005).
Micro-Diversity
The micro-diversity of Alcanivorax and Cycloclasticus was examined in order to determine whether ecotypes explain their distribution in ecological niches distinguished by their oxygen availability. The resolution of 16S rRNA gene transcript sequences at the subpopulation level allows the identification of specific ecotypes inhabiting distinct ecological niches (Eren et al., 2013;Tikhonov et al., 2015). We acknowledge the inherent biases of 16S rRNA gene-based approaches to define ecotypes, particularly the limited capacity in defining phylogenetic cohesive populations (Berry et al., 2017) and functional traits (Martiny et al., 2013). But such 16S rRNA-based approaches provide useful information to draw hypotheses that could explain observed ecological behavior of hydrocarbon-degrading bacteria.
hydrocarbon polluted coastal sediment (acc. no. AM882527 associated to OTU97 ID.C4039). Interestingly, oligotype C4 emerged specifically after the second oxygenation phase under the anoxic/oxic oscillation condition (Figure 4B), which was observed within all three biological replicates. This oligotype was affiliated to Cycloclasticus pugetii (acc. no. KF470997 associated to OTU 97 s ID.C3 and ID.C9747). The oligotypes explained only partly the behavior of Alcanivorax and Cycloclasticus related species to withstand oxygenation conditions. For example, it is not clear whether specialized ecotypes drive the success of Alcanivorax under anoxic/oxic oscillation since oligotype A1 was found dominant during all incubation period for both permanent anoxic and anoxic/oxic oscillation conditions. Similarly, although the distribution of Cycloclasticus oligotypes showed specialized ecotypes for anoxic permanent condition, the distribution of oligotypes C1 and C3 found under both anoxic and oxic samples suggested non-homogeneous subpopulations. We thus explored the micro-diversity of Alcanivorax and Cycloclasticus more deeply (swarm analysis with a dissimilarity threshold = 1), which revealed 10 and 23 subpopulations respectively (Figure 5). We assume that these subpopulations represent ecotypes since they occupy distinct ecological niches as reported by Tikhonov et al. (2015), who defined ecological subpopulations differing by one nucleotide in their 16S rRNA gene sequence. Furthermore, swarm analysis revealed cohesive subpopulations specifically observed under an oxygenation condition (Figure 5). PerMANOVA did not reveal difference in the distribution of Alcanivorax and Cycloclasticus ecotypes within biological replicates (p-value > 0.5). Alcanivorax ecotypes were related to the OTU 97 s ID.A9793 and ID.A8753 whereas the OTU 97 s ID.A0 and ID.A37 (ecotypes A1 and A2 respectively) showed uniform and cohesive populations irrespective of the oxygenation condition ( Figure 5A). The ecotypes' succession reflected the distinct ecological niches occurring throughout the incubation under the anoxic/oxic oscillation condition characterized not only by oxygen availability but also by metabolites appearing during aliphatic hydrocarbon degradation as well as the presence of other organic substrates. Ecotypes A6 (91.5% of oligotype A6) and A8 (100% of oligotype A3) related to OTU 97 ID.A9793 emerged after the first oxygenation period while ecotype A5 (100% of oligotype A6) related to OTU 97 ID.A8753 was observed after the second oxygenation period (Figure 5A). Ecotypes A3 (100% of oligotype A1) and A7 (94.6% of oligotype A1) bloomed during the second anoxic period while ecotypes A4 FIGURE 5 | Relative abundances of Alcanivorax ecotypes (A) and Cycloclasticus ecotypes (B) according to oxygenation regimes. The heatmap was performed at the sub-species level (swarm dissimilarity threshold = 1 on log-transformed data) representing ecotypes, which correspond to specific cohesive subpopulations correlated to an oxygenation condition. Each ecotype represents more than 100 sequences. The color legend at the left indicates the ecotype abundance (sequence counts). ID sequences and their affiliation (highest BLAST hits) are indicated on the right. The analysis is based on biological triplicates. * indicates oxygenation period under anoxic/oxic oscillation condition.
(100% of oligotype A5), A9 (100% of oligotype A4) and A10 (100% of oligotype A6) appeared at the end of the incubation after the third anoxic phase. Exploring the micro-diversity at one nucleotide difference level allowed the identification of specific ecotypes occupying distinct ecological niches during the incubation under anoxic/oxic oscillation condition. It is likely that such Alcanivorax ecotypes may take advantages of oxygen-fluctuating hydrocarbon-contaminated environments allowing them to compete with other alkane-degraders such as Marinobacter and Thalassolituus detected in our study at lower abundances (0.27 and 0.51% of total OTU 97 s, respectively), and probably with other alkane-degraders such as Oleibacter, Oleispira and Colwellia (all below 0.005% in our study), which have been detected in hydrocarbon-contaminated environments (Coulon et al., 2012) including during Deepwater Horizon oil spill (Kleindienst et al., 2016;Yang et al., 2016a).
CONCLUSION
An investigation of the behavior of Alcanivorax and Cycloclasticus, the most widely distributed MOHCB, under well-controlled different oxygenation regimes, revealed that members of these genera adopted distinct strategies to develop under oxygen-fluctuating conditions in oiled sediments. Anoxic/oxic oscillations were more favorable for Alcanivorax, which was more abundant than under the other conditions, suggesting that Alcanivorax behaved as a typical r-strategist. In contrast, Cycloclasticus abundance was lower in such fluctuating conditions compared with both permanent anoxic and permanent oxic conditions. Oligotyping revealed that the distribution of Alcanivorax and Cycloclasticus subpopulations (oligotypes) correlated with the redox conditions, but the analysis was unable to fully explain how these hydrocarbon-degraders may withstand anoxic/oxic oscillations. Further micro-diversity analysis at one nucleotide difference level (swarm analysis) allowed the identification of specific subpopulations, which we assume correspond to ecotypes occupying distinct ecological niches during the incubation under anoxic/oxic oscillation condition characterized by substrates and oxygen availabilities. Such ecotypes allow the colonization of distinct ecological niches that may explain the success of these MOHCB genera during an oil-spill. However, further efforts are required to isolate and characterize Alcanivorax and Cycloclasticus ecotypes to gain new insights on their ecological role within microbial networks involved in oil degradation in marine environments.
AUTHOR CONTRIBUTIONS
RD, CC-L, and CC conceived and designed the study. RD, CC-L, CC, and FT ran the experiments. RD, CC-L, CC, FT, CN, AD, and TMG analyzed the resulting data. RD wrote the manuscript. CC-L, CN, and CC revised the manuscript.
FUNDING
We thank the French National Research Agency (ANR) for their support through the DECAPAGE (ANR 2011 CESA 006 01) project. We acknowledge the financial support by a Ph.D. grant from the French Ministry of Higher Education and Research to CN. | 8,890 | 2017-08-16T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Webb Space Telescope primary mirror development: summary and lessons learned
Abstract. The primary mirror is central to the success of the Webb Space Telescope and the product of 100s of engineers and technologists who invented technologies and processes for its manufacture and test. We summarize the Webb mirror technology development program, explain how the technology was demonstrated to be TRL-6 (including the importance of an Engineering Development Unit), and list some of the author’s personal lessons learned.
Introduction
][3] From the beginning, mirror technology was identified as a critical capability.The summer study of 1996 determined that achieving the desired science objectives required a never before demonstrated space telescope capability: one with an 8-m primary mirror (providing 50 m 2 of collecting aperture) that is diffraction limited at 2 μm and operates at temperatures below 70 K. 4,5 Furthermore, because of launch vehicle limitations, two very significant architectural constraints were placed upon the telescope: segmentation and mass.Each of these directly resulted in specific technology capability requirements.First, because the launch vehicle fairing payload dynamic envelope diameter is ∼4.5 m, the only way to launch an 8 m class mirror is to segment it, fold it, and deploy it on orbit.Second, because of launch vehicle mass limits, the primary mirror allocation was only 1000 kg-resulting in a maximum areal density of 20 kg∕m 2 . 6inally, a cost goal of $500 M was levied on the Optical Telescope Assembly (OTA)-yielding an area cost of 50 M∕m 2 .Also, a production goal of 1 m 2 of glass per month was defined. 7n assessment of the pre-1996 state-of-the-art (as demonstrated by existing space, ground, and laboratory test bed telescopes) indicated that the necessary mirror technology was at a technology readiness level (TRL) of 3 (see Table 1).The largest space telescope was Hubble.Its 2.4m glass primary mirror has an areal density of 180 kg∕m 2 and operates at 300 K. Additionally, its primary mirror assembly has an areal density of 240 kg∕m 2 , and its OTA has an areal density of 420 kg∕m 2 .All values were significantly higher than what NGST required.Ground telescopes, such as Keck, demonstrated 10-m class semiactively controlled segmented mirrors.But as ground telescopes, they were exceedingly massive (2000 kg∕m 2 ) and thermally unsuitable.Test beds, such as the ITEK Advanced Large Optical Telescope (ALOT) and the Kodak Advanced Optical System Demonstrator (AOSD), demonstrated a proof of concept for a 4-m class pseudospace-qualifiable actively controlled segmented telescope in a laboratory environment; 8,9 the US Air Force Large Active Mirror Project (LAMP) demonstrated a 4-m actively controlled segmented primary mirror operating in a vacuum environment. 10But again, these test beds were 2× to 6× too massive for Webb (50 to 150 kg∕m 2 ) and only operated at ambient temperatures.The largest cryogenic mirror under development was the 0.85-m diameter Infrared Telescope Technology Testbed (ITTT) Beryllium primary mirror, which would eventually fly in the Spitzer Space Telescope in 2003.Additionally, the cost per square meter of the primary mirror for both Hubble and Spitzer was ∼$10 M∕m 2 (FY 2010), and the production rate for Hubble had been ∼1 year∕m 2 of polished glass, whereas Spitzer produced 1 m 2 in 4 months.
Finally, because one cannot make what cannot be measured, the Webb mirror technology development program required the invention and development of new optical metrology technologies.
This paper reviews the Webb mirror technology development program, explains how the technology was demonstrated to be TRL-6 [including the importance of an Engineering Development Unit (EDU)], and lists some of this author's personal lessons learned.This paper summarizes (by merging three papers) [11][12][13] 15 years of work by many people and organizations.
Mirror Technology Development
Based on the 1996 assessment and architectural concept studies performed by Lockheed-Martin, TRW (now Northrop), and NASA Goddard Space Flight Center (GSFC), it was concluded that NGST was feasible-provided that a well-planned, aggressive technology development effort was implemented early in the development phase. 4Thus a systematic mirror technology development program was initiated to invent mirror systems that could meet the NGST requirements; reduce the cost, schedule, mass, and risk of such mirror systems; and demonstrate a TRL of 6.An excess of $40 M was invested in mirror technology development from 1998 to 2004.As the lead for NGST Mirror Technology Development, Marshall Space Flight Center (MSFC) managed the investment and provided the study's principal investigator.The investment occurred through a series of related contracts: Subscale Beryllium Mirror Demonstrator (SBMD, $1.5 M), NGST Mirror System Demonstrator (NMSD, $15 M), and Advanced Mirror System Demonstrator (AMSD, $26 M), as well as several small technology studies and Small Business Innovative Research contracts.Additional mirror technology developments were conducted under the TRW (now Northrop) and Lockheed Pre-Phase-A Architecture Study Contracts.
The mirror technology development program was explicitly designed to be broad, follow a sequential or spiral development path, and employ phased down-select competition to produce TRL-6 mirrors.Specific technology areas investigated included substrate material (glass, beryllium, silicon carbide, nickel, etc.; mechanical, thermal, and optical material properties; and ability to manufacture large enough substrates; etc.); mirror design (open back, closed back, arched, thin face sheet; launch loads; etc.); architecture (passive, active, rigid, semirigid, etc.); fabrication process (substrate fabrication, grind and polish, and coating); metrology (vibration insensitivity, cryogenic characterization, etc.); and performance (cryogenic, thermal, mechanical, launch loads, etc.). 14,15Full and subscale mirror systems and their constituent components (i.e.., flexures, coatings, and actuators) were fabricated and cryogenically tested.Significant investments were made in facilities, equipment, procedures, and expertise.Also, to improve the ability of models to accurately predict on-orbit performance, an extensive program was conducted to characterize the cryogenic properties [i.e., coefficient of thermal expansion (CTE) and CTE uniformity, dynamic dampening, stiffness, and tensile strength] of various mirror and structure materials as well as their susceptibility to micrometeoroid impacts.
Subscale Beryllium Mirror Demonstrator
The SBMD project produced a 0.53-m diameter beryllium mirror with a 20-m radius of curvature (ROC) mounted on a solid Be support structure built by Ball Aerospace (BATC) (see Fig. 1). 16It was cryogenically tested multiple times at MSFC and provided invaluable experience and learning. 17For example, SBMD had cryogenic quilting (cryo-quilting), but the mechanical model did not predict any cryo-quilting.After several design iterations, how to properly model the cryo-quilting was learned. 18Using this knowledge, new rules were defined for how to design lightweight beryllium mirrors without cryo-quilting.These new design rules were successfully proven on AMSD.Additionally, SBMD taught valuable lessons on how to design cryogenic interfaces that do not distort the mirror surface shape.SBMD was also used to certify that the Webb gold coating, uncorrectable surface figure error, and creep were at TRL-6 (see Table 7).However, great caution is advised whenever extrapolating technical performance results from small mirrors to large mirrors.Given its size and design, SBMD was significantly stiffer than either AMSD or the Webb flight mirrors.SBMD was also important as the first use of O-30 beryllium for a cryogenic mirror.In 1996, the state-of-the-art cryogenic mirror was the 85-cm Spitzer Telescope made of I-70 beryllium 19 with a diffraction limited performance of 5 μm.But I-70 Be was not a good choice for the NGST primary mirror.Because it was produced using a mechanical pulverization process, its powder had irregular grain shapes.This irregularity limited how densely the powder could be packed into a hot isostatic pressure (HIP) can-which limited the maximum size mirror that could be made.Also the irregular grain shapes resulted in large CTE inhomogeneity.The solution was O-30 Be developed by Brush Wellman for the Air Force in the late 1980s.Because O-30 Be is a spherical powder material, it has a high packing density (thus allowing for hot isostatic pressuring of larger billets), and its CTE distribution is very uniform (which results in smaller cryo-distortion and higher cryo-stability).Also because O-30 Be has a lower oxide content than I-70 Be, it can achieve a smoother polished surface (i.e., less scatter).The ability to HIP a meter class billet was demonstrated in the late 1990s via the VLT secondary mirror.By 1999, Brush-Wellman had full production capability sufficient for the NGST program.This author's personal lessons learned from SBMD include the following.
• Do not trust models to validate performance-test to validate performance.Not on SBMD, nor on any subsequent study, did an apriori model correctly predict a mirror's thermal performance (i.e., cooling rate or cryo-deformation).Models were only able to replicate test data after the fact.• Validate models on the smallest possible test article before scaling up, and iterate until the model matches the data within the allocated error budget uncertainty.• The importance of repetition and learning.The first time SBMD was cryo-tested, the entire process took 3 months, but after several iterations, the MSFC X-Ray and Cryogenic Facility (XRCF) Team could do a test in a month.By the end of Webb, six mirrors were being cryo-testing at a time.• The importance of subscale demonstrators.SBMD offered the team invaluable early experience on a relatively low-cost but relevant subscale system-including the opportunity to understand the impact of design parameters on cryo-performance.
NGST Mirror System Demonstrator
The NMSD project was the most technically aggressive study.It sought to explore the limits of light weighting and successfully showed what does not work.NMSD clearly demonstrated the important roles that CTE and mechanical stiffness play in the ability to design and manufacture a stable mirror system.NGST produced two 1.6-m hexagonal shaped spherical mirrors with a 20 m ROC and areal density <15 kg∕m 2 .The two mirrors were manufactured by Composite Optics Inc. (COI) and the University of Arizona.The COI mirror was a thin glass sheet bonded to a rigid graphite composite structure.The Arizona mirror was a thin glass sheet attached to a graphite composite structure via 166 actuators.Both NMSD mirrors took significantly longer to make and achieved significantly lower cryo-performance than expected.The causes for these results were assessed to be CTE mismatch and inhomogeneity; too low of areal density (i.e., too low stiffness); and overly complex designs.This author's personal lessons learned from NMSD include the following.
• Avoid mirror systems with multiple CTE materials, even if it appears that the CTEs of the various materials will match at a specific temperature.CTE homogeneity is critical for cryogenic mirrors (or mirrors that need a stable shape as a function of temperature).CTE inhomogeneity produces cryo-wavefront error, and CTE mismatch between different component materials can produce a large error.Because of CTE mismatch, the COI mirror exhibited a very large cryo-deformation and quilting.• Stiffness is more important than areal density.Although SBMD and NMSD had the same "assembly" areal density requirement (<15 kg∕m 2 ), the NMSD systems were more than 10× less stiff (and the Arizona glass face sheet was many orders of magnitude less stiff) than SBMD.The reader is reminded that stiffness increases linearly with thickness and decreases quadratically with diameter.This stiffness difference resulted in profound effects.First, standard fabrication processes, handling procedures, and optician intuitions that are perfectly appropriate for conventional mirrors are not applicable to extremely low stiffness mirrors.In fact, Arizona broke their first face sheet.Second, because of the Arizona mirror's low stiffness, it was impossible to controllably adjust the actuators to figure the mirror.The simple act of stepping onto the test platform would change the shape of the mirror.• Large mirrors are harder to make than small mirrors and scale-up incrementally.Although SMBD had been successful, NMSD's factor of 3× scale-up (from 0.53 m diameter to 1.6 m) with the same areal density was a bridge too far.It may be better to scale-up in steps of 2×.For AMSD, the scale-up was ∼2.25×.
• Avoid complexity.Complexity adds cost and schedule risk.It is much more difficult to mass produce 166 actuators than it is to build a single prototype, and one should expect up to a 30% initial failure rate.
Advanced Mirror System Demonstrator
The AMSD study was designed to explore the most likely NGST mirror technologies at an appropriate scale.Its success formed a basis for estimating Webb ambient and cryogenic performance, manufacturability, schedule, cost, and risk.Given the importance of large lightweight mirrors to many government missions, AMSD was a joint NASA and Department of Defense program.Although some mission requirements were divergent, the pooling of resources provided greater funding to explore the technology landscape more widely and deeply.
A critical element of the AMSD program was competition.Competition between ideas and vendors resulted in a rapid TRL advance of modern, large-aperture lightweight cryogenic space mirrors.AMSD followed a phased down-select approach.Phase 1 awarded contracts to five different vendors to study and develop designs for a total of eight different mirror architectures.1][22][23][24][25] All of these mirrors were 1.3 to 1.4-m point to point-just the size needed to produce a segmented primary mirror 6 to 8 m in diameter-and had an areal density of ∼15 kg∕m 2 .
1][22][23] A key element of the BATC approach is that the mirrors required cryo-null polishing to remove cool down cryodistortions.
Goodrich proposed two high-authority mirror concepts consisting of a face sheet (one concept was shallow-ribbed glass and the other was silicon carbide) supported on an array of displacement actuators.The displacement actuators would be used to correct for cool down distortion. 24The face sheets would be fabricated via stress polishing on a mandrel.Early in its design phase, because of projected cost and schedule overruns, NASA terminated the SiC concept.
Kodak (now L3-Harris) fabricated a semirigid mirror system that utilized a closed-back allglass cellular-core mirror along with a few force actuators to correct low-order mirror distortions that occur during cool down to cryogenic temperatures (see Fig. 3). 9,25,26The Kodak approach also assumed cryo-null polishing to remove both correctable and uncorrectable cryodeformations.
After Northrop Grumman (NGC) was selected as the Webb prime contractor in 2002, the Goodrich effort was terminated due to incompatibility with the NGC Webb architecture.This allowed the remaining funds to be focused on the BATC and Kodak mirrors.Both mirrors were successfully cryo-tested, and their cryogenic performance was characterized.Findings of the cryo-testing included the following: a properly designed beryllium mirror substrate will not have cryo-quilting; a properly designed mirror mount will not introduce low-order cryo-deformation; and cryo-deformation is the result of CTE nonuniformity in the mirror substrate.This author's personal lessons learned from AMSD phase 2 include the following.
• Plan for the unplanned: increase cost and schedule estimates by 50%.Maybe the most important lesson that this author learned was to calibrate my intuition regarding cost and schedule.In this author's opinion, everyone's proposed phase 2 cost and schedule seemed reasonable, except for Kodak's, which seemed unreasonably conservative.Well, everyone-including Kodak-overran both cost and schedule.But Kodak's overrun was the smallest.And maybe they would not have overrun at all if they had not broken their first mirror (a black swan event).So lesson learned, avoid your own optimism, and avoid being misled by other's optimistic thinking or deliberate false pretenses.To mitigate this risk, add 50% to any cost or schedule estimate.• Again, stiffness is more important than areal density.Standard fabrication processes must be revised for low-stiffness mirrors.Kodak broke their first mirror.The root cause was found to be using an inappropriate torque stress margin.And Goodrich, similar to Arizona before them, fractured their glass face sheet.• Intuition about how things work at ambient does not scale to cryogenic temperatures.
Goodrich wisely made a subscale pathfinder glass mirror and learned an important lesson.
Although their design worked fine at ambient, its cryo-performance was significantly degraded because of mismatches between its constituent materials modulus and CTE changes as a function of temperature.• CTE homogeneity is critical.The Kodak ULE ® mirror exhibited a significant and unexpected cryo-deformation.At first, it was believed to be mount stress.But, after removing the mount and testing the mirror "hanging" on a single point-it had the same cryo-deformation.The "agreed upon" root cause was a CTE "wood-grain" effect.ULE ® is a laminar material.Although its bulk CTE is near zero, the CTE of each layer is not zero.It was determined that the mirror's aspheric departure cut through multiple CTE layers, giving the front face-sheet a "wood-grain" CTE texture.Goodrich also had a CTE mismatch problem between their glass facesheet and stress polishing blocking body that resulted in a midspatial frequency error.• Plan for the unexpected statistical outlier-and again, do not rely on models.Space environment models for SE-L2 predicted the existence of small high velocity micrometeoroids.
To assess their potential affect, ULE ® and Be samples, as well as layers of sunshade material, were impacted with glass microspheres using the University of Auburn hypervelocity gas gun.On ULE ® , the affect was small fractures.On Be, the affect was an impact crater with localized melting/resolidification spalling.On the sunshade, the initial particle impact produced a spray of particles that penetrated subsequent layers.In all cases, the affects were deemed acceptable because the predicted probability of a large particle impact was once per 100 years.But the reality of space is different, and Webb has encountered larger, more energetic micrometeoroids than the models predicted-approximately one per month.
It is the assessment of this author that AMSD (and the broader NGST mirror technology development effort) was successful because of the following.
• AMSD had very clear specification and performance metrics that were traceable to the potential flight mission.Although these specifications eventually proved to be inappropriate for the flight mission, they focused technology development and enabled apples to apples comparisons.• The compliance of each competing mirror system was independently verified by the government team.• The entire technology development program was executed by a single organization and principal investigator.• The government team consisted of the best and brightest from multiple agencies and organizations.
• The competing contractors were treated as full members of the team.
• The government team had full insight into each contractor's efforts.
• Competition motivated the contracting teams to innovate technology solutions to achieve the required performance and programmatic objectives and, in my personal assessment, at a lower cost and faster completion than if there had not been competition.
Metrology Technology Development
In 1999, NGST had a problem.The SBMD mirror had been delivered and was being tested using an Adaptive Optical Associates Shack-Hartmann wavefront sensor-which did not have sufficient resolution and reproducibility to certify specification compliance.A method did not exist that could certify the technology development mirror's prescription (ROC and conic constant) and surface figure error at their cryogenic operating temperature inside the XRCF cryo-vacuum chamber (see Fig. 4).As the adage goes-you cannot make what you cannot measure.Because the mirror's radii of curvatures were long, their center of curvature was located outside of the vacuum chamber, and the mirrors were tested through a vacuum window.Because of this separation, the interferometer and mirrors had a relative piston motion of 4 μm-too large and too fast for a commercial temporal phase-shifting interferometer.
In November 1999, NASA hired this author as a principal investigator for the NGST Mirror Technology Development Program, in part, because of his then-current relevant experience refurbishing, aligning, and operating the 4-m 7-segment actively controlled LAMP mirror in a vacuum environment.(LAMP was part of the strategic defense initiate), but more for his It may be self-serving to state, but the most important lesson to be learned is that there is no substitute for direct relevant experience, and if you do not have that person in your organization, you must find and hire them.
PhaseCAM
Testing long ROC mirrors and overcoming the limitations of atmospheric turbulence and mechanical motion had long been an interest of this author.One solution to this problem was phase-measuring interferometry (PMI) at 10.6 μm. 27The infrared wavelength was insensitive to temperature induced index variations in the atmosphere and small amplitude mechanical vibrations.But infrared interferometers lack visible wavelength sensitivity.Another solution (developed at Breault Research Organization) was high-speed PMI with a 340-Hz Reticon CCD camera. 28If the atmospheric turbulence and mechanical motion are sufficiently slow, they can be sampled and removed via averaging-as long as the sampling interval was longer than the atmosphere's random-walk correlation time.
In 1987, Keck contracted with BRO to use this high-speed PMI system to test their primary mirror segments at ITEK.Because of their mirror's long focal length and off-axis near-parabolic optical prescription, the total optical test air-path for each segment was 48 m and required five reflective bounces-twice off mirror segment and three times off the autocollimation flat (ACF).This air path was too long for ITEK's LUPI (laser unequal path interferometer) or a commercial Fizeau temporal phase-shifting interferometer, and there was too much mechanical motion.The high-speed camera system helped, but the ultimate solution was a pseudo common-path test setup that this author had seen in Norm Cole's optical shop.The mechanical motions were mitigated by rotating the LUPI reference flat 45 deg to send the reference beam along the same path as the test beam (bouncing off the ACF and a flat attached to the segment test stand).Atmospheric turbulence was mitigated by averaging 64 measurements taken on a multiminute cadence to avoid atmospheric turbulence "random-walk" correlation. 28In the process, another problem was discovered, a problem that would be critical for absolutely characterizing the LIGO reference flats and for testing the Webb segments.Because the CCD camera frame capture was not instantaneous, laser frequency drift introduced a phase shift error from the start of the frame readout to the end.Now for serendipity, before joining NASA, while visiting MetroLaser on another matter, this author saw on a table in a back lab a breadboard setup of a "real-time" interferometer producing a phase map of a flame plume.Its potential was immediately obvious, and Bernie Seery, Pre-Phase-A Study Manager, agreed to fund a risk reduction experiment.Newly incorporated 4D Vision Technology was given a $60 K contract to build an interferometer to NASA's specification.They delivered the first ever PhaseCAM (Fig 5) in just 6 months and it worked great.Its resolution was 512 × 512; its repeatability was 1.2 nm rms; and over a 20-m air path, its measurement uncertainty was 5 nm rms.It was put into use immediately, and Webb could not have been made without the 4D PhaseCAM technology.
Absolute Distance Meter
Regarding measuring the ROC, a common technique is to use a distance measuring interferometer on a lens bench to measure the radius.But this technique would not work for the NGST mirrors because it requires a displacement measurement from "cats-eye" to the center of curvature.During a NASA fact-finding trip to SSG Walthan MA, a Leica theodolite was observed with an interesting distance measuring technology, but its accuracy was only AE5 mm, and the NGST specification was AE0.1 mm.So a development effort with Leica was funded and resulted in the absolute distance meter (ADM).The ADM was used to measure and set the ROC on all development and flight mirrors.(And maybe its funding led to Leica's commercial DISTO handheld laser distance measuring devices.)
Primary Mirror Design Iteration
In 2002, the Optical Telescope Element (OTE) aperture diameter was reduced from 8 to 6 m.This decision was made primarily for cost reasons but also, based on lessons learned from AMSD, to increase the Primary Mirror Segment Assembly (PMSA)'s areal density from 15 to 26 kg∕m 2to better survive launch and to improve manufacturability.
This architecture change initiated a design iteration that significantly improved the primary mirror architecture.The original 8-m primary mirror had 36 segments with rigid body actuation.When the aperture was reduced to 25 m 2 (i.e., 6 m), there was a trade between having 36 smaller segments (still with rigid body actuation) or having 18 larger segments with hexapod actuation.This author remembers Lee Feinberg (Webb OTA manager) deciding on the larger segments with hexapods based on "part count."But the real value of the hexapod came during fabrication of PMSAs at Tinsley.Because of uncertainty in locating each PMSA in the parent prescription space, there was always some residual astigmatic surface error.Having the hexapods and an edge gap allocation allowed for PMSAs to be adjusted in the parent space to minimize wavefront astigmatism-both during testing and on-orbit.
Webb Primary Mirror Selection
AMSD phase-3 funded two competing studies to design candidate flight primary mirrors, perform production planning, and generate cost/schedule proposals.The materials evaluated were O-30 Beryllium and ULE ® glass.The Beryllium team consisted of Ball Aerospace, Brush-Wellman, AXSYS, Tinsley, and ATK-COI.The ULE ® glass team consisted of Kodak, Corning, and ATK-COI.
A Mirror Recommendation Board (MRB) was established to evaluate the competing proposals.The MRB consisted of a balanced membership with representatives from NGC, Ball, Kodak, and NASA.The MRB also included consultants with extensive experience in technical and programmatic issues associated with optical design, analysis, manufacturing, and testing.The MRB defined evaluation criteria and key discriminators.Subcommittees were formed to review vendor data in the areas of technical performance, cost, schedule, facility, and staffing.The MRB met 3 times in the spring and summer of 2003.The final selection was briefed at the OTE Optical Readiness (OOR) review on September 9, 2003.
It was the OOR Review Panel's assessment that AMSD successfully raised both mirror technologies to TRL 5.5, reduced technical (weight and performance) and programmatic (schedule and cost) risks by fabricating full-scale mirror systems, and validated their thermal wavefront performance under flight-like operational conditions. 11,29he Ball beryllium mirror was selected for flight.Beryllium was rated as the highest performing, lowest technical risk solution.Its cited strengths included superior cryogenic CTE and thermal conductivity; significant margins on thermal performance, stiffness, and mass; and its excellent potential science performance.Specific concerns included managing surface stress to achieve convergence to the required final surface figure and manufacturing schedule.A key selection discriminator was the thermal stability of the beryllium mirror over the 30 to 50 K operating range.
Although the MRB found that ULE ® glass has significant programmatic advantages, this strength was offset by concern regarding the uncertainty about how ULE ® CTE variability impacts the thermal performance of lightweight cryogenic mirrors.Suitability of ULE ® would not have been fully proven until completion of the EDU in 2005.Sixteen of eighteen MRB members scored Beryllium higher than ULE ® . 29ext, AMSD-3 initiated the manufacture of an EDU, which was used for vibration and acoustic testing needed to achieve TRL-6.
A personal lesson learned for this author was how important open competition and the MRB/ OOR process were for building a clear unimpeachable consensus decision as to the best primary mirror architecture to take into the flight program.Another lesson was a reinforcement of the importance of competition for reducing cost.When each team presented their flight mirror proposal, each offered contract incentives (i.e., cost sharing via infrastructure investment) that exceeded the $3 M increment cost of redesigning a second mirror under phase 3.
Performance Subcommittee
Vendor primary mirror design concepts were evaluated for their impact on the Webb OTE level 2 performance requirements.Technical performance criteria were divided into two categories: mandatory and secondary.Mandatory criteria were defined as mirror performance factors that influence the ability of the OTE to meet Webb level 2 requirements.Secondary criteria were other factors that influenced OTE performance but were not directly traceable to level 2 requirements.Each mirror concept was characterized as to its ability to exceed, meet, or not meet mandatory evaluation criteria (see Table 2).
The Performance Subcommittee assessed that meeting key Webb Level 2 requirements would be very challenging but that the beryllium mirror provided significant performance advantages for Webb.
Regarding the science mission timeline, both mirrors were assumed to be able to satisfy the 10-year requirement.The only difference between them was that, if the end-of-life temperature of the OTE is 4 K different than the beginning of life temperature, then the ULE ® mirror's wavefront error (WFE) would degrade ∼8 nm rms.Regarding launch survival, it was assumed that both mirrors could be designed for the required vibroacoustic environment.
Regarding the collecting area, Webb requires each mirror segment to be polished to within 5 mm of the physical aperture.Both mirror teams had some difficulty with this on AMSD, but the ULE ® team had slightly more difficulty.
Compliance with the optical performance parameters was derived based on AMSD ambient and cryogenic test results.To facilitate a head-to-head comparison, a specific protocol was defined to ensure that all data were analyzed identically.This protocol defined in the data flow how to correct for CGH distortion; remove low-order aberrations; compensate for gravity sag; mask, clip and threshold the data; remove test setup induced misalignment aberration; and how much real cryo-deformation aberration would be allowed to be removed based on a limited ability to compensate for such errors by moving mirror segments on-orbit via a hexapod mount.
Based on AMSD results, neither mirror team predicted an OTE WFE of <117 nm rms.The Be team predicted 118 nm rms and the ULE ® team predicted 119 nm rms.However, both vendor teams subsequently figured their AMSD mirrors to a surface quality sufficient to meet this requirement.
A fundamental difference between the Be and ULE ® mirrors were their cryogenic distortion, i.e., mirror shape change from ambient to 30 K. AMSD showed that the ULE ® mirror experienced a larger cryo-distortion than the Be mirror.The total ambient to cryogenic figure change for the Be mirror was 171 nm rms and for the ULE ® mirror was 398 nm rms.As shown in Fig. 6, after performing a simulated hexapod adjustment to remove alignment aberrations, the Be mirror change was 77 nm rms, and the ULE ® mirror change was 188 nm rms.After removing 36 Zernike coefficients, the high spatial frequency residual error is 26 AE 2 nm rms for Be and 47 AE 9 nm rms for ULE ® .The ULE ® mirror exhibited obvious print through from its core structure.Although it is possible to remove cryo-deformation via cryo-null figuring, the larger magnitude of the ULE ® deformation, as well as its print through, posed more of a risk than that of the Be mirror.Also because this deformation was only recently identified as a result of the AMSD project, there was additional risk that the magnitude and sign of this cryogenic deformation could vary from mirror to mirror.
Another significant difference between the two mirrors was their thermal stability and sensitivity to a thermal operating set point.This sensitivity impacts encircled energy (EE) stability and on-orbit performance as a function of time.Over the anticipated operational temperature range of 30 to 55 K, the Be AMSD mirror experienced a 7 nm rms total surface figure change, whereas the ULE ® mirror's figure changed by 40 nm rms.Note that the Be change was at or below the measurement sensitivity.As shown in Fig. 7, after removing alignment aberrations, the ULE ® change dropped to 21 nm rms.After removing the first 36 Zernikes, the figure error for Be was 1.6 nm rms and for ULE ® was 4.6 nm rms.Using the measured AMSD optical performance as the basis, the Performance Subcommittee derived the OTE optical performance as measured by the Strehl ratio, point spread function (PSF), EE, and EE stability.Both mirrors predict an excellent Strehl ratio with a substantial margin (Be > 93.9% and ULE ® > 92.8%).The predicted EE for Be was >74%, and the predicted EE for ULE ® was >72%.However, because of the thermal stability of Be, it has a better EE stability over the entire thermal operating range and potential thermal gradient conditions.This was important because EE stability was a financial bonus performance incentive parameter in the prime contract.
Secondary evaluation criteria for each mirror concept were characterized as to its risk (high, moderate, or low) of impacting OTE performance (see Table 3).
Regarding design issues, all Beryllium design features for Webb (except for the segment size) were equal to or lower risk than what was demonstrated on AMSD.The Be Webb design had fewer pockets than AMSD with thicker ribs, a thicker face sheet, larger corner radii, and larger fillet radii.The ULE ® mirror had several parameters in its design that were higher risk than what was demonstrated on AMSD.In addition to a larger segment size, it would have a thinner front and back plate thickness, larger core depth, and a thinner edge ring.Mass was explicitly excluded as a selection factor, but at the time of the evaluation process, the Be mirror design was 32 kg over budget, and ULE ® would have been 176 kg over budget.
Finally, a new process was proposed for inspecting raw ULE ® glass before it was fabricated into mirrors.The purpose of this process was to mitigate the mechanism thought to explain the thermal deformation effect observed on AMSD.The subcommittee found both the mechanism and new process credible but unproven.Stahl: Webb space telescope primary mirror development: summary and lessons learned The Performance Subcommittee recommended Beryllium for Webb based on several factors.It was expected to meet all Webb level 2 requirements.AMSD successfully demonstrated most of the critical technology issues needed to scale up to Webb, and Webb design improvements would make the segments more producible with lower risk.The anticipated cryogenic deformation was within the range of what could be cryo-null figured.Beryllium's excellent thermal properties provided a stable mirror performance over the entire Webb operating temperature range.The subcommittee's findings on ULE ® were that its AMSD cryogenic behavior was not predictable, the mechanism for that behavior was unproven, and its thermal sensitivity could pose a risk to on-orbit optical performance.However, it was also the assessment of the subcommittee that these findings apply only to cryogenic operation and that all AMSD data support the fact that ULE ® is an excellent mirror material for ambient applications.
Manufacturing Process and Test Plan Subcommittee
Each vendor provided detailed schedules for an EDU, 18 flight segments, and 2 spare blanks with 100s to 1000s of elements.Each schedule included critical resource allocations and basis of estimates.AMSD was assumed as the basis for all operations, and detailed traceability matrices were provided to justify all Webb processes.Vendors were instructed to provide detailed justification for any process durations different from AMSD.
From the schedules, the subcommittee selected four critical milestones for assessment: EDU vibe test, EDU completion, first segment, and last segment.EDU structural and acoustic testing were selected to ensure that this critical milestone is accomplished before the nonadvocate review (NAR).EDU completion was a key selection for ensuring that all production steps were fully demonstrated before they were needed on flight hardware.The first primary mirror segment was selected because it was required for the OTE Pathfinder risk reduction activity.The last segment was selected because it defined the critical path to launch.
The manufacturing process and test plans were assessed for adequacy in terms of detail and thoroughness of understanding and adequacy of required equipment.The schedules were assessed for contingency, robustness, and flexibility.Slack to completion was evaluated for each of the four identified milestones.Available workarounds were considered, and a critical path chain analysis was performed.Each schedule was assessed for credibility and risk.The adequacy of justification for the differences between Webb and AMSD was assessed.Risk and mitigation plans were evaluated using Webb project criteria/guidelines.
Using each vendor's schedule, a comparative probabilistic schedule risk assessment was performed using Risk+© software.The assessment was based on schedule pessimistic and optimistic durations.Pessimistic assessment was based on interrogating the traceability matrix.Optimistic assessment was based on the vendor's identified process improvements.The assessment indicated Beryllium had 7% to 15% more schedule risk than ULE ® (see Fig. 8).The manufacturing process and test plan subcommittee had three findings.First, Beryllium had fewer new or modified steps from AMSD in its proposal than ULE ® .Second, the ULE ® schedule had more slack in the identified program milestones, and its processing flow was appreciably more immune to large perturbations without impacting the Webb critical path.Third, both vendor's schedules were optimistic and represented risk to the program.The probabilistic risk assessment indicated that Beryllium represented a 7% to 15% greater schedule risk than ULE ® .
Facility and Staffing
The facility and staffing subcommittee assessed that there were no major challenges for the ULE ® team and that the biggest challenge for the Be team would be setting up the polishing plant.The biggest equipment challenge for the Be team would be getting the first CNC machine on-line, whereas the biggest challenge for the ULE ® team would be getting small tool machines on-line.
Cost Subcommittee
Both vendors provided detailed cost proposals.After adjusting both proposals based on an indepth review of their basis of estimates, the cost subcommittee found little difference in either proposal's risk or content.Both vendors proposed a recurring cost that was 35% lower than one might predict using a simple AMSD extrapolation.Additionally, both vendors proposed substantial internal investment.It was the assessment of the subcommittee that much if not all of the cost of the AMSD mirror technology development program was recovered by these cost savings.Finally, based on the relatively greater schedule risk of 7% to 15% for Be, it was assessed that Be had a larger risk of cost overrun than ULE ® .However, this risk was offset by the fact that 45% of the Be proposal was a firm fixed price.
Science Subcommittee
The science subcommittee assessed that Beryllium would be the highest performing, lowest technical risk solution.It had superior cryogenic CTE and thermal conductivity-thus providing significant optical performance margin in the event of thermal gradients and bulk temperature set point uncertainty.It also was more forgiving of differences between the thermal conditions for on-orbit and ground testing.The Beryllium mirror would provide, with margin, a PSF that meets both the EE and EE stability requirements.
Engineering Development Unit
All flight programs have EDUs, but in the case of Webb, the PMSA EDU was critical.Based on lessons learned from AMSD, the Webb flight PMSA design was modified to improve the producibility, performance, launch survival, and risk (see Table 4).Because of these design changes, the fabrication process developed on AMSD needed to be modified and requalified.
Ideally and in accordance with the National Research Council report on controlling NASA space mission cost growth, 30 it would have been nice to demonstrate TRL-6 compliance on a single-mirror system before entering phase C/D, but that did not happen.AMSD ran out of both time and money, so TRL-6 was demonstrated piecewise.Furthermore, because the flight mirrors were long lead items on the critical path, it was necessary to start their production in phase B. But as well meaning as this decision was, there was a problem.Because of the length of the mirror fabrication process, too much time had passed.By the time that the OOR review panel had selected the flight mirror configuration, it had been several years since Brush-Wellman had manufactured a mirror blank, AXSYS had machined a substrate, or Tinsley had performed rough grinding.This is a problem because of the forgetting curve.Just as there is a learning curve (which reduces the cost and schedule of similar items by up to 30%), there is also a forgetting curve.(First formulated in 1880, Ebbinghaus determined that humans forget information exponentially with time.Subsequent studies confirm that, without repetition, humans forget 90% of their training within 1 month. 31This author has heard it said that organizations forget half of their corporate knowledge every 6 months.)Consequently, because of changes to the Webb flight PMSA design and forgetting, the entire fabrication process had to be relearned and revalidated on the EDU.Because of this relearning, the EDU underwent a fabrication process that was not only different from AMSD but also different from the subsequent flight mirrors.In fact, the flight fabrication process did not become truly reproducible until flight mirror #3.One example is that Tinsley's polishing compound supplier changed their compound's formula.
Additionally, just as it is important to avoid large gaps in a process to prevent forgetting, it is also important to avoid getting the subsequent flight mirrors too close to the EDU.Otherwise, it is impossible to fully implement the lessons learned from the EDU with the flight mirrors.A recommendation for any future ground or space telescope that uses multiple primary mirror segments is to process more than one EDU or manufacture the flight spares first.The purpose is to ensure that no process is ever performed for the first time on a flight mirror and that flight production begins once the process is stable.If the spares meet all requirement specifications, they can always be promoted to flight status.Some personal lessons learned from the EDU are as follows.
• Polishing a 1.5-m class mirror to within 5 mm of its physical edge is very difficult.It is possible to misregister the edge by 25 to 50 mm for a number of reasons, including the combination of image distortion when testing a mirror at center of curvature with a CGH; the effect of retrace errors when light from a rolled edge travels 16 m back to the center of curvature; or the effect of Fresnel diffraction from out of focus edges coherently summing with the surface wavefront.Extensive fiducialization is important to knowing where to small-tool polish on the mirror.• Plan for forgetting.Manufacture the EDU and flight spares before making flight units.
Because of the "forgetting" curve, fabrication and testing processes need to be relearned before making flight articles.Also during the flight production, situations in which personnel forget the process steps arose, and it was necessary to stop work for a day and retrain everyone on how to follow the process.• There is no substitute for government insight/oversight experience.Contractor personal come and go; it is the responsibility of the government insight/oversight team to ensure compliance with all specifications.To do this, the team needs to consist of persons with direct relevant experience.One example was understanding how Fresnel diffraction impacts the ability to correctly measure a mirror's edge and the consequences of CGH pupil distortion.
TRL-6 Certification
A central requirement of the mirror technology development program was to mature the TRL for mirror technology critical to Webb from the pre-1996 TRL-3 level to a level of TRL-6 for review by a technical NAR panel.Assessment of TRL-6 by the TNAR had to occur before the Webb OTA could undergo its critical design audit.This gate was achieved on January 31, 2007.The process used to certify that the Webb mirror technology was at TRL-6 was systematic and rigorous. 12,13It was accomplished by defining a set of critical technology capabilities (which flowed directly from the level 1 science requirements) that had to be demonstrated under relevant flight conditions and then demonstrate that compliance.Demonstration of compliance was accomplished piecewise using SBMD, AMSD, flight mirrors, and test coupons.
Requirements Flowdown
Technology requirements for the PMSA were derived from level 1 science requirements through level 2 mission requirements and level 3 observatory requirements (see Table 5).Level 1 science requirements were defined in the Webb Program Plan. 32Level 2 mission requirements were defined in the Webb Mission Requirements Document. 33Level 3 observatory requirements and specific mirror technology component requirements were derived during the phase 2 NGST Observatory Contract and refined after the Prime Contractor (and Implementation Team) was selected.Complete PMSA requirements are defined in the Equipment Specification for Webb PMSA. 34 comparison of these PMSA requirements with the pre-Webb state-of-the-art for space telescopes, as defined by Hubble and Spitzer (see Table 6), clearly showed that they were truly well beyond the state-of-the-art.Thus these capabilities were TRL-6 technologies that needed to be demonstrated.
Although there were literally 100's of engineering specifications necessary to manufacture a Webb PMSA, only a select few were considered technology requiring demonstration: gold coating cryo-survivability, figure thermal stability, areal density, figure launch distortion, primary mirror optical area, surface figure error (including ROC, hexapod, creep and polishing error), and cryogenic performance.Note that this list is not in a priority order, but in the order of their flow down from the level 1 science requirements developed in Tables 5 and 6.The balance of this section details the system engineering logic of how each mirror technology requirement flows from its originating level 1 science requirement.
Although the observatory operating temperature was listed as a key technology, it was really an existence principle.It is the one requirement that pervades all other requirements.To achieve the level 1 science requirement of providing a thermal environment that permits the science instruments to have zodiacal light background limited imaging performance over the wavelength range from 1.7 to 10 μm, the observatory must limit its thermal emissions by operating at a cryogenic temperature of <50 K.This directly drives the need to place the telescope at L2, which requires an EELV launch vehicle that demands low areal density mirror segments.This requirement also directly drove all operational thermal requirements, including performance, survival, and stability.Thermal modeling indicated that some of PMSAs might be as cold as 28 K.
Gold coating cryogenic survivability was a relatively minor TRL-6 technology.Level 1 science requirements specified a spectral range of 0.6 to 27 μm.This, in combination with sensitivity, flowed into a level 2 optical transmission requirement, which directly flowed into a PMSA reflectivity requirement.Uncoated polished Beryllium cannot achieve the required reflectivity over the required spectral range.Overcoats of gold, silver, and aluminum were considered.Gold was the best candidate material.It provides excellent reflectivity in the near-and mid-infrared and acceptable performance in the visible.Silver does provide better performance in the visible, but it requires a protective layer to avoid oxidation problems.Aluminum, although common for ground-based visible telescopes, does not have acceptable infrared performance.Gold is a common coating material and thus was not itself a TRL-6 technology.But the cryogenic survival of a gold coating applied to a large O-30 Beryllium mirror had never been demonstrated.
The PMSA surface figure thermal stability was possibly the most important TRL-6 technology and was a key factor in selecting Beryllium as the primary mirror material. 11,33Level 1 specified that science observations must be able to occur at any position in the celestial sphere.This placed a stability requirement on the EE as the observatory slews, which in practice was a constraint on how much the PSF shape could change due to thermal gradients introduced into the telescope as a function of angle to the sun.At the PMSA level, EE thermal stability is directly determined by the thermal stability of the surface figure shape.Although dozens of engineering issues can contribute to this stability (such as material CTE uniformity and structural design, including actuator athermalization bracket design and bimetallic effects), it was the system level PMSA performance that is the TRL-6 technology.Thus a specific PMSA design implementation must be demonstrated to have cryogenic figure stability of <0.3 nm rms per K, which manifested itself as a maximum surface figure change of 7.5 nm rms from 30 to 55 K.
PMSA areal density was one of the two key technologies identified as requiring significant development effort.The level 1 science requirement of operating the observatory at L2 flowed down to a level 2 requirement that the observatory must be launched via a heavy lift rocket (such as an Areianespace Ariane 5).This placed a mass constraint of 6159 kg on the observatory.The original primary mirror allocation of this mass was 1000 kg, and given that the original telescope collecting area was to be 50 m 2 , this placed an areal density requirement on the primary mirror of 20 kg∕m 2 .To provide margin, a technology goal of 15 kg∕m 2 was defined.It was this goal that drove the entire mirror technology development program.As the observatory architecture evolved and mass maturity of different observatory elements improved, the PMSA areal density specification was raised to 26.5 kg∕m 2 .
The PMSA diameter was the second key technology identified as requiring significant development effort.Originally, an 8 m class primary mirror was required to achieve the desired observatory sensitivity.Given that the observatory needed to operate at L2, that the only way to get to L2 was to be launched on a heavy lift rocket, and that the maximum available shroud diameter was only 4.5 m, it was clear that a segmented and deployed architecture was required.Competing design solutions required segments with diameters ranging from 1 to 3 m.Groundbased observatories (Keck, Hobby-Eberly) and test beds (LAMP, ALOT, and AOSD) had demonstrated the ability to produce segmented telescopes, but their areal densities were too high (70 to 2000 kg∕m 2 ).Thus a primary focus of the mirror technology development effort was on how to manufacture 1 to 3 m class mirror systems with the required areal density.A key task was to design and demonstrate a substrate that could be manufactured, safely handled, optically finished including ground testing, and integrated into a system that would survive launch-all with an areal density <20 kg∕m 2 .A second issue was the ability to manufacture the substrate blank.Pre-Webb, all large mirrors were glass, which, although acceptable for ambient operation, were less than ideal for a cryogenic telescope.The largest cryo-mirror was the ITTT 0.85-m I-70 Be mirror.Hence, the AMSD program was tasked with demonstrating the ability to manufacture a 1.5-m class O-30 Beryllium mirror blank-as well as the entire mirror system.The Webb PMSA diameter of 1.5-m, which is slightly larger than what was demonstrated on AMSD, was derived from a combination of the level 1 science requirement to have a minimum of 25 m 2 of unobscured optical collecting area and the choice of an 18 segment architecture.
PMSA cryogenic surface figure, creep, launch distortion, and adjustability requirements were derived from performance metrics directly traceable to the level 1 science requirement that the observatory shall be diffraction limited at 2 μm.To achieve the level 1 requirement, the telescope was required to have a residual WFE of <131 nm rms after fixing correctable on-orbit figure errors.To "fix" correctable errors, each PMSA has the ability (at temperatures < 50 K) to change its ROC and adjust its rigid body position.Detailed error budgeting by Ball Aerospace partitioned the residual WFE between multiple sources, including uncorrectable residual PMSA surface figure error; errors in the ability to adjust all PMSA's to a common ROC; errors in the ability to phase all PMSA's into a common primary mirror by correcting PMSA rigid body errors; creep of a PMSA figure as a function of time; and figure change experienced by a PMSA as a function of the launch environment.The result of this process was that each PMSA had to be able to adjust its ROC with a resolution of ≤10 nm PV sag, and each PMSA had to be able to adjust is piston position with a resolution of ≤10 nm.Uncorrectable PMSA cryogenic surface figure error-i.e., errors that cannot be corrected by ROC adjustment or sliding the PMSA in the "parent" space with the hexapod-had to be ≤23.7 nm rms at delivery to the OTE integration and test (I&T) process.Also, from the time that a PMSA is delivered for I&T through its end of life, the uncorrected surface figure error from material creep had to be ≤1.8 nm rms.Furthermore, the PMSA uncorrectable surface figure distortion due to launch had to be ≤2.9 nm rms.
An interesting detail of these requirements was the role of material stress/strain and precision elastic limit (PEL) on PMSA design and their connection with figure creep, launch deformation, surface figure error, and areal density.To meet the creep and launch figure change requirements, it was critical that the PMSA substrate had sufficient stiffness to avoid introducing excessive stress/strain into the mirror during optical fabrication.It is the release of this stress/strain from the mirror with time or exposure to the launch environment (vibration and acoustic) that causes undesired figure change.PMSA stiffness is also important for in-process optical testing and I&T; a mirror must have sufficiently small gravity sag that it can be accurately measured in one-g (i.e., on the Earth) while being manufactured for optimized performance in zero-g.So although AMSD demonstrated that an areal density <20 kg∕m 2 was achievable, a specification of 26.5 kg∕m 2 was necessary to produce a PMSA with sufficient stiffness to meet the other requirements.
Requirements Verification
To certify compliance with TRL-6, specific success criteria were established for each critical technology and then confirmed by test.Table 7 lists the critical requirements, their success criteria, and how each was confirmed by test.
All PMSA technologies necessary to meet the Webb level 1 requirements were demonstrated to be at TRL-6 via a piecewise methodology.As desirable and recommendable as it might be, TRL-6 was not demonstrated on a single-mirror system.Rather, SBMD, AMSD, flight mirrors, and test coupons were used to mature specific technologies and demonstrate their performance in a relevant environment.For example, SBMD demonstrated gold coating performance at 28 K and cryo-null figuring.Although AMSD was designed to explore fabrication limits associated with areal density and size, it could not certify everything.AMSD did produce a complete mirror system (with a design that is traceable to flight) and tested its performance in a relevant environment from 30 to 50 K.But AMSD was not designed to (nor was it ever intended to) meet Webb launch loads.To survive launch, Webb flight PMSAs were redesigned to have significantly more areal density than AMSD (which made a Webb PMSA easier to fabricate).Additionally, Webb's flight PMSA design was modified based on lessons learned from AMSD.Therefore, it was necessary to use an actual Webb flight PMSA for vibration and acoustic testing.
The PMSA-110 and PMSA-530 ability of a gold coating to survive the 28 K requirement was verified with SBMD.TRL-6 was demonstrated by performance testing at 30 K and survival testing to 28 K a gold coating deposited on the SBMD mirror.Because cryogenic adhesion of gold on O-30 beryllium was the ability being tested and not the ability to deposit gold coatings on to large mirrors, it was determined that repeating the test with a gold coated AMSD mirror was unnecessary.The deposited gold coating introduced no discernible cryogenic surface figure distortion into SBMD.The uncoated SBMD's 30 K surface figure was 52.8 nm rms, and its coated 30 K surface figure was 53.9 nm rms 35 (Fig. 9).
The PMSA-530 requirement that a PMSA could operate over a 28 to 50 K temperature range was verified with the AMSD mirror system and the Webb flight actuators.TRL-6 was demonstrated by testing the AMSD beryllium mirror system multiple times over operational temperatures from 28 to 50 K to characterize its cryogenic performance.Cryogenic figure stability was characterized.The cryogenic figure and ROC change were demonstrated, and the cryogenic ROC adjustability was demonstrated.TRL-6 was further demonstrated by testing the cryogenic performance of the Webb flight actuators.
The PMSA-170 requirement that a PMSA maintains a surface figure stability of <0.3 nm rms for a 1 K temperature change (7.5 nm rms over a 30 to 55 K thermal range) was verified with AMSD.TRL-6 was demonstrated by measuring the surface shape of the AMSD beryllium mirror system as a function of temperature.The cryogenic surface figure was measured at multiple temperatures and was found to change linearly with temperature.The total surface figure change from 30 to 55 K was 7.0 nm rms or 0.28 nm rms per 1 K temperature change 11,28 (Fig. 10).
PMSA-410 and PMSA-70 derived requirement that a PMSA could be manufactured with an areal density <26.5 kg∕m 2 was verified with AMSD and confirmed with Webb flight segments.TRL-6 was demonstrated by calculating the areal density of the AMSD beryllium mirror and an assembled PMSA (Fig. 11) from measurements of their respective masses and physical dimensions.The achieved areal density for the PMSA was 25.8 kg∕m 2 .AMSD actually demonstrated the feasibility of manufacturing a mirror system with an areal density of 15.6 kg∕m 2 .This was achieved by CNC machining a beryllium mirror substrate with exceptionally thin ribs and facesheet while controlling the introduction of residual stress.Residual stress is very important.It can adversely affect the ability to polish a beryllium mirror to the required surface figure and keep that shape because of long-term figure creep.The higher PMSA areal density requirement (allowed by design maturity, incorporating lessons learned from AMSD and validated with improved modeling) improves manufacturability and reduces risk.The PMSA-70 requirement that a PMSA could be manufactured with a polished surface area of larger than 1.46 m 2 was verified via a combination of SBMD, AMSD, and EDU.TRL-6 was verified by three specific demonstrations of fact.First, the Webb flight program successfully manufacturing and machined a 1.315 m flat to flat beryllium substrate.Although this may seem trivial now, before the mirror technology development program, there was great uncertainty as to whether or not the manufacture of beryllium substrates of that size was even feasible.Second, AMSD demonstrated the ability to fabricate a 1.2 m flat to flat polished beryllium mirror with a mechanical design and aspheric prescription traceable to Webb.Until it was surpassed by Webb, AMSD was the largest diameter beryllium mirror ever fabricated.Third, SBMD demonstrated the ability to use small tool polishing on a lightweight mirror substrate to within 5 mm of a straight edge.
The PMSA-150 requirement that a PMSA could be polished with an uncorrectable surface figure error of <23.7 nm rms was verified with SBMD and AMSD.TRL-6 was confirmed by verifying two key abilities: (1) the ability to polish a large-aperture low-areal-density aspheric O-30 beryllium mirror to the required specification and (2) the ability to cryo-null figure an O-30 beryllium mirror to have the required figure specification at temperatures <50 K.The ability to polish a meter-class highly aspheric lightweight O-30 beryllium mirror was demonstrated on AMSD.AMSD was polished to have an uncorrectable surface figure error of 19.2 nm rms over 97.1% of its aperture (Fig. 12).Achieving a <20 nm rms surface figure was actually the last major task of the AMSD program, and its accomplishment represented a never before demonstrated capability for meter-class lightweight beryllium mirrors.Furthermore, because AMSD had a 10 m ROC, it was a more difficult prescription to polish than Webb segments with their 16 m ROC.The <20 nm rms uncorrectable surface figure was achieved via a small tool computer controlled optical surfacing (CCOS) technology at Tinsley Laboratories in Richmond, CA.Critical to this accomplishment was highly spatial sampled data and precision fiducial registration knowledge.The ability to cryo-null figure such a mirror to yield the required surface figure error at cryogenic temperatures was demonstrated on SBMD.SBMD exhibited a cryo-deformation of ∼90 nm rms.This shape change consisted of a low-order mount induced error and a highorder quilting error associated with the substrate rib structure.After two cryo-cycles proved that the deformation was stable and repeatable, i.e., that the O-30 beryllium mirror had no apparent creep induced figure change associated with residual stress in the mirror, SBMD was cryo-null figured.The predicted final cryogenic surface figure was 14.4 nm rms.The actual final cryogenic surface error was 18.8 nm rms 35 (Fig. 13).
Based upon the SBMD success of cryo-null figuring via small tool CCOS technology of both low-order mount induced as well as high-order rib structure quilting, it was determined unnecessary to cryo-null figure AMSD.
The PMSA-370 requirement that a PMSA could be positioned in space with 6 degrees of freedom (DOF) with <10 nm step resolution was verified with AMSD and PMSA components.TRL-6 was demonstrated by test of the PMSA actuator performance at 30 K and analysis of PMSA hexapod motion at 30 K. The Webb cryogenic hexapod mechanism, with its six cryogenic actuators, controlled the 6 DOF position of a mirror segment relative to the Webb telescope backing structure.A seventh actuator was used to deflect the center of the mirror, changing the ROC for that segment.Although the use of a hexapod was not new technology, the actuator step size resolution required at cryogenic temperature was.To meet the hexapod motion resolution and accuracy requirements, the Webb actuators had to be independently capable of <10 nm step size resolution at <50 K.This level of motion resolution was achieved when the Webb actuators were operated in their "fine" mode.Webb actuators are dual stage with coarse and fine operating modes.The Webb actuators were developed by BATC, initially under IRAD funding, and then via AMSD, to meet specific mass, stiffness, and performance requirements.These actuators were used for both PMSA hexapod and ROC adjustments.
The key component of the actuator is a cryogenic capable geared stepper motor, which was derived from the gear motor flown on the Spitzer Space Telescope and operated at 4.5 K. TRL-6 capability was demonstrated by characterizing the cryogenic performance from 25 to 35 K of over 24 actuators: 2 actuators via Ball IRAD, 4 actuators via AMSD, 36 and 18 Webb engineering unit actuators.All actuators met the resolution requirement with the Webb engineering unit actuators showing a resolution of 7 nm (Fig. 14).Extensive testing of the actuators through a variety of fine-stage step increments verified that the actuator performs single steps, without backlash, to an accuracy of 0.6 nm rms.Finally, flight actuators were installed into a flight hexapod system and exercised at ambient temperature to show basic functionality.
The PMSA-1560 requirement that a PMSA cryogenic ROC sag could be adjusted by <10 nm peak-to-valley (pv) was verified with AMSD.TRL-6 was demonstrated by test, analysis, and corollary.PMSA mirrors were designed to adjust their ROC at cryogenic temperatures by expanding or contracting a linear actuator.The actuator, attached to the back center of the mirror, reacts its force via six struts that attach to each mirror corner through a flexured joint.A similar design was implemented on AMSD except that the actuator reacted its force against spreader bars (Fig. 11).Once at 30 K, the AMSD actuator was commanded to execute "coarse-steps" until an ROC sag change was detected.A move of 35 coarse steps resulted in an ROC sag change of 38 nm pv.By analysis, a single AMSD "coarse-step" should result in a sag change of ∼1.1 nm pv.And a single "fine-step" motion (which is 4.5 times smaller than a coarse step) should result in an ROC sag change of ∼0.24 nm pv. 28Because of the difference between where the actuator force reacts against the mirror substrate, the distance between those reaction points, and the intrinsic stiffness between AMSD and Webb, a Webb PMSA experiences an ROC sag change that is ∼110% larger per linear motion than AMSD experiences.Thus the minimum Webb coarse step is ∼1.2 nm pv, and the minimum fine step is ∼0.27 nm pv.
The PMSA-195 requirement that a PMSA could be designed such that its surface figure changes by <1.8 nm rms because of creep was verified with SBMD, AMSD, and Webb flight segments.TRL-6 was demonstrated by test and analysis.Funded via AMSD, Draper Laboratory measured the creep properties of O-30 beryllium. 37Significant creep was measured for samples stressed to 4 and 6 ksi.Neglible creep was measured for samples stressed to 2 ksi or below (Fig. 15).Analysis indicates that 2 ksi of stress will creep 1.8 parts per million over 10 years at room temperature. 38Further analysis indicates that a PMSA with a surface stress of 2 ksi will see a total figure change of <1.8 nm rms during its room temperature life prior to launch and that no figure change due to creep is expected on orbit at cryogenic temperatures.A rule was established that all beryllium components of the Webb PMSA must be designed, processed, and handled in such a way that no component had a residual stress of >2 ksi.Additionally, extensive tests were performed under AMSD III to quantify exactly how much stress was introduced into a Be mirror during the machining process at AXSYS and grinding/polishing process at Tinsley.These processes were controlled to limit the residual stress in the final mirrors to <2 ksi.Furthermore, all Be components were stress relieved throughout the fabrication process to prevent the accumulation of stress.
The PMSA-180 requirement that a PMSA could survive launch with <2.9 nm rms surface figure distortion was verified with a Webb flight segment.TRL6 was demonstrated by test.An unpolished Webb mirror segment B1 was assembled into a flight configuration PMSA and exposed to design limit loads with sine burst, random vibration, and acoustic testing.Its surface figure change as a function of each loading test was measured using a phase measuring electronic speckle pattern interferometer.The design limit load accelerations for every component within the PMSA were exceeded in each of these tests.Two acoustic tests were performed.The first test hard mounted the PMSA to a concrete wall.The second test suspended the PMSA for a "freefree" test (Fig. 16).The two different boundary conditions provide valuable information for finalizing the PMSA test environment.Neither test resulted in any measurable change to the PMSA surface figure.
The tests showed that the surface figure was repeatable to within the noise floor of the metrology system, 14 nm rms (Figs.17-19).Astigmatism and power figure terms were removed from the total surface change measurement because they can be compensated for on-orbit by adjusting the PMSA ROC or physical location via the cryogenic hexapod with the ROC actuator.The measurement of a surface figure error change that was smaller than the measurement noise floor was consistent with the pretest prediction.Nonlinear plastic material finite-element analysis predicted a surface figure deformation of only 1.6 nm rms.Although not affecting a determination of demonstrating TRL-6, there were two special circumstances associated with this test.First, the random vibration and acoustic levels were notched to maintain safe exposure levels on the PMSA, and a minor inconsistency was discovered with the design limit loads.A new test environment was defined and a minimal PMSA redesign performed to meet the new test environment.Second, although all flight PMSAs will be thermally cycled to 25 K before launch, for reasons of expedience and convenience, the B1 PMSA was only thermally cycled to 150 K.This was determined acceptable because the 150 K temperature subjected the mirror to ∼88% of the beryllium cryo-strain and over 70% of adhesive mount strain.Additionally, an extensive qualification program was conducted for the bonded joints.Test samples were cycled 3 times between 15 and 383 K and subjected to static pull testing.These samples saw only a 12% reduction in ultimate strength following thermal cycling and still maintained a margin of safety of 7.4.This testing coupled with the 150 K that the B1 segment saw was more than sufficient to assure that the TRL6 vibration testing demonstrated the true robustness of the PMSA.TRL6 was achieved by demonstrating the technology to design a lightweight beryllium mirror to design limit loads, testing it to those loads, and showing surface figure stability after exposure to the design limit load, thus assuring that lightweight beryllium mirror technology could meet the Webb launch distortion requirements.
Conclusions and Lessons Learned
One reason for the Webb Space Telescope's on-orbit performance 39 is the success of the NGST Mirror Technology Development Program.The Webb mirror material selection process and technology development program is a model for future NASA missions.AMSD presented two mature technologies to the Webb program for consideration.The competition between these two technologies advanced the TRL of both, resulted in better defined proposal plans, and significantly reduced the total program cost.
Based on the 1996 technology assessment, NASA initiated a systematic mirror technology development program to invent mirror systems that could meet the NGST requirements, reduce the cost and risk of such mirror systems, and demonstrate a TRL of 6. TRL-6 was achieved in 2007 by the combination of the mirror technology development effort and testing of flight mirrors.In the opinion of this author, this achievement was made possible by four specific technical developments and the aperture diameter descope.The four technical advances are the development of O-30 Beryllium by the Air Force with its greatly improved CTE uniformity (compared with I-70 Be used on Spitzer); improvements to computer-controlled polishing at Tinsley; the NASA funded development of the 4D PhaseCam and Leica ADM; and the AMSD program.AMSD was the key to achieving TRL-6.Its success formed a basis for estimating Webb ambient and cryogenic performance, manufacturability, schedule, cost, and risk.The aperture descope from 8 to 6 m enabled stiffer, high-areal density mirrors to survive launch and the addition of hexapods actuation for astigmatism compensation.
Programmatic factors that contributed to the NGST Mirror Technology Development success included unified civil servant technical management, well defined specifications and performance metrics, and competition between ideas and vendors.In total, at least 12 different architecture designs were funded, and the selected beryllium architecture went through five design iterations before flight.AMSD's competitive phased down-select process successfully advanced TRL for large-aperture lightweight cryogenic space mirrors from less than TRL-3 to TRL-5.5 in 4 years (1999 to 2003) with a $26 M investment.Although the effort consisted of multiple contracts, the entire effort was executed by a single civil servant technical/managerial team.The team also provided independent assessment of each contract's accomplishments via cryo-testing mirrors in MSFC's X-Ray and cryogenic test facility.This single-team eliminated the risk of stove-pipes or company proprietary compartmentalization and provided technical continuity through the entire technology development effort and into the flight project.Also, although the initial mirror specifications proved to be wrong-because a primary mirror assembly of 20 kg∕m 2 areal density could not survive launch-these specifications did provide a well-defined set of metrics for assessing the technology development.When it was determined that mirrors made to the initial specification could not survive launch nor could they achieve the desired cost goal, there was technical justification for increasing their areal density and reducing the telescope collecting area from 50 to 25 m 2 .Finally, the competed phase down-select process motivated contractors to meet their schedules and control costs.
Although the actual value cannot be quantified, AMSD certainly paid for itself in cost savings to Webb.Lessons were learned (i.e., mistakes to be avoided), and both vendors demonstrated process efficiencies that did not exist at the start of AMSD.Efficiencies promised to reduce flight mirror fabrication cost by an amount greater than the entire $26 M cost of AMSD (phases 1 to 3).Additionally, both vendors offered contract incentives (i.e., cost sharing via infrastructure investment) during the final downs select process that exceeded the $3 M increment cost of taking a second mirror into phase 3. It is the opinion of this author that these improvements would not have been developed as rapidly or cost effectively without competition.Furthermore, it is the observation of this author that the entire "feel" of Webb mirror fabrication changed once the down select was made and competition was eliminated.I am not saying the feel changed for better or worse, it was just different.There was a change in perceived urgency, need to innovate and reduce cost, transparency, etc.
Therefore, for any potential future segmented telescope (either ground or space), this author highly recommends having two competing fabrication vendors operating in a leader/follower model and giving the lead vendor most of the work and the follower the rest.Thus, if one vendor is having trouble, work can be shifted to the other vendor.Although speculative, I believe that the cost savings from competition will more than offset the infrastructure setup costs.Of course, this recommendation only applies to projects that have modularization and are making many duplicate sub-systems.But a couple of potential thought experiments are to fund five instruments but only four get to fly or design the mission to use a COTS spacecraft.
A programmatic lesson learned from the EDU, AMSD, and NMSD is to plan for the unplanned.Take the most pessimistic schedule and add an additional 50%.Both NMSD mirrors took significantly longer to make and achieved significantly lower performance than expected.On AMSD, because of unplanned activities, the fabrication process was 60% longer than its initial prediction.During the OOR process, the vendor team thoroughly analyzed the AMSD schedule, identified all of the unplanned activities, and detailed how the lessons learned from these unplanned activities had been fully incorporated into the Webb flight mirror production schedule.The government team reviewed this analysis and predicted that the EDU would take 75% longer to complete than the vendor schedule.In actuality, the EDU took roughly 150% longer than the original vendor schedule-2× longer than the government team prediction.
Throughout the process of writing this paper, this author has tried to remember and capture lessons learned during the mirror technology development effort as follows.
• Start with very clear specifications and performance metrics.
• Examine a wide solution trade space-do not limit your trade space too early.
• Use a competitive down-select process to rapidly and cost effectively develop technology.
• Place the effort under a single Government Principal Investigator and Insight/ Oversight Team.• Use a single Government Team to certify compliance with performance metrics.
• Do not trust models to validate performance-validate performance by testing at a relevant scale in a relevant environment.Then iterate until the model matches the data within the allocated error budget uncertainty.• It is nearly impossible to have sufficient "as-built" information to model a mirror's performance to optical specifications.For example, CTE homogeneity is critical for achieving stable thermal performance, but it is nearly impossible to achieve a high resolution 3D asbuilt CTE map.• Plan for failure and statistically improbable events.Mirrors break, bend, or fracture; mechanisms fail; micrometeoroids happen.• Technology development costs more and takes longer than what anyone estimates-maybe as much as 2× more and longer.• Stiffness is more important than areal density.
• CTE homogeneity and uniform properties are critical for stable thermal performance.
• Avoid complexity; it is expensive and risky.The simplest solution is always the best solution.
• Make the mirror as large as possible.Polishing edges is hard.Mechanisms are complex and have had infant mortality up to 30% • Large mirrors are harder to make than small mirrors.Demonstrate technology and processes on the smallest relevant mirror and then scale up by factors of 2×.
• You cannot manufacture something that you cannot test, and you cannot be certain that you are testing it right unless you have an independent confirming test.• Things do not behave the same at 30 K as they do at 300 K and-without experience-your intuition about how they will behave is probably wrong.• Iterate the design, and then iterate again.
• Full-scale pathfinders and EDUs are extremely valuable.If possible, make the flight spares before starting flight mirror production.• Manage the transition to production to maximize learning and minimize forgetting.
• Transparently include all stakeholders and consider alternatives to gain a consensus decision.• Most importantly, there is no substitute for relevant experience.
Disclosures
This author declares no potential conflicts of interests with respect to the research, authorship, or publication of this article.
H. Philip Stahl is a senior optical physicist at NASA MSFC and is a leading authority in optical systems engineering and metrology.He matures technology for large space telescopes; was a PI for SMBD, NMSD, and AMSD; was responsible for Webb Telescope mirror fabrication and certification; and developed the "Stahl" telescope cost model.He is a recipient of NASA's DSM; a fellow of SPIE and OSA; and the SPIE 2014 President.He earned his PhD in 1985, his MS degree in optical science from the University of Arizona in 1983, and his BA degree in physics/mathematics from Wittenberg University in 1979.
previous relevant experience developing the test setup for the Keck segments.Although unknown at the time, the solution for how to test the technology development and flight Webb mirrors (the PhaseCAM and Advanced Distance Meter) was a combination of this relevant prior experience and serendipity.
A
related risk was the change in the ROC as a function of temperature.The AMSD Be mirror predicted a radius change of −13 mm, and a change of −13.06 mm was measured.Its radius change sensitivity over the operating range is only −0.1 ppm∕K.The ULE ® mirror predicted a radius change of þ1.4 mm and measured a change of −4.3 mm.Over the operating range, the radius sensitivity was about −1 ppm∕K.Consequently, it was concluded that a ULE ® is more susceptible to uncertainty in operating temperature and thermal gradients than a Be mirror.
Fig. 17
Fig. 17Figure change from exposure to three axis sine burst testing to design limit loads.
Fig. 17Figure change from exposure to three axis sine burst testing to design limit loads.
Fig. 18
Fig. 18 Figure change from exposure to three axis sine burst testing to design limit loads and first acoustic test.
Fig. 19
Fig. 19Figure change from exposure to second acoustic test.
Fig. 19Figure change from exposure to second acoustic test.
Table 1
Webb optical system requirements versus 1996 state-of-the-art.
Table 2
Mandatory performance criteria.
Table 3
Secondary performance criteria.
Table 4
Webb primary mirror segment assembly design changes from AMSD.
Table 6
Webb mirror technology versus state-of-the-art.
Table 7
Mirror technology success criteria. | 17,551.8 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
FINANCIAL MARKET OF UKRAINE: STRUCTURE AND DEVELOPMENT TRENDS
. The purpose of this work is to study the current state and trends in the financial market of Ukraine. Methodology . The following methods of scientific research were used: analysis and synthesis; theoretical generalization; abstraction and comparison; systematic approach. Results . The article reports that the financial market of Ukraine is in the process of formation, which is due to the uneven spatial dynamics of financial capital in the course of market reforms; heterogeneity of economic and financial space of the country; fundamental asymmetry between regions in terms of financial capacity; localization of financial institutional infrastructure; level of investment attractiveness, etc. It was found that the main problem of the financial market of modern Ukraine is the inability to provide an effective redistribution of financial resources needed to meet the challenges of modernizing the domestic economy and create an innovative impetus to the reproductive processes. Practical implications . To improve Ukraine's financial sector and boost economic growth, it is necessary to create an effective mechanism for improving banks' lending policies. To develop bank lending and create financial stability in this period it is necessary to make banking services of higher quality to improve their competitiveness; to set maximum interest rates on loans within the framework of state acts and monitor their implementation; to encourage commercial banks to lend to innovative projects; to increase the size of main assets of state banks, increase their number and strengthen their role in the financial and credit market of Ukraine, etc. Value / originality . It has been researched that the structure of the financial market is dominated by the number of companies engaged in the provision of loans, loans, financial leasing, the provision of guarantees; carrying out factoring, currency exchange and money transfer operations. For the development of bank lending it is necessary to improve banking services to improve their competitiveness; set maximum interest rates on loans within state acts and monitor their implementation; stimulate lending by commercial banks to innovative projects; increase capital resources of state banks, increase their number and strengthen their role in the financial and credit market of Ukraine, etc. These measures will contribute to the recovery of the national economy and increase the level of financial efficiency of the domestic banking system.
Introduction
The financial market of modern Ukraine is still in the process of formation.Its main problem is the inability to provide effective redistribution of financial resources needed to solve the problems of modernization of the domestic economy and create an innovative impetus for reproduction processes.This is largely due to uneven spatial dynamics of financial capital in the course of market reforms; heterogeneity of economic and financial space of the country; fundamental asymmetry of regions by financial potential; localization of financial institutional infrastructure and the degree of investment attractiveness, etc.Domestic financial market, its further integration into the world financial system in accordance with its requirements, the combination of interests of all economic subjects should be provided by effectively functioning modern infrastructure, a key component of which is the financial market infrastructure.Distrust of owners of financial resources to financial intermediaries, as well as other destructive economic, political and social factors against the background of increasing stagnation processes in the state, do not allow to realize the investment potential of each subject -the owner of financial resources.
However, today among the scientific approaches and in practice there is no unambiguous and clear understanding of the classification, hierarchy of interdependence of infrastructure institutions, their elements in the economic system, quantitative assessment of performance, which does not allow to fully realize the potential of the financial system of Ukraine.Despite the importance of the available developments of researchers, the current state of functioning of the financial market infrastructure in Ukraine, some of its theoretical and practical aspects still remain insufficiently studied.
Structure of the financial market
The analysis of the financial condition will begin with the dynamics of financial market companies (Figure 1).Analysis of the data of Figure 1 shows that in 2016-2020 the number of financial companies engaged in the provision of financial loans, credits, financial leasing, guarantees and securities prevailed.They carry out factoring operations, currency exchange The smallest number of companies is characteristic of banks and non-state pension funds.In 2020, the number of banks will be 75, which is 99 less than in 2013 (reduction by more than 2 times).The number of private pension funds in 2020 will be 63 companies.The number of non-state pension funds decreases throughout the study period.
Studying the dynamics of the number of companies, we came to the conclusion that the financial market of Ukraine is dominated by financial companies, pawnshops and credit unions.But this conclusion is incorrect, because the role of one or another type of company on the market is characterized not by the number, but by the volume of activity (volume of assets, capital) (Rekunenko, 2014).
Credit unions and pawnshops are in second and third place by the number of market participants.Moreover, the number of these companies in the financial sector has decreased significantly in recent years.The maximum number of credit unions was observed in 2014 -739 companies, and pawnshopsin 2015 -482 companies.After that we can observe a decrease in the number of both companies.
On the fourth place are insurance companies, the number of which in 2020 was 225 companies.
The maximum number of insurance companies in 2013 was 414.
Let us consider the dynamics of assets of insurance companies, banks, financial companies and credit institutions (Figure 2).The assets of non-state pension funds and pawnshops were not taken into consideration during the review, as their volume is insignificant comparing to other types of participants in the financial market of Ukraine.
The data in Figure 2 shows that the absolute leader in terms of assets in the financial sector of Ukraine are banking institutions, which indicates the bankcentric nature of the financial system of Ukraine.In general, the volume of assets of the banking sector in 2014-2020 increased, except for 2017-2018.This decrease is characterized by a decrease in the number of banks operating in the market.In 2014-2020, the amount of assets of banks increased by 20.7% and in 2020 amounted to 1,360,764 million UAH.As of 2020, the assets of banks were 21.4 times higher than the assets of insurance companies, assets of financial companies -10.7 times, assets of credit institutions -619.5 times.
Analyze the amount of capital of insurance companies, banks, financial companies and credit institutions (Figure 3).
The data of Figure 3 indicate a significant predominance of bank capital over the capital of other This predominance of assets and capital of banks over the assets and capital of other participants in the financial market of Ukraine allows us to conclude that banks are the most important participants in the financial market of Ukraine (Berzhanir, 2020).
The data in Table 1 show that the assets of banking institutions increased by 32.58% between 2013 and 2020, indicating the development of the banking sector.This increase in assets is a particularly positive trend, given that the number of banks during the study period, on the contrary, decreased -by 56.9% (from 174 banks in 2013 to 75 banks at the end of 2020).In addition, the volume of loans increased by 26.35%, which also indicates the expansion of banking institutions and credit expansion of banks.This situation also indicates economic growth in the country.
Banking sector
Another positive trend is the growth of banks' deposits from individuals and legal entities.This growth in 2013-2020 was 78.4%.A decrease in funds on deposits with banks was observed only in 2015, when the banking crisis arose, which led to an outflow of funds from Ukrainian banks.
Insurance market
Next proceed to the analysis of the key indicators of the insurance market of Ukraine, which are presented in Table 2.
The analysis of these data shows a fairly rapid development of the insurance market in Ukraine.On average the number of concluded contracts of obligatory insurance against accidents, including transport, increases almost by 8-9 million units per year.The dynamics of the number of concluded contracts other than compulsory insurance against accidents, including transport, is not so positive.
In 2016-2020 their number decreased by 26.43%.The minimum value of the number of concluded contracts was observed in 2017 -61272,8 thousand units, which was a consequence of the crisis of 2015-2016, which reduced solvent demand for insurance services.Since 2018, the number of contracts began to grow, which indicates the recovery of the insurance market.
Volumes of reinsurance (premiums received by reinsurers) are also growing: by 68.63%.At the same time, reinsurers' payouts are growing faster than premiums -by 86.37% over the last 5 years.This situation indicates a decrease in profitability of reinsurers, which is a negative trend.
A positive trend is the growth of insurance reserves -by 60.85% over the last 5 years.Technical reserves significantly exceed mathematical reserves during 2015-2019, which is the result of the prevalence of insurance other than life insurance.
Assets of insurance companies also increased during this period, although insignificantly -only by 5% over 5 years.In 2017 there was a decrease in the volume of insurers' assets, which was connected with a decrease in the number of concluded contracts in that year.
Thus, the insurance market suffered from the consequences of the crisis of 2015-2016.Since 2018, the market began to resume its activity, which is reflected in its key indicators.In the author's opinion, the analysis should begin with the analysis of the main stock index of the country -PFTS.This index has been calculated in Vol. 3 No. 1, 2022 Ukraine since October 1997.The PFTS index is a price index weighted by the volume of free-floating shares.The PFTS index basket includes shares of 20 issuers.
Stock market
Based on the data in Figure 4 we can conclude that the greatest increase in trading was observed in 2019, when the value of the index was 562.91 points.The minimum value of the index in 2016 was 240.7 points, which was due to the economic and political crisis of 2015-2016.
In general, the value of the PFTS index indicates the weak development of the stock market and a rather large volume of transactions (the value of the index in 2020 is almost twice less than in 2007 before the 2008 crisis).Although since 2017 there has been some positive dynamics of the index (Aleksejenko, 2004).
In 2016-2018 the securities market decreased by UAH 1,863.25 billion (the most significant decrease was in 2017 -by UAH 1,658.86 billion or 71.14%).This decrease in transactions was due to the NSSMC's stock market clearing procedure and the increase in the transparency of transactions in the stock market segment (Berzhanir, 2021).
The procedure included the introduction of new listing requirements for securities on stock exchanges, based on European standards and allowing only high-quality securities to be listed.In addition to the reduction in the number of companies whose securities were listed under the new rules, the reduction in market transactions in 2018 was also affected by the fact that NBU deposit certificates were no longer taken into account in trade settlements.
In 2019, the market began to grow, up 26%, which is a result of the stabilization of the country's economy.
The ratio between the volume of GDP and the volume of transactions in the stock market is also important for the development of the country's stock market.During 2013-2016, the volume of trading in the market exceeded the volume of GDP, i.e., was more than 100% of GDP.Since 2017, on the contrary, the volume of GDP began to prevail.In 2018-2019, there was a significant decrease in the volume of securities trading compared to GDP, which indicates a low level of stock market development.In countries with a high level of economic development, this indicator should always be more than 100%.
Conclusions
The most active participants of the financial market of Ukraine are commercial banks, insurance companies, stock exchanges, as they account for the largest share of financial resources of the state.
To improve the financial sector and increase economic growth in Ukraine, it is necessary to create an effective mechanism for improving the credit policy of banks.For the development of bank lending and creation of financial stability in this period it is necessary to improve banking services in order to increase their competitiveness; to set maximum interest rates on loans within the framework of state acts and control their implementation; to stimulate lending by commercial banks of innovative projects; to increase the capital resources of state banks, increase their number and strengthen their role in the financial and credit market of Ukraine, etc.These measures will contribute to the recovery of the national economy and improve the financial efficiency of the domestic banking system.
An important component of the financial market is the insurance market.Insurance companies act as financial intermediaries, and their role is to reduce transaction costs associated with the movement of funds from savers to borrowers by accumulating significant funds from thousands of premium payers.Vol . 3 No. 1, 2022 The insurance market is still underdeveloped, its financial potential is not able to meet the needs of customers at the expense of small insurance premiums.An important problem is the large market value of insurance companies.To the main officials how to streamline the development of the insurance market can be derived: the undeveloped infrastructure of the insurance market, poor development of mediation and reinsurance market, as well as a guaranteed legal framework.Holders should put their own priority on the reorganization of the insurance market of Ukraine on the requirement of stable insurance activity, reducing the volume of insurance services and satisfaction of insurance customers.
An important element of the financial market of Ukraine is the stock market.The stock market is characterized primarily by the volume of exchange trading in securities.According to the results of trading in the organized market in January-December 2020, the volume of exchange-traded securities contracts amounted to 335.41 billion UAH.
To sum up, today the main task in Ukraine is to develop and implement a permanent mechanism for improvement and development of the financial market, taking into account global trends.
Figure 1 .
Figure 1.Dynamics of the number of companies in the financial sector of Ukraine in 2013-2020, pcs.Source: developed according to the official NBU data Figure 2. Asset Dynamics of Insurance Companies, Banks, Financial Companies and Credit Institutions for 2014-2020, UAH million Source: developed according to the official NBU data Figure 3. Capital Dynamics of Financial Companies, Insurance Companies, Banks and Credit Institutions in 2014-2020, UAH million Source: developed according to the official NBU data
Figure 4 .
Figure 4.The dynamics of the PFTS index in 2013-2020 | 3,390.8 | 2022-02-18T00:00:00.000 | [
"Economics",
"Business"
] |
Optimal control of Raman pulse sequences for atom interferometry
We present the theoretical design and experimental implementation of mirror and beamsplitter pulses that improve the fidelity of atom interferometry and increase its tolerance of systematic inhomogeneities. These pulses are designed using the GRAPE optimal control algorithm and demonstrated experimentally with a cold thermal sample of 85Rb atoms. We first show a stimulated Raman inversion pulse design that achieves a ground hyperfine state transfer efficiency of 99.8(3)%, compared with a conventional π pulse efficiency of 75(3)%. This inversion pulse is robust to variations in laser intensity and detuning, maintaining a transfer efficiency of 90% at detunings for which the π pulse fidelity is below 20%, and is thus suitable for large momentum transfer interferometers using thermal atoms or operating in non-ideal environments. We then extend our optimization to all components of a Mach–Zehnder atom interferometer sequence and show that with a highly inhomogeneous atomic sample the fringe visibility is increased threefold over that using conventional π and π/2 pulses.
Introduction
Atom interferometers [1] are the matterwave analogues of optical interferometers. Slow, massive atomic wavepackets replace the photons that are divided to follow separate spatial paths before being recombined to produce interference; and, in place of the mirrors and beamsplitters, carefully-timed resonant laser pulses split, steer and recombine the wavepackets. Atom interferometers have already demonstrated unprecedented performance for inertial measurement, with potential applications such as navigation [2][3][4][5], the detection of gravitational waves [6,7], measurements of the fine structure constant [8,9] and the Newtonian gravitational constant [10], and investigations of dark energy [11,12].
As with an optical interferometer, the sensitivity of an atom interferometer depends upon the lengths and separation of the interfering paths and the coherence and number of quanta detected. Whereas optical interferometers are possible on the kilometre scale using ultra-stable lasers and optical fibre components, the path separations in atom interferometers result from momentum differences of only one or a few photon recoils, and expansion of the atom cloud limits the interferometer duration. Large momentum transfer (LMT) interferometers increase the path separation by employing repeated augmentation pulses to impart multiple photon impulses [13], but any inherent sensitivity improvements thus achieved are, in practice, limited by a reduction in fringe visibility resulting from the accrued effect of repeated operations with imperfect fidelity [14,15]. LMT interferometers typically rely on an atomic sample with a narrow initial momentum distribution [15,16], with Bloch oscillations [17][18][19] and Bragg diffraction [20][21][22] demonstrating the greatest separation, but filtering the atomic sample in this way to reduce the effects of inhomogeneities and cloud expansion involves lengthier preparation and causes a fall in the signal-tonoise ratio because fewer atoms are measured.
For applications such as inertial navigation where both the sensitivity and repetition rate are important, techniques are required that are more tolerant of experimental and environmental inhomogeneities in laser intensity, magnetic field, atom velocity and radiative coupling strength. Adiabatic transfer [23][24][25][26][27] allows robust, high-fidelity state transfer, but is necessarily a slow process not suited to preparing or resolving superpositions [28]. Composite and shaped pulses [29][30][31][32] are attractive alternatives. Originally developed for nuclear magnetic resonance (NMR) spectroscopy, composite pulses are concatenated sequences of pulses with tailored phases and durations that can replace the fractional Rabi oscillations in atom interferometers and increase the tolerance of inhomogeneities in the atom-laser interaction [14,33].
We have previously investigated the application of optimal control techniques to the optimization of mirror pulses for interferometry, showing computationally how this can maximize interferometer contrast by compensating for realistic experimental inhomogeneities in detuning and coupling strength [49]. We now build on this, presenting the theory and experimental implementation of a high-fidelity inversion pulse and a novel approach to optimizing an entire 3-pulse interferometer sequence.
Our inversion pulse achieves 99.8(3)% transfer between the two hyperfine ground states in a thermal sample of 85 Rb where a rectangular π pulse achieves only 75(3)%, with a greater velocity acceptance than existing composite and shaped pulses making it particularly suited for LMT applications. Our optimized 3-pulse interferometer demonstrates a threefold increase in the experimental fringe visibility with a 94(4) μK atom sample compared to a conventional Mach-Zehnder interferometer using rectangular pulses. This is, to our knowledge, the first demonstration of shaped individual beamsplitter pulses preparing momentum superpositions being used to improve the contrast of an atom interferometer.
Theoretical system and optimization approach
We consider an alkali atom undergoing stimulated Raman transitions between hyperfine levels, forming an effective 2-level system described using the basis states ñ p g, | , and [50]. p is the atomic momentum and k L represents the difference between the wave vectors of the two lasers, or 'effective' wave vector. The change in state under the action of a pulse of constant intensity and combined laser phase f L , acting for duration Δt, is described by a propagator where C and S are defined as [51]: Ω R is the two-photon Rabi frequency, and δ is the two-photon Raman detuning [1], which depends on atomic momentum and is assumed to be approximately constant for the duration of the pulses. This is often achieved by chirping the frequency difference of the Raman beams to account for the Doppler shift caused by gravitational acceleration [50].
Rectangular π/2 and π pulses are the building blocks of conventional interferometer sequences, and result from fixing the duration of a pulse with constant laser intensity and frequency such that the quantum state undergoes a π/2 or π rotation about an axis in the xy plane of the Bloch sphere [52]. Variations in the detuning (e.g. due to thermal motion) and Rabi frequency (e.g. due to beam intensity variation) across the atom cloud degrade the interferometer signal [53,54]. In the NMR literature these errors are referred to as 'off-resonance' and 'pulse-length' errors respectively. If the individual pulses no longer perform the intended operations for each atom, the contrast of the interferometer fringes falls, their offset varies, and the inertial phase is modified [49,51,55]. Although the problem of intensity inhomogeneity in atom interferometry may be compensated by the use of collimated top-hat laser beams [56], our approach obtains tailored pulses that achieve mutual compensation of inhomogeneous coupling strengths and large Raman detunings without the need for additional optical elements.
We define our pulses in terms of multiple discrete time 'slices' where the combined Raman laser phase f L takes a different value for each equal timestep t d in the pulse. They are therefore described by profiles f L (t)={f 0 , f 1 , K, f N }, with a total duration τ pulse . Although we could also choose to vary the pulse amplitude with time, for experimental simplicity we have considered pulses with constant amplitude profiles in this work. The laser phase forms our control parameter, and the action of a given pulse on our effective two-level system may be described by a sequence of pulse propagators ) . The action of a pulse is then a time-ordered product of propagators where the propagator for the nth timestep U n takes the form of a rectangular pulse of fixed laser amplitude and phase acting for duration t d (equation (1)) [49,51].
Optimal control theory finds the 'best' way to control the evolution of a system so as to maximize some fidelity or 'measure of performance'. Often, this fidelity is taken to be the accuracy with which initial states are driven to target states by the optimal modulation of available control fields. We employ the gradient ascent pulse engineering (GRAPE) [38] algorithm to design optimal pulses for our purposes. Given an initial guess for the pulse and a choice of pulse fidelity, GRAPE efficiently calculates the required propagator derivatives and recent improvements permit the use of a fast second order optimization, known as the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) quasi-Newton method [41]. By defining an ensemble of systems with a distribution of atomic detunings (giving rise to 'off-resonance' errors) and variations in coupling strength (or, equivalently, 'pulse-length' errors), a robust pulse that maximizes the chosen fidelity over the ensemble may be obtained by averaging over the ensemble in the fidelity calculation. The length and number of timesteps are chosen at the outset and fixed when optimizing a pulse. Longer pulses can typically achieve higher terminal fidelities [39,42]. The spin dynamics simulation software Spinach [40], and its optimal control module, was used to optimize pulses in this work.
Measures of pulse performance
The choice of pulse fidelity used in the optimization depends on the application, and requires a careful consideration of the experimental requirements. For example, we may write our pulse fidelity, , for a given atom as a function of the overlap of a chosen target state y ñ T | with the final state after application of the pulse to our initial state y ñ 0 | : | |ˆ| | . This fidelity, when maximized, gives us a state-transfer or 'point-to-point' (PP) pulse [39]. Alternatively, the goal of the optimization may be to recreate a specific target propagator U T and we consider the fidelity to be = U U Tr T 1 2 (ˆˆ) † , yielding a so-called 'universal rotation' (UR) pulse [42]. Many other choices of pulse fidelity are available, however, including those which map a range of initial states to different targets, and are not aimed at obtaining full universal rotations [46]. We discuss appropriate fidelity choices that maximize fringe visibility and minimize any unwanted spread in the inertial phase for two cases. Firstly, we consider a fidelity choice for pulses used to impart additional momentum in extended LMT interferometer sequences. Secondly, we present measures of performance for each pulse within a threepulse 'Mach-Zehnder' interferometer sequence.
In LMT interferometers, the beamsplitter and mirror operations are extended by multiple 'augmentation' pulses with alternating effective wavevectors designed to swap the population of the internal states whilst imparting additional momentum [13,14,26]. In order to optimize an augmentation pulse for LMT interferometry, which we represent by the propagator U  , it may be sufficient to consider the 'point-to- without concern for the relative phase introduced between the two states. This is because, in LMT interferometers, the augmentation pulses appear in pairs within the extended pulse sequence [13,14,33] so that the interferometer phase introduced by each pulse is, to first order, cancelled out by that introduced by a subsequent one. Choosing to optimize a PP operation as opposed to a UR pulse effectively gives the GRAPE algorithm a larger target to shoot at, allowing impressive fidelity to be achieved with a modest pulse area. In a three-pulse Mach-Zehnder interferometer sequence, π/2 and π pulses are applied to atoms initially in the state ñ g | separated by equal 'dwell times' for which the fields are extinguished. Inertial effects such as rotations and accelerations imprint a relative phase Φ between the internal states at the end of the interferometer, which is mapped onto a population difference by the final π/2 pulse. The resulting excited state probability is where and are the offset and contrast of the interferometer fringes respectively. Δfrepresents a fixed shift to the inertial phase used to scan the fringe pattern, and must be constant from atom to atom. An optimal 3-pulse interferometer sequence, represented by propagators U U U , , 1 2 3ˆˆs hould maximize the contrast , minimize any unwanted variation of/in the inertial phase, Δf, and fix the offset for all atoms with the range of detunings and coupling strengths found in the atomic cloud. The optimal contrast and offset are achieved for all atoms if the following conditions on the 3 pulses are met: We interpret these conditions as requiring our beamsplitter pulses (pulses 1 and 3) to perform a 90°rotation of the quantum state about an axis in the xy plane of the Bloch sphere. Similarly, the optimal mirror pulse (pulse 2) must perform a robust 180°rotation of the quantum state. It is crucial for interferometry that the inertial phase Φ is not modified by a different amount for each atom as a result of the pulse sequence. This is equivalent to requiring the combined phase shift due to the pulse sequence, Δf, to be fixed or cancelled by the pulses for every atom, where We use the notation f á ñ a b ( | ) to indicate the argument of the overlap á ñ a b | . If Δfvaries from atom to atom as it does with some pulses [49,55], the resulting contrast after thermal averaging will be washed out. A careful choice of pulse fidelity for the beamsplitters and mirrors will lead to pulses that maximize interferometer contrast and minimize unwanted variation in the interferometer phase Φ from atom to atom.
Our first interferometer pulse U 1 is designed to take atoms from the initial ground state, to a balanced superposition with well-defined phase on the equator of the Bloch sphere, . This means our beamsplitter fidelity, 1 , can be written as yielding a PP 90°pulse. This choice of fidelity, if maximized, results in a beamsplitter which satisfies the conditions á ñ = U e g 1 2 . The mirror pulse, U 2 , acting after the first period of free evolution, is designed to swap the internal states, but introduce no relative phase between them, satisfying conditions á ñ = U e g 1 2 2 | |ˆ| | . This pulse is designed to perform a π rotation on the Bloch sphere about a fixed axis in the xy plane. We can therefore consider the mirror pulse fidelity to be a measure of how close our pulse propagator is to that for an ideal π rotation, p Û , and optimize the UR 180°fidelity Tr 10 2 2 (ˆˆ) ( ) † [42,49] . Designing the mirror pulse as a universal rotation means that variations to the inertial phase Φ that vary with δ and Ω R are minimized. We also note that if the pulse profile of the mirror pulse is made to be odd or antisymmetric about its temporal midpoint, then any modification to the inertial phase will be constant for all δ and Ω R . This follows from the fact that pulses with this symmetry fix the axis of rotation to the xz plane of the Bloch sphere for all resonance offsets, a property known in the NMR literature [42,57,58]. Antisymmetric UR 180°p ulses may be constructed by first optimizing a single PP 90°p ulse and following the steps outlined by Luy et al [59]. Finally, the third pulse is designed to accurately map the relative phase acquired between the two internal states at the end of the second dwell time, onto a difference in atomic population. This pulse does not need to be a universal 90°rotation, as only the z-component of the final Bloch vector matters when measuring the excited state population at the end of the interferometer. The action of this pulse can be thought of as a phase sensitive π/2 rotation, and may be obtained by taking the pulse profile for the first pulse f L (t), and reversing it in time about the temporal mid-point and inverting the pulse profile to obtain −f (τ−t). This 'flip-reverse' operation relies on the symmetry properties of spin- 1 2 propagators [60,61]. The result is a pulse which satisfies á ñ = U e g 1 2 , thus satisfying the condition on Δf. Optimizing the 3-pulse interferometer with these symmetry constraints (the final pulse has the timereversed and inverted profile of the first beamsplitter) minimizes the unwanted modification to the inertial phase term Φ and maximizes the contrast of resulting fringes. Figure 1 shows pulses obtained using GRAPE optimizing a PP 180°inversion pulse, a PP 90°beamsplitter pulse, and an antisymmetric UR 180°mirror pulse. The duration of each pulse, the number of timesteps, and the initial guess for the pulse, are chosen in advance. The pulse duration and timestep number are chosen so that a sufficiently high terminal fidelity (0.99) can be achieved. The chosen fidelity is averaged over an ensemble to obtain robust solutions. The ensembles are defined in terms of a sample of off-resonance and pulselength errors. The sample of off-resonance errors is taken to reflect the variation in detuning caused by the momentum distribution of the atoms at a given temperature along the Raman beam axis, which we assume to follow a Maxwell-Boltzmann distribution. Choosing to optimize over an ensemble representing a larger temperature will result in a pulse that is robust over a larger range of detunings, and will therefore still have high fidelity if the temperature of the atomic ensemble is reduced.
Results of optimizations
Taking the beamsplitter and mirror pulses shown in figure 1, we can simulate how the resulting interferometer contrast varies with detuning and variation in coupling strength. Figure 2 compares the simulated interferometer contrast obtained with our 'flip-reverse' GRAPE sequence, and the standard rectangular π/2−π−π/2 sequence, over a range of pulse-length errors and detunings. We also show the simulated contrast following a sequence composed of rectangular pulses, but where the central π pulse is replaced by an efficient state-transfer PP composite pulse known as WALTZ [14,33,62]. We find that the interferometer contrast with our optimized GRAPE pulses is more resilient to variations in detuning and coupling strength. For example, figure 2 shows that the 'flip-reverse' sequence is expected to maintain an interferometer contrast above 90% for a range of , and an antisymmetric mirror pulse 2 (c). The pulse profiles are plotted against time as a fraction of the duration of a rectangular π pulse, t π . Each optimization was continued until fidelities greater than 0.99 were reached. Each pulse was optimized for an ensemble of atoms with a temperature of 120 μK and a range of coupling strengths of±10% Ω eff . detuning which is 4.6 times greater than that for a sequence of rectangular pulses, and 1.3 times greater than when employing the WALTZ composite pulse.
Although WALTZ is able to increase the contrast for a range of atomic velocities and variations in Rabi frequency, the resulting interferometer phase Φ exhibits a large unwanted variation as a function of the Raman detuning, as shown in figure 3. This is because WALTZ is a point-to-point transformation, and therefore not suitable to replace the central mirror pulse in an atom interferometer. If atoms with different velocities obtain different interferometric phases following a pulse sequence, the interference fringes will be washed out following thermal averaging [49,55]. However, figure 3 also shows that the interferometer phase following our antisymmetric 'flip-reverse' sequence is expected to be insensitive to variations in Raman detuning, emphasising the potential applicability of robust antisymmetric pulse sequences.
Experimental procedure
We have implemented our pulses in our experimental setup, a description of which can be found in previous work [33,63], but we take a moment here to outline the most salient points.
We realize our pulses on a thermal cloud of ∼10 7 85 Rb, released from a 3D magneto-optical trap (MOT), cooled in an optical molasses for ∼5ms and optically pumped into the 5S 1/2 , F=2 state with a distribution over the five m F sublevels which, as the pumping is performed with the MOT light along all axes, is assumed to be uniform [33].
The cloud temperature is tuned by adjusting the intensity of the cooling light during the optical molasses and is measured by performing Raman Doppler spectroscopy with the Raman beams at low power [64]. The temperature can be adjusted in the range of 20-200μK and, combined with the multiple Rabi frequencies due to the different coupling strengths of the five m F levels present in the sample, provides an inhomogeneous system with which to explore the performance of pulses designed to provide robustness against the resulting off-resonance and pulse-length errors.
Interferometry pulses are realised via horizontal, counterpropagating beams with orthogonal linear polarizations that interact with the released cloud to drive two-photon Raman transitions on the D2 line between the F=2 and F=3 hyperfine ground states (states ñ g | and ñ e | respectively). Both laser fields are ∼10GHz detuned from single-photon resonance with the intermediate ¢ = F 2, 3 states of the 5P 3/2 manifold, thus allowing our atoms to be modelled as effective two-level systems [1], and the polarization arrangement removes the m F dependence of the light shift [33].
One beam is formed from the first diffracted order of a 310 MHz acousto-optic modulator (AOM) and the other from the carrier suppressed output of a 2.7 GHz electro-optic modulator (EOM). The modulator frequencies sum to the hyperfine splitting between the ground states plus a detuning Figure 2. Simulated 3-pulse interferometer contrast for (a) standard rectangular π/2−π−π/2 sequence, (b) rectangular sequence where the π pulse is replaced by the WALTZ composite pulse [62], and (c) our GRAPE 'flip-reverse' sequence, computed for a range of offresonance and pulse-length errors. Contours are at 0.6, and 0.9 respectively. The effective Rabi frequency Ω eff and π pulse duration t π are defined by requiring that Ω eff t π =π. The robustness of the GRAPE sequence to these inhomogeneities is shown by the increased area of high contrast centered on resonance. Simulated interferometer phase, Φ, as a function of the Raman detuning for different interferometer sequences. The interferometer phase following the rectangular π/2−π−π/2 sequence does not exhibit variation with the Raman detuning, but if the π pulse is replaced by a PP pulse such as the composite WALTZ pulse there is an unwanted variation in Φ. This variation of Φ with δ means that the ensemble averaged interference is washed out and contrast is reduced [49,55]. Following our antisymmetric 'flipreverse' sequence, Φ has no dependence on the Raman detuning, highlighting the applicability of interferometer sequences designed with antisymmetry. However, if the final beamsplitter is simply the same as the first and no 'flip-reverse' procedure is applied, the result is a large variation in Φ.
δ applied to the carrier frequency of the EOM RF signal. The beams are combined on a single AOM to shutter the interaction light on and off with a rise-time of ∼100ns before being separated by polarization and delivered to the atom cloud through separate polarization-maintaining fibers.
The phase of the RF signal driving the EOM is modulated with a Miteq SDM0104LC1CDQ I&Q modulator, the I and Q inputs of which are controlled by the dual outputs of a Keysight 33612A arbitrary waveform generator (AWG). To realize a phase sequence f n , n=1, 2, K, N, the outputs are programmed with the waveforms f = I sin n n ( ) and f = Q cos n n ( ) and are configured to hold the final phase value f N until a hardware trigger is received and the waveforms are played at a sample rate that is adjusted to set the total duration of the modulation τ mod 5 . As in our previous work, the fraction of atoms P e in the excited state following a Raman pulse is determined from a normalized measurement of the amplitude of the cloud fluorescence upon illumination with the MOT cooling beam in a read-out procedure that is fully detailed in [65]. By concurrently triggering the phase modulation AWG and the AOM that shutters the Raman beams to initiate a pulse, then measuring P e once the AOM shutter has been turned off again after a variable time τ, we can track the temporal evolution of the atomic state during a phase sequence. The hardware trigger delay and sample rate of the AWG are adjusted in order to vary the start time τ offset and duration τ mod of the phase waveform respectively (figure 4) and maximize the peak excited fraction in these temporal scans.
Having chosen the experimental pulse timings such that the fidelity is maximized at a single value of the Raman detuning δ, set to the centre of the light-shifted resonance determined from the spectral profile of a rectangular π-pulse, a spectral profile can be measured by measuring P e upon completion of the pulse at a range of detuning values.
Multiple pulses can be performed sequentially, separated by periods of free evolution τ dwell , in order to test optimized Above: optimal state transfer pulse designed using GRAPE to transfer atoms between levels ñ g | and ñ e | . Below: measured fraction (circles) of atoms in the excited state P e after the Raman light is extinguished at various times during a pulse. The solid line is a theoretical curve produced by the model from [33], in which the twolevel Hamiltonian is numerically integrated over the range of detunings and coupling strengths present in a thermal cloud of 85 Rb atoms, and assumes that the light reaches full intensity instantaneously and concurrently with the start of the phase modulation. Excellent agreement is observed for a simulated temperature of 35μK. is turned on at t=0, and off again at t=τ by an AOM with a risetime of ∼100ns. The sample rate of the AWG controlling the phase (bottom) is adjusted to set the total duration of the phase waveform τ mod , and this is adjusted together with the trigger delay τ offset to achieve the optimal peak transfer when τ is scanned. When the phase waveform has finished, the phase is maintained at its final value f N .
interferometer sequences as illustrated in figure 5. The interferometer contrast is then tested by varying a phase offset f bs applied to the phase sequence for the final beamsplitter pulse between 0 and 4π and fitting a sinusoidal function to the resulting fringes in P e .
Results: LMT inversion pulses
We have designed a population inversion pulse using GRAPE that maximizes the transfer of atoms initially in the state ñ g | to the state ñ e | for a cloud with a temperature of 120 μK and a large variation in Rabi frequency of W 10% eff . The pulse duration was chosen to be 12 μs for a Rabi frequency of 310 kHz, making it 7.4 times longer than a rectangular π-pulse, and allowing for a high terminal optimization fidelity. This pulse had 100 timesteps and the algorithm converged to the symmetric waveform shown in figures 1 and 6 when optimizing the point-to-point fidelity A with a penalty term added, proportional to the difference between adjacent pulse steps, to enforce waveform smoothness [43]. We found that increasing the number of timesteps in this pulse led only to a negligible increase in fidelity.
The temporal profile, after optimizing τ offset , is shown in figure 6, showing a peak in the excited fraction P e at the end of the phase sequence, represented by the vertical dashed line, after which damped Rabi oscillations are evident as the phase is kept constant.
We find that optimized pulses demonstrate a considerable resilience to variations in the trigger delay τ offset , and can be started as much as a quarter of a π-pulse duration after the light is turned on with little change to the peak transfer.
The highest excited fraction for all pulses is achieved when the light is kept on for slightly longer than the phase sequence, with τ>τ mod by ∼200ns. The AOM rise-time is not factored into the optimization process and this is the only effect of it that we observe experimentally, with the temporal data being well fitted by a numerical model that assumes the light reaches full intensity instantaneously at the start of the phase sequence as shown in figure 6. cloud. Data are shown for our GRAPE pulse (diamonds), the WALTZ pulse (empty circles) and a rectangular π-pulse (filled circles). The effective Rabi frequency was 270 kHz. Solid lines show theory curves produced from the model used by Dunning et al [33], which assumes a Maxwell-Boltzmann atomic velocity distribution. The offset of the peaks from δ L =0 is due to the light shift. Interferometer fringes obtained at 94(4) μK for rectangular pulses (empty circles) and the optimized GRAPE sequence (filled circles). The GRAPE sequence improved the contrast of the fringes by a factor of 2.8 (6). The average or, 'effective' Rabi frequency was approximately 420 kHz, and was determined empirically from the optimal duration of a rectangular π pulse. We attribute the slight deviation of the fringes from a sinusoidal form to a small nonlinearity in the response of the I&Q modulator.
To demonstrate the potential of this GRAPE pulse for augmenting LMT beamsplitters, we compare its performance with the WALTZ composite pulse, the best composite Raman pulse previously used for LMT interferometry [14], and the standard rectangular π pulse over a range of detunings δ.
In a sub-Doppler cooled cloud of 35μK, GRAPE and WALTZ pulses achieve close to 99.8(3)% and 96(2)% transfer respectively about the light-shifted resonance, while a rectangular π-pulse achieves just 75(3)%. This is shown in figure 7, where the broadband nature of the GRAPE pulse is evident, while the fidelity of the WALTZ pulse drops below 90% when detuned 100kHz from resonance, the GRAPE pulse can be detuned by 380kHz for the same fidelity. This broad spectral profile is a signature of a good LMT augmentation pulse, which must work equally well for atoms that have received a large number of recoil kicks [14].
In a cloud with a temperature much closer to the Doppler cooling limit ∼150μK, for which the peak transfer of a rectangular π-pulse is just 54(2)%, the broader spectral width of the WALTZ and GRAPE pulses results in more efficient state transfer on resonance. This is also shown in figure 7, where the GRAPE pulse achieves 89(4)% transfer on resonance compared with 81(4)% for WALTZ.
Results: Mach-Zehnder interferometer
Using fidelities for optimal beamsplitter and mirror pulses ( 1 and 2 ) we optimized all three pulses of the Mach-Zehnder interferometer sequence for an atomic sample with a temperature of 120 μK and a coupling strength variation of±10% Ω eff . The resulting pulse profiles are shown in figures 1(b), (c). The phase sequence of the final pulse was taken to be the inverted and time-reversed profile of the first according to the design procedure outlined in section 3. As illustrated in figure 2, we expect our optimal Mach-Zehnder sequence of pulses is capable of maintaining a higher contrast than conventional rectangular pulses despite significant variations in detuning and Rabi-frequency in the atomic cloud.
We have started to test these three-pulse interferometer sequences in our experimental apparatus, comparing the performance with that obtained using a sequence of conventional rectangular π/2 and π pulses. Interferometer fringes are obtained by scanning the phase offset f bs applied to the final pulse in the sequence. The dwell time between pulses is limited, at present, to 100μs by suspected phase noise between the counter-propagating beams that also limits the overall contrast. The initial results are promising, and the relative improvement in contrast provided by the GRAPEoptimized sequence as the cloud temperature is varied is shown in figure 8. GRAPE improved the contrast of the fringes at all temperatures investigated and, in a hot 94(4) μK sample where the contrast loss is dominated by atomic temperature and not the laser phase noise, nearly a threefold enhancement is observed, the fringes for which are shown in figure 9. This enhancement is also apparent in the uncertainties in the phases of the fitted sinusoids. At 94(4) μK the average uncertainty in the fitted phase due to the GRAPE sequence was a factor of 2.9 smaller than the uncertainty in the fitted phase from the rectangular pulse sequence.
Applying the 'flip-reverse' operation to obtain the final beamsplitter is necessary to observe an increase in contrast with GRAPE. For example, at a temperature of ∼25 μK, the contrast following the full antisymmetric 'flip-reverse' sequence depicted in figure 5 was higher by a factor of 2 than the contrast following a sequence which was in all respects identical but where the flip reversal procedure was not applied to the final beamsplitter. This sequence leads to a large variation of the interferometer phase with detuning (shown in figure 3), thereby causing atoms with different velocities to exit the interferometer with different phases and the show the shifts in the fringe offset 2 and phase from their respective mean values. The GRAPE interferometer has consistently higher contrast than the rectangular counterpart, and exhibits less variation in both fringe offset and phase. Temperature error bars are omitted for clarity, but are the same as in figure 8. The vertical error bars represent the fitting uncertainty of the sinusoidal curves to each experimental run; for the contrast and fringe offset measurements these have magnitudes ∼±0.005, while the phase becomes more uncertain with reduced contrast. Consequently, at 25μK and 94μK the phase variation errors for GRAPE pulses are ∼±0.04rad and ∼±0.1rad respectively, while those for rectangular pulses are ∼±0.07rad and ∼±0.3rad. Data points for several experimental runs at each temperature are shown, and their vertical spread gives indication of the variation of each parameter over longer timescales measured in tens of minutes.
interference to wash out when the signal is averaged over the ensemble.
Some experimental evidence suggests that GRAPE sequences are less susceptible to drifts in offset and phase of the interferometer fringes. The variation of the fringe offsets and phases from their respective means for a range of temperatures for GRAPE and rectangular interferometers are shown in figures 10(b) and (c), and we hope to explore this aspect more systematically in future work. In particular, there appears to be a systematic shift in the interferometer phase as the temperature is increased that we do not fully understand, but could possibly be caused by an offset of the laser detuning from the centre of the atomic momentum distribution [66]. It is notable that the GRAPE interferometer appears less susceptible to this shift.
Another route of inquiry will be to explain why the increase in contrast is quite so large only when employing a fully optimised pulse sequence. While the mirror pulse, with its increased Doppler sensitivity, should be the dominant source of contrast loss in a Mach-Zehnder interferometer [49], only a slight enhancement was observed when just this pulse was replaced. To see significant improvement from a fully optimized sequence, maintaining the overall antisymmetry in a 'flip-reversed' configuration proved necessary. When this constraint was met, the contrast improvement far exceeded that of replacing just the mirror, or indeed the beamsplitters, in isolation.
Conclusion
We have presented the design and experimental implementation of optimal Raman pulses that obtain significant improvements in fidelity compared with conventional pulses. We have used optimal control to design Raman pulses that achieve robust population inversion in the presence of variations in the atom-laser coupling strength and detuning, designed to improve the contrast of LMT interferometers. We have also outlined and demonstrated a design for optimal pulses that improve the contrast of the three-pulse Mach-Zehnder sequence. We expect such optimal pulse sequences to have applications in improving the sensitivity and robustness of atom interferometric sensors operating in non-ideal environments, relaxing the requirement for low atomic temperatures and potentially reducing their susceptibility to drifts in the signal phase and offset.
Specifically, we have presented a 'point-to-point' inversion pulse that achieves 99.8(3)% transfer between hyperfine ground states in a thermal cloud of 85 Rb atoms. Our pulse has a broad spectral profile making it a good candidate for an augmentation pulse in LMT interferometry where a large velocity acceptance is required. The best results of Raman LMT interferometers to date have been achieved with adiabatic augmentation pulses [26,27], but optimal control pulses such as this have the potential to realize similarly robust transfer with significant reductions in pulse duration [39,67]. Furthermore, we have detailed a strategy for optimizing 3-pulse Mach-Zehnder type interferometer sequences in which optimized beamsplitter and mirror pulses are combined in a 'flip-reversed' configuration to maximize the contrast of interferometer fringes where the phase of atomic superpositions is important. We have shown up to a threefold contrast improvement in a proof-of-principle interferometer with hot 94(4) μK atoms, although our current investigations have been limited by experimental phase noise. We emphasize that although our interferometer dwell time is limited to 100 μs at present, our technique can mitigate errors in coupling strength and detuning that are often present in more sensitive interferometers, for example those which increase the scale factor by increasing the momentum splitting of the diffracting wave packets [14,26].
Future work will extend our experimental study of optimized interferometer sequences to test their efficacy and robustness when experimental noise is no longer such a limiting constraint. The excellent agreement between experiment and theory over the timescale of a single pulse suggests that, at present, the experiment is not limited by the mapping of the phase modulation waveforms and that the phase noise is only significant over longer timescales. We therefore suspect lower frequency, mechanical noise to be the dominant factor and are working to minimize this.
We are also exploring alternative fidelity measures and ways to optimize all pulses concurrently within interferometer sequences, along with considering-both computationally and analytically-how to use the framework of optimal control to engineer robustness to other factors such as laser phase noise in atom interferometers. | 8,692.6 | 2020-03-23T00:00:00.000 | [
"Physics"
] |
A Group Theoretic Analysis of Mutual Interactions of Heat and Mass Transfer in a Thermally Slip Semi-Infinite Domain
: Group theoretic analysis is performed to get a new Lie group of transformations for non-linear differential systems constructed against mass and heat transfer in the thermally magnetized non-Newtonian fluid flow towards a heated stretched porous surface. The energy equation is used with additional effects, namely heat sink and heat source. The chemical reaction is also considered by the use of the concentration equation. The symmetry analysis helps us in numerical computations of surface quantities for (i) permeable and non-permeable surfaces, (ii) thermal slip and non-thermal slip flows, (iii) magnetized and non-magnetized flows, (iv) chemically reactive and non-reactive flows. For all these cases, the concerned emerging partial differential system is transformed into a reduced ordinary differential system and later solved numerically by using the shooting method along with the Runge-Kutta scheme. The observations are debated graphically, and numerical values are reported in tabular forms. It is noticed that the heat transfer rate increases for both the thermal slip and non-slip cases. The skin friction coefficient declines towards the Weissenberg number in the magnetized field.
Introduction
Materials processing, power generation, transportation, civil infrastructure, food production, automobiles, and hydroelectric power plants, to mention just a few, are the dominant parts of fluid mechanics applications.Owning such a vital role in fluid science, mathematicians, oceanographers, geologists, biologists, atmospheric scientists, physicists, and engineers pay attention to fluid flow fields.For example, the heat transmission characteristics of a visco-elastic fluid flow by way of a stretched surface carrying a heat sink and heat source were examined using mathematical analysis by Vajravelu and Rollins [1].In terms of Kummer's and parabolic cylinder functions, solutions for heat transfer rate and temperature towards the Prandtl number were obtained.For small Prandtl numbers, it was demonstrated that there was no boundary layer in the solution.A mathematical analysis of heat and momentum for flow towards a stretched surface was performed by Andersson et al. [2].Accurate similarity transformations were carried out to diminish the flow of time-dependent equations to ordinary differential equations.For certain iterations of an unsteadiness parameter and Prandtl number, the resulting problem was numerically solved.The temperature grew monotonically towards the ambient field.The visco-elastic heat and flow properties subject to the porous stretched medium were considered by Abel et al. [3].Temperature-dependent viscosity was considered.Such assumption results in non-linear equations.Such involved equations were solved numerically.The impact of visco-elastic permeability parameters and fluid viscosity towards numerous conditions were investigated for two separate scenarios, namely heat flux (PHF) and prescribed surface temperature (PST).In this examination, the most noteworthy discovery was that the skin friction admits an inverse relation towards the permeability parameter.The magnetized heat transmission over a stretched sheet in the visco-elastic fluid was investigated by Zakaria [4].The successive approximation approach was used to get the solution of flow equations.On the temperature and velocity, the impacts of relaxation time parameter, Prandtl number, Alfven velocity, surface mass transfer, and elastic velocity coefficients were debated graphically.The heat and flow aspects of a second-grade liquid towards a stretched surface were investigated by Cortell [5].The order reduction of the involved flow equations was conducted via similarity transformations.The energy equation with viscous dissipation was considered, and the variation in both temperature gradient and temperature was investigated.Abel and Mahesha [6] investigated magnetized a visco-elastic liquid towards a sheet with the assumptions of a heat source that is non-uniform with thermal radiation.The thermal conductivity was believed to change with temperature in a linear fashion.The primary flow equations were partial differential equations (PDEs), which were later reduced as ordinary differential equations (ODEs) by using the appropriate transformations.The regular perturbation scheme was used to solve the altered equations.The efficient shooting method also yielded a numerical solution, which agrees well with the analytical answer.In several plots, the various flow variables that determine temperature profiles, such as heat sink/source, thermal radiation, and visco-elastic parameters, Eckert, Prandtl, and Chandrasekhar numbers, were shown, and the rate of heat transfer coefficients was also measured against these parameters.By considering heat sink/source, viscous dissipation, and magnetic field, an analysis was performed to examine the convective flow having an exponentially stretched plate and temperature regime.The extremely non-linear momentum and energy equations have approximate analytical similarity solutions.The current findings admitted great accord with the existing work on a variety of special instances.For various values of the governing parameters, heat transfer rate and temperature regimes were examined.On the directional flow coordinate, numerical solutions were derived towards exponentially stretched velocity.The implications of numerous physical parameters on dimensionless heat transfer characteristics, such as Prandtl number, Hartman number, and Grashof number, were thoroughly examined.It was discovered that increasing the Prandtl number lowers the drag faced by fluid while increasing the magnetic field strength raises the local Nusselt number.By considering heat transfer, volumetric heating, magnetic field, and a variety of other factors, the flow of a visco-elastic fluid over a porous stretched surface was considered by Pal [7].It was demonstrated that their solutions do not appear to have emerged earlier and can be developed by making careful picks of the involved functions.On a stretching sheet, the pseudoplastic flow field was investigated by Ashrafi and Meysam [8].The flow equations were converted into lower-order equations via transformations.After that, the system was numerically integrated with the RK scheme.The impact of various flow variables, such as heat transfer coefficient, apparent viscosity, temperature, velocity, towards Prandtl number, pseudo-plasticity index, and unsteadiness parameter, were studied in depth.The stretched surface was used by Zhang et al. [9] to explore heat transfer aspects of Oldroyd-B fluid with suspended nanoparticles.Here, with Ag and Cu, the polyvinyl alcohol is treated as a base fluid.The stretched sheet velocity and temperature were supposed to vary.The flow equations were first constructed, and the similarity transformation was used to convert them to ODEs.The homotopy analysis method (HAM) was used to derive the analytical answers, which exhibit good agreement with earlier results.For analyzing the heat transmission of fractional visco-elastic magnetohydrodynamics (MHD) fluid flow field, the Lie group was introduced by Chen et al. [10].The Grünwald scheme approximation was used to minimize and solve fractional equations conjectured with Riemann-Liouville operators numerically.The results reveal that the wall stretching exponent, fractional derivative, and magnetic field all have a significant impact on skin friction and heat conductivity.For high fractional-order derivatives, visco-elastic fluids move faster and do not stick near the outer flow.As the magnetic field parameter is increased, skin friction increases, while heat transmission decreases.Refs.[11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30] can be used to examine past and recent trends.Besides this, heat transmission and transport phenomena in porous media are significant processes in many technical applications, including heat pipe technology, chemical catalytic reactors, electronic cooling, pack-sphere bed, and heat exchangers, to name a few.Heat transport in the porous fibrous medium is a well-known and complicated subject that has received a lot of attention.Many drying and heating uses of such materials require a detailed grasp of this challenge.Similarly, several textile materials and, more recently, ovens that are utilized as heat barrier materials are being investigated.As a result, due to its ever-increasing applications in industries and modern technology, the study of flow and heat transfer in porous media exposed to non-Newtonian and Newtonian fluid flows has gotten a lot of attention [31][32][33][34][35].
By considering mathematical formulations, their solutions, and the motivations carried by the researchers reported above, we propose a new symmetry transformation for non-Newtonian thermally magnetized flow subject to both non-permeable and permeable sheets.The effort in design is explained in the various sections.In Section 1, the limited literature survey on mathematical formulation subject to fluid flow fields is reported.In Section 2, the flow formulation is disclosed.The procedure to obtain a scaling transformation is debated in Section 3. Section 4 is devoted to sharing the procedure adopted for numerical solutions.The observations are debated in Section 5 with the parallel graphical outcomes.The important results are itemized in Section 6.We conclude that the new scaling group of transformations will help researchers implement such exercise on their problems for batter narration of flow problems by using symmetry analysis with mathematical modeling.
Flow Formulation
The flow of Williamson fluid (WF) is considered over a heated flat surface.The surface was taken as porous, and an external magnetic field was applied perpendicular to the flow.The heat transfer in fluid with heat source/sink effect was entertained, and concentration aspects of fluid were taken in the presence of a first-order chemical reaction.At the surface, both temperature and velocity slip effects were also assumed.The physical model of the flow problem is given in Figure 1.The present physical model was studied by using the continuity equation, Cauchy momentum equation, energy equation, and concentration equation.The said ultimate equations are: To obtain a system of differentials for our problem, we needed the particular rheology of the Williamson fluid model [36], which could be assessed using the following constitutive relation for the Williamson fluid model: Ω is given as: . To obtain a system of differentials for our problem, we needed the particular rheology of the Williamson fluid model [36], which could be assessed using the following constitutive relation for the Williamson fluid model: where is given as: For the present problem we considered 0 and 1. gives: Carrying the binomial expansion in Equation (3), we get: ( ) The Williamson fluid velocity . Further, by considering assumptions of steady flow, the external magnetic field effect, heat sink/source effect, chemical reaction effect, and using the Williamson fluid constitutive relation (Equation ( 5)) into Equation (1), resulted in the following coupled non-linear flow narrating differential equations for our physical problem: For the present problem we considered µ ∞ = 0 and Γ .
Ω < 1.Hence, Equation (2) gives: Carrying the binomial expansion in Equation (3), we get: The Williamson fluid velocity U 1 is taken along the X 1 -axis and U 2 is towards X 2 -axis.Further, by considering assumptions of steady flow, the external magnetic field effect, heat sink/source effect, chemical reaction effect, and using the Williamson fluid constitutive relation (Equation ( 5)) into Equation (1), resulted in the following coupled non-linear flow narrating differential equations for our physical problem: along with boundary constraints: Further, we used: and obtained: ∂U ∂X while the corresponding equations are: The partial differential Equations ( 12) to ( 15) form a structure.These equations are strongly non-linear and coupled.So far, finding an exact solution was not possible.Therefore, we reduced the independent variables by using single parameter transformations for examination.Rather than continuing with transformations accessible in the literature, we found some specifically for the current problem.We needed the stream function relation: Equations ( 12)-( 16) under Equation ( 17) gets the forms: the concerned endpoint conditions are:
Symmetry Exploration
The Equations ( 6)-( 9) are a series of partial differential equations that are strongly non-linear and coupled.Our aim was to decompose the equations into independent variables.For this, we needed a series of scaling transformations.As a result, the following one-parameter transformation category can be considered: (23) here, the old coordinates (C, T, Γ, Ψ, Y, X) were replaced with (C * , T * , Γ * , Ψ * , Y 1 , X 1 ) due to Equation (23).In this case, the implementation results in: carrying invariant condition for Equations ( 24)-( 26) under G s , we get: further (boundary conditions) BCs give λ 4 = 0, and λ 6 = 0. Equation ( 27) results in: Equation ( 23) under Equation ( 28) offers: Equation ( 29) with the use of Taylor's expansion around ε = 0 up-to O(ε), we get: Therefore, we have: The use of Equation (31) results in: Equations ( 19)- (22) under Equation (32) gives: while the reduced endpoint conditions are: The Williamson fluid fills a semi-infinite domain.As a result, the Sherwood number (ShD), Skin Friction Coefficient (Skin-FC), and Nusselt number (Nm) are among the surface quantities.These surface quantities have the following mathematical expression: the corresponding dimensionless forms can be written as: and the flow parameters are given as: where Wb, Mg, Pr, Hr, Sc, Vp, Ts, Pm, and Rs denote the Wessinberg number, magnetic field parameter, Prandtl number, heat generation parameter, Schmidt number, velocity slip parameter, thermal slip parameter, porosity parameter, and chemical reaction parameter, respectively.
Solution Procedure
We took into account the Williamson fluid flow towards a stretched porous surface in this paper.In the existence of heat absorption, heat generation, and chemically reactive species, the slip flow field was thermally magnetized.The assumptions are mathematically represented as PDEs.The use of a Williamson fluid with many physical special effects resulted in highly non-linear PDEs, making an exact solution unattainable at this time.When an exact answer appears to be impossible, we always look for a numerical solution [37,38].Researchers in this subject, in particular, used a variety of approaches to report numerical solutions.The majority of methods for investigators seeking a numerical solution are by transforming the PDEs into ODEs, which are then solved by the numerical scheme.It is worth noting that the conversion from PDEs to ODEs is accomplished through a specific set of transformations [39,40].Many scholars consider such transformations straight from the literature rather than obtaining specific transformations of the flow problem.This does not allow for a better narration of the flow fields.In our situation, we used symmetry analysis to provide transformations for equations.We created coupled ODEs using these transformations and solved them using a shooting method combined with the RK arrangement.It is worth noting that we employed self-coding to implement the shooting approach by converting Equations ( 33)-( 36) into an initial value problem.The reduced initial value problem was then transformed into a system of seven first-order differential equations, which were solved by selecting three missing conditions as appropriate starting estimates.The computed solution converges if the boundary residuals are less than the tolerance error 10 −6 .If the calculated results do not meet this criterion, the starting estimates are adjusted using Newton's method, and the operation is repeated until the solution meets the chosen convergence threshold.Further, ShD, Nm, Skin-FC, WF concentration, WF velocity, and WF temperature are considered as the quantities of interest.Here, we have eight differential flow variables, namely Wb, Vp, Mg, Pm, Ts, Hs, Pr, and Hr.The effects of such flow variables are inspected on WF temperature, WF velocity, WF concentration, Skin-FC, Nm, and ShD.The range of flow variables was chosen in such a way that the method's convergence and stability are maintained.The final observations in this direction are shared by using line graphs and tables.The ShD, Nm, and Skin-FC were the surface quantities of concern.These quantities were estimated by taking into account a variety of physical frames.The variation in Skin-FC for non-slip and slip flows is shown in Table 1 with positive Wb values.When Wb = 0.1, 0.2, 0.3, and 0.4 were increased for Vp = 0, the Skin-FC decreased intensely.Furthermore, the Skin-FC was found to be a diminishing function of Wb when Vp = 0.5.For iteration in porosity parameter, Table 2 provides the Skin-FC outcomes for both magnetic and non-magnetized cases.When Mg = 0, the Skin-FC was found to be rising as a function of Pm in an absolute sense.Furthermore, when Pm = 0.1, 0.2, 0.3, and 0.4 were increased for Mg = 0.5, the Skin-FC displayed inciting values.The Skin-FC was found to be an increasing function of Pm for both fields.Skin-FC numerical values for non-permeable and permeable sheets are listed in Table 3.In this scenario, we looked at the magnetized flow on a case-by-case basis, namely permeable surface (Pm = 0) and non-permeable surface (Pm = 0.5).When we increased Wb = 0.1, 0.2, 0.3, and 0.4 for Pm = 0, the Skin-FC decreased considerably.Furthermore, at Pm = 0.5, Skin-FC was found to be a decreasing function of Wm.Physically, the resistance encountered by particles decreased as Wb grew.
Results Analysis
The WF was fitted above the permeable magnetized surface.In the existence of heat generation and absorption special effects, the thermal flow regime was carried out.Velocity and thermal slip were also taken into account.Consideration of the concentration equation, as well as the chemical reaction, adds to the novelty.The problem was investigated numerically.In detail, the impact of Pm, Mg, Vp, and Wb were investigated on WF velocity and shown with the help of Figures 2-5, respectively.In detail, the effect of Pm = 0.0, 0.3, 0.6 on WF velocity was investigated, and observations in this way are shown in Figure 2. It is worth noting that when Pm = 0.0, the fluid flow is across a non-permeable surface, and the velocity strength of WF is much higher than when Pm = 0.3 and Pm = 0.6.Overall, it is seen that velocity declines as Pm increases.The porosity parameter possesses an inverse relation towards permeability of the porous medium; therefore, increases in the porosity parameter cause a decline in the stretching rate of the flat surface.Since we considered the no-slip condition, a reduction in stretching rate resulted in lower values of the velocity of WF. Figure 3 offers the variations Mg = 0.0, 0.5, and 0.9, which are all positive values.Here, Mg = 0.0 denotes the absence of a magnetic field, and velocity has a large value because there is no Lorentz force.The WF velocity is retarded for Mg = 0.5 and Mg = 0.9 as compared to Mg = 0.0.When Mg = 0.9 was compared to Mg = 0.0 and 0.5, the fall in WF velocity magnitude was greater.It has been noticed that as Mg is increased, the WF velocity decreases.One should note that the non-zero value of the magnetic field parameter gives birth to Lorentz forces.Here, we simulated Mg = 0.0, 0.5, and 0.9.The positive values 0.5 and 0.9 enhance the strength of the Lorentz force, and due to the resistive nature of the Lorentz force, the WF particles met higher resistance; this led to a decline in WF velocity.For Vp = 0.0, 0.2, and 0.4, the velocity distributions subject to the permeable magnetic surface are shown in Figure 4. We established non-slip Williamson fluid flow with Vp = 0, and the WF velocity magnitude was larger than with Vp = 0.2 and 0.4.We discovered that WF velocity over a magnetic surface decreased as Vp increased.Variations in Wb = 0.1, 0.3, and 0.7 are shown in Figure 5 It is worth noting that when Pm = 0.0, the fluid flow is across a non-permeable surf and the velocity strength of WF is much higher than when Pm = 0.3 and Pm = 0.6.Ove it is seen that velocity declines as Pm increases.The porosity parameter possesses an verse relation towards permeability of the porous medium; therefore, increases in the rosity parameter cause a decline in the stretching rate of the flat surface.Since we con ered the no-slip condition, a reduction in stretching rate resulted in lower values of velocity of WF. Figure 3 offers the variations Mg = 0.0, 0.5, and 0.9, which are all posi values.Here, Mg = 0.0 denotes the absence of a magnetic field, and velocity has a la value because there is no Lorentz force.The WF velocity is retarded for Mg = 0.5 and = 0.9 as compared to Mg = 0.0.When Mg = 0.9 was compared to Mg = 0.0 and 0.5, the in WF velocity magnitude was greater.It has been noticed that as Mg is increased, the velocity decreases.One should note that the non-zero value of the magnetic field par eter gives birth to Lorentz forces.Here, we simulated Mg = 0.0, 0.5, and 0.9.The posi values 0.5 and 0.9 enhance the strength of the Lorentz force, and due to the resistive nat of the Lorentz force, the WF particles met higher resistance; this led to a decline in velocity.For Vp = 0.0, 0.2, and 0.4, the velocity distributions subject to the permeable m netic surface are shown in For both magnetized and non-magnetized situations, Table 4 was used to calculate the Skin-FC changes towards Wb.To begin, we assumed that the WF fluid was positioned above a non-magnetic (Mg = 0) permeable surface.When Mg was set to 0, we saw a drop in Skin-FC in absolute terms as Wb was increased.For magnetized WF flow across a stretched surface, the Skin-FC was also determined.Therefore, for Mg = 0.5, we discovered that Skin-FC decreased the function of Wb.In terms of physics, Skin-FC implied that the surface exerts an opposing drag force on fluid particles.As a result, increasing Wb = 0.1, 0.2, 0.3, 0.4 reduced the resistance faced by particles in both frames.Skin-FC numerical values for non-permeable and permeable sheets are presented in Table 5 for Mg = 0.1, 0.2, 0.3, and 0.4, respectively.Table 5 shows that in both non-magnetic and magnetized fields, the resistance provided by the surface increased as Mg values increased.Table 6 shows Nm variations for both magnetic and non-magnetized frames as they approach greater Pr values.When Pr = 1.1, 1.2, 1.3, and 1.4 were increased for Mg = 0, which is the non-magnetized condition, the Nm increased considerably.Similarly, when Mg = 0.5, Nm rose in response to greater Pr values.For both magnetized and non-magnetized situations, Table 4 was used to calculate the Skin-FC changes towards Wb.To begin, we assumed that the WF fluid was positioned above a non-magnetic (Mg = 0) permeable surface.When Mg was set to 0, we saw a drop in Skin-FC in absolute terms as Wb was increased.For magnetized WF flow across a stretched surface, the Skin-FC was also determined.Therefore, for Mg = 0.5, we discovered that Skin-FC decreased the function of Wb.In terms of physics, Skin-FC implied that the surface exerts an opposing drag force on fluid particles.As a result, increasing Wb = 0.1, 0.2, 0.3, 0.4 reduced the resistance faced by particles in both frames.Skin-FC numerical values for non-permeable and permeable sheets are presented in Table 5 for Mg = 0.1, 0.2, 0.3, and 0.4, respectively.Table 5 shows that in both non-magnetic and magnetized fields, the resistance provided by the surface increased as Mg values increased.Table 6 shows Nm variations for both magnetic and non-magnetized frames as they approach greater Pr values.When Pr = 1.1, 1.2, 1.3, and 1.4 were increased for Mg = 0, which is the non-magnetized condition, the Nm increased considerably.Similarly, when Mg = 0.5, Nm rose in response to greater Pr values.The WF temperature dependency on Pr, Hs, Ts, and Hr was examined and offered in terms of line graphs, as shown in Figures 6-9, respectively.To be more specific, the influence of Pr on temperature is depicted in Figure 6.The WF temperature dependency on Pr, Hs, Ts, and Hr was examined and offered in terms of line graphs, as shown in Figures 6-9, respectively.To be more specific, the influence of Pr on temperature is depicted in Figure 6.The WF temperature dependency on Pr, Hs, Ts, and Hr was examined and offered in terms of line graphs, as shown in Figures 6-9, respectively.To be more specific, the influence of Pr on temperature is depicted in Figure 6.In this case, the positive variation in Pr = 1.5,1.8,and 2.1 was taken into account.We have seen that when we have such a higher iteration, the temperature drops.Because Pr has an inverse relation with the fluid thermal diffusivity, we witnessed a decline in thermal diffusivity and, as a result, a considerable drop in WF temperature when we iterated Pr = 1.5, 1.8, and 2.1.The effect of Hs on WF temperature is depicted in Figure 7. WF temperature decreased when we iterated Hs = 0.0, −0.3, and −0.5.Iterations with Hs = 0.0, −0.3, and −0.5 caused energy loss and a drop in the overall temperature.The influence of the thermal slip parameter on WF temperature is offered in Figure 8. Ts = 0.0, 0.2, and 0.3 were employed in this line graph investigation.It is worth noting that we got a non-thermal case for Ts = 0.0.In this situation, the temperature appeared to be higher than Ts = 0.2 and Ts=0.3.Overall, the WF temperature appeared to be falling as Ts increased.Figure 9 depicts the impact of heat generation on the thermal flow regime of a WF fluid towards a In this case, the positive variation in Pr = 1.5,1.8,and 2.1 was taken into account.We have seen that when we have such a higher iteration, the temperature drops.Because Pr has an inverse relation with the fluid thermal diffusivity, we witnessed a decline in thermal diffusivity and, as a result, a considerable drop in WF temperature when we iterated Pr = 1.5, 1.8, and 2.1.The effect of Hs on WF temperature is depicted in Figure 7. WF temperature decreased when we iterated Hs = 0.0, −0.3, and −0.5.Iterations with Hs = 0.0, −0.3, and −0.5 caused energy loss and a drop in the overall temperature.The influence of the thermal slip parameter on WF temperature is offered in Figure 8. Ts = 0.0, 0.2, and 0.3 were employed in this line graph investigation.It is worth noting that we got a non-thermal case for Ts = 0.0.In this situation, the temperature appeared to be higher than Ts = 0.2 and Ts=0.3.Overall, the WF temperature appeared to be falling as Ts increased.Figure 9 depicts the impact of heat generation on the thermal flow regime of a WF fluid towards a In this case, the positive variation in Pr = 1.5, 1.8, and 2.1 was taken into account.We have seen that when we have such a higher iteration, the temperature drops.Because Pr has an inverse relation with the fluid thermal diffusivity, we witnessed a decline in thermal diffusivity and, as a result, a considerable drop in WF temperature when we iterated Pr = 1.5, 1.8, and 2.1.The effect of Hs on WF temperature is depicted in Figure 7. WF temperature decreased when we iterated Hs = 0.0, −0.3, and −0.5.Iterations with Hs = 0.0, −0.3, and −0.5 caused energy loss and a drop in the overall temperature.The influence of the thermal slip parameter on WF temperature is offered in Figure 8. Ts = 0.0, 0.2, and 0.3 were employed in this line graph investigation.It is worth noting that we got a non-thermal case for Ts = 0.0.In this situation, the temperature appeared to be higher than Ts = 0.2 and Ts = 0.3.Overall, the WF temperature appeared to be falling as Ts increased.Figure 9 depicts the impact of heat generation on the thermal flow regime of a WF fluid towards a porous magnetic surface.The WF temperature increased dramatically when we iterated Hr = 0, 0.2, and 0.3.When Hr = 0.0, the thermal flow regime has no heat-producing impact, and the temperature of WF is smaller in magnitude than when Hr = 0.2 and 0.3 are utilized.This is because energy is generated inside the flow regime when we iterate the Hr, resulting in a rise in WF temperature.The impact of Sc and Rs is examined on WF concentration, see Figures 10 and 11.Particularly, Figure 10 deals with the impact of Sc on WF concentration.Sc was carried out in the following way; Sc = 1.8, 2.2, and 2.6.The WF concentration decreased as we iterated Sc.This effect was comparable to the Pr effect on WF temperature.As the Sc value increased, the mass diffusivity reduced, and the concentration of WF fell.The flow was considered with chemical reaction, and the resultant flow variable was the chemical reaction parameter Rs.In this direction we iterated Rs = 0.0, 0.2, and 0.4.It was found that for Rs = 0.0, the WF flow was non-reactive, and one can note that here, the concentration magnitude was higher as compared to Rs = 0.2 and Rs = 0.4.Together, the WF concentration showed declining values towards Rs = 0.0, 0.2, and 0.4.
Appl.Sci.2021, 11, x FOR PEER REVIEW 15 of 20 porous magnetic surface.The WF temperature increased dramatically when we iterated Hr = 0, 0.2, and 0.3.When Hr = 0.0, the thermal flow regime has no heat-producing impact, and the temperature of WF is smaller in magnitude than when Hr = 0.2 and 0.3 are utilized.This is because energy is generated inside the flow regime when we iterate the Hr, resulting in a rise in WF temperature.The impact of Sc and Rs is examined on WF concentration, see Figures 10 and 11.Particularly, Figure 10 deals with the impact of Sc on WF concentration.Sc was carried out in the following way; Sc = 1.8, 2.2, and 2.6.The WF concentration decreased as we iterated Sc.This effect was comparable to the Pr effect on WF temperature.As the Sc value increased, the mass diffusivity reduced, and the concentration of WF fell.The flow was considered with chemical reaction, and the resultant flow variable was the chemical reaction parameter Rs.In this direction we iterated Rs = 0.0, 0.2, and 0.4.It was found that for Rs = 0.0, the WF flow was non-reactive, and one can note that here, the concentration magnitude was higher as compared to Rs = 0.2 and Rs = 0.4.Together, the WF concentration showed declining values towards Rs = 0.0, 0.2, and 0.4.porous magnetic surface.The WF temperature increased dramatically when we iterated Hr = 0, 0.2, and 0.3.When Hr = 0.0, the thermal flow regime has no heat-producing impact, and the temperature of WF is smaller in magnitude than when Hr = 0.2 and 0.3 are utilized.This is because energy is generated inside the flow regime when we iterate the Hr, resulting in a rise in WF temperature.The impact of Sc and Rs is examined on WF concentration, see Figures 10 and 11.Particularly, Figure 10 deals with the impact of Sc on WF concentration.Sc was carried out in the following way; Sc = 1.8, 2.2, and 2.6.The WF concentration decreased as we iterated Sc.This effect was comparable to the Pr effect on WF temperature.As the Sc value increased, the mass diffusivity reduced, and the concentration of WF fell.The flow was considered with chemical reaction, and the resultant flow variable was the chemical reaction parameter Rs.In this direction we iterated Rs = 0.0, 0.2, and 0.4.It was found that for Rs = 0.0, the WF flow was non-reactive, and one can note that here, the concentration magnitude was higher as compared to Rs = 0.2 and Rs = 0.4.Together, the WF concentration showed declining values towards Rs = 0.0, 0.2, and 0.4.Tables 7 and 8 offer the Nusselt number outcomes for various parameters in different frames of reference.The Nm variations in thermal/non-thermal slip flows are reported in Table 7.For the non-thermal case (Ts = 0), we confirmed that when Pr = 1.1, 1.2, 1.3, and 1.4 increased, the Nm upturns.The Nm increased as Pr increased when Ts = 0.5.In thermal/non-thermal cases, the transfer of heat normal to the permeable surface was directly related to Pr.The transfer of heat normal to both permeable and non-permeable surfaces was apparent, see Table 8.We found that for both surfaces, the rate was an increasing function of Pr.Tables 9-11 were constructed to examine the variation in ShD towards three different frames, namely permeable and non-permeable surfaces, magnetic and non-magnetic fields, chemically reactive and non-reactive flows.The variation in ShD was observed for both non-permeable and permeable sheets (Table 9).The Pm = 0.0 implies the porous surface assumption, and for this case, we observed that when we increased Sc, the ShD increased reasonably.Together in both cases, porous and non-porous mediums, the ShD had a direct relation with Sc.Table 10 offers the ShD variation in both magnetic and non-magnetic fields towards Sc.We noticed that for both Mg = 0.0 and Mg = 0.5, the ShD showed inciting values towards Sc = 2.1, 2.2, 2.3, and 2.4.The impact of iteration in Sc on ShD for both chemically reactive and non-reactive flows was investigated and offered in terms of Table 11.In detail, Rs = 0.0 implies the case of non-reactive flow, and in this case, we noticed that ShD showed inciting values towards Sc = 2.1, 2.2, 2.3, and 2.4.Further, it can be seen that this impact was the same for chemically reactive flow.The magnitude of ShD variation was higher in the case of reactive flow.
Key Outcomes
The group-theoretic analysis was performed to determine the specific scaling transformations of the heat transfer problem, and through these specific transformations, we narrate the whole description of chemically reactive thermally magnetized Williamson fluid toward a stretched heated porous surface.Owing to the numerical solution, we arrived at the following conclusions:
•
WF velocity had inverse relations with Pm, Mg, Vp, and Wb.
•
WF temperature showed a declining nature with Ts, Pr, and Hs.
•
Higher values of Hr resulted in inciting values of temperature.
•
WF concentration was found with decreasing functions of Sc and Rs.
•
Nm showed inciting values towards Pr for thermal no-slip and slip regimes.
•
For magnetized and non-magnetized flows, ShD showed higher values of Sc.
•
In both chemically reactive and non-reactive cases, ShD increased as Sc increased.
•
Skin-FC showed declining values towards Wb in both non-magnetic and magnetic fields.
as a line graph of WF velocity.WF velocity has an inverse relationship with positive Wb values, as shown in the graph.It is worth mentioning that raising Wb reduces WF velocity, and the drop is of average scale.Positive values of Wb increased the Williamson fluid relaxation time.The higher relaxation time increased the viscosity of fluid due to which fluid flow faced higher resistance, and as a result, the WF declined.Appl.Sci.2021, 11, x FOR PEER REVIEW 10 o 0.6 on WF velocity was investigated, and observations in this way are shown in Figur
Figure 4 .
We established non-slip Williamson fluid flow w Vp = 0, and the WF velocity magnitude was larger than with Vp = 0.2 and 0.4.We disc ered that WF velocity over a magnetic surface decreased as Vp increased.Variations in = 0.1, 0.3, and 0.7 are shown in Figure 5 as a line graph of WF velocity.WF velocity has inverse relationship with positive Wb values, as shown in the graph.It is worth menti ing that raising Wb reduces WF velocity, and the drop is of average scale.Positive val of Wb increased the Williamson fluid relaxation time.The higher relaxation time increa the viscosity of fluid due to which fluid flow faced higher resistance, and as a result, WF declined.
Table 6 .Table 6 .
Nm variation towards Pr in magnetic and non-magnetic fields.Nm variation towards Pr in magnetic and non-magnetic fields.
Figure 11 .
Figure 11.Rs versus WF concentration.Tables 7 and 8 offer the Nusselt number outcomes for various parameters in different frames of reference.The Nm variations in thermal/non-thermal slip flows are reported in
Figure 11 .
Figure 11.Rs versus WF concentration.Tables 7 and 8 offer the Nusselt number outcomes for various parameters in different frames of reference.The Nm variations in thermal/non-thermal slip flows are reported in
Table 1 .
Skin-FC variation towards Wb in slip and non-slip flows.
Table 2 .
Skin-FC variation towards Pm in magnetic and non-magnetic fields.
Table 3 .
Skin-FC variation towards Wb in porous and non-porous mediums.
Table 4 .
Skin-FC variation towards Wb in magnetic and non-magnetic fields.
w F
Table 5 .
Skin -FC variation towards Mg in porous and non-porous mediums.
Table 7 .
Nm variation towards Pr in thermal and non-thermal flow fields.
Table 8 .
Nm variation towards Pr in porous and non-porous mediums.
Table 9 .
ShD variation towards Sc in porous and non-porous mediums.
Table 10 .
ShD variation towards Sc in magnetized and non-magnetized flow fields.
Table 11 .
ShD variation towards Sc in reactive and non-reactive flow fields.
•
Skin-FC at both non-permeable and permeable sheets showed a direct relation with Mg.
F Skin friction coefficient | 8,209.6 | 2022-02-14T00:00:00.000 | [
"Engineering",
"Physics",
"Mathematics"
] |
A Technique for Cluster Head Selection in Wireless Sensor Networks Using African Vultures Optimization Algorithm
INTRODUCTION: Wireless Sensor Network (WSN) has caught the interest of researchers due to the rising popularity of Internet of things(IOT) based smart products and services. In challenging environmental conditions, WSN employs a large number of nodes with limited battery power to sense and transmit data to the base station(BS). Direct data transmission to the BS uses a lot of energy in these circumstances. Selecting the CH in a clustered WSN is considered to be an NP-hard problem. OBJECTIVES: The objective of this work to provide an effective cluster head selection method that minimize the overall network energy consumption, improved throughput with the main goal of enhanced network lifetime. METHODS: In this work, a meta heuristic based cluster head selection technique is proposed that has shown an edge over the other state of the art techniques. Cluster compactness, intra-cluster distance, and residual energy are taken into account while choosing CH using multi-objective function. Once the CHs have been identified, data transfer from the CHs to the base station begins. The residual energy of the nodes is finally updated during the data transmission begins. RESULTS: An analysis of the results has been performed based on average energy consumption, total energy consumption, network lifetime and throughput using two different WSN scenarios. Also, a comparison of the performance has been made other techniques namely Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), Atom Search Optimization (ASO), Gorilla Troop Optimization (GTO), Harmony Search (HS), Wild Horse Optimization (WHO), Particle Swarm Optimization (PSO), Firefly Algorithm (FA) and Biogeography Based Optimization (BBO). The findings show that AVOA's first node dies at round 1391 in Scenario-1 and round 1342 in Scenario-2 which is due to lower energy consumption by the sensor nodes thus increasing lifespan of the WSN network. CONCLUSION: As per the findings, the proposed technique outperforms ABC, ACO, ASO, GTO, HS, WHO, PSO, FA, and BBO in terms of performance evaluation parameters and boosting the reliability of networks over the other state of art techniques.
2 communication, and storage capabilities [4]. Other limitations of sensor nodes are node failure and network failure [5]. WSNs are hypersensitive and their lifetime is vulnerable to energy depletion of sensor nodes [6]. Optimal energy consumption in WSN is necessary to increase lifetime and performance. It can be achieved by performing clustering which decreases the energy consumption and increases the scalability of the network.
Clustering divides a network into equal or unequal clusters. Each cluster has a Cluster head (CH). CHs gather local data from cluster member sensor nodes, aggregate it, and transfer it to a distant base station (BS) directly or through other CHs [7,8]. The BS is linked to the Internet. Figure 1. represents the architecture of the WSN.
Figure 1. A WSN Architecture
In clustering, the selection of CHs is critical for improving the network durability since it affects member sensor nodes energy. As discussed in [9], CH-selection is an NP-hard optimization issue. As a result, the procedure of selecting the CHs is to be carried out with the utmost care.
The rest of the paper is divided into the following sections. The related work is presented in Section 2. Section 3 defines the network and energy model. Section 4 describes the proposed cluster head selection technique using AVOA. Section 5 discusses the nine meta heuristic techniques used for comparison with the proposed technique. Section 6 shows the simulation results and finally, section 7 concludes the paper.
Related Work
The essential contributions of researchers in the field of energy efficient clustering techniques have been discussed in this section. Many studies have been conducted by the researchers in the field of energy efficient cluster heads selection using conventional techniques as well as evolutionary techniques. These techniques are discussed here: In the year 2000, Heinzelman et al. [10] proposed a technique called Low-energy adaptive clustering hierarchy (LEACH) which was a probabilistic technique that randomly selects CH in each round. LEACH attained a large energy consumption while lengthening lifetime of the network compared to static clustering method. In the year 2002, Lindsey et al. [11] proposed PEGASIS which was termed as Power-Efficient Gathering In Sensor Information Systems refers to an approach that was chain-based. PEGASIS arranged the sensor nodes (SNs) so that they formed a chain, with each SN communicating only with its immediate neighbours. In the year 2011, Liu et al. [12] proposed Genetic Algorithm based LEACH in which cluster head was selected based on optimal value of cluster head probability using Genetic-Algorithm. It gave optimal probability of nodes which could be selected as cluster heads with minimum energy consumption.
In the year 2014, Sharawi et al. [13] proposed a technique based on Bat Swarm Optimization algorithm to select optimized cluster heads by minimizing the intra-cluster compactness with minimum distance between nodes in same cluster. In the year 2015, Gupta and Sharma [14] proposed a clustering algorithm based on modified Ant Colony Optimization using residual energy as a parameter. Comparative analysis was performed taking average energy of network, number of live nodes respect to number of rounds as performance evaluation metrics. In the year 2016, Rao et al. [15] proposed an energy efficient Particle Swarm Optimization (PSO) based cluster head selection protocol. Parameters such as intra-cluster distance, residual energy and sink distance of all the CHs were used in the fitness function. In the year 2017, Sengottuvelan and Prasath [16] proposed an improved Breeding Artificial Fish Swarm Algorithm for optimal selection of Cluster head in the network. The multi objective function was based on end to end delay and energy was formulated. In the same year, an energy-efficient clustering scheme called New Chemical Reaction Optimization, proposed by Rao and Banka [17], was based on a recent variable population-based chemoinspired approach (nCRO). It considerably increased the network's lifetime. However, CHs connect directly with the BS, which could be impractical in a large-scale network.
In the year 2018, Yogarajan and Revathi [18] presented Ant Lion Optimization for Clustering (ALOC), a technique to improve the energy efficiency of the network. The fitness function that was utilized in the ALOC took into account the residual energy, the number of nodes that were close to each node, the distance that separated the nodes from one another and the distance that separated each node from the BS. In the year 2019, Ahmad et al. [19] presented an approach for CHS based on an optimization technique called Artificial Bee Colony (ABC) method. The ABC's fitness function was evaluated on the basis of three parameters intra cluster distance, sink distance and residual energy. In the same year, Dattatraya and Rao [20] introduced a CH selection scheme using Glow worm Swarm Optimization (GSO) and Fruit Fly Optimization Algorithm (FFOA) to select the best CH in WSNs. Fitness Function was designed considering energy, distance, delay and QoS as important parameters. In the same year, an energy-efficient clustering scheme called New Chemical Reaction Optimization, proposed by Rao and Banka [17], was based on a recent variable population-based chemo-inspired approach (nCRO). In the year 2020, Prahadeeshwaran and Priscilla proposed a hybrid elephant V. Kusla optimization algorithm called NIUS-HEHOA [21] to extend the lifespan of the network by selection energy balanced cluster heads. In the year 2021, Arunachalam et al. [22] introduced Squirrel Search Optimization-based Cluster Head Selection Technique (SSO-CHST) was presented for enhancing sensor network lifetime by using a gliding factor to determine cluster head selection during data aggregation and dissemination. The sensor node with the lowest fitness value was the cluster member. High-fitness sensor nodes were the possible cluster head.
From the literature, it has been deduced that choosing CHs for large-scale WSN is an NP-Hard problem, but that it can be addressed using optimization methods. Several methods for energy-aware sensor node clustering have been discussed in the literature as a result of the fact that energy efficiency plays an essential role in the WSN. It is generally agreed that one of the most essential aspects of a WSN is its ability to minimize the amount of energy that sensor nodes consumes. The researchers have focused the attention on the clustering and cluster head selection methods. Considering this as a motivation, in this work, we propose a cluster head selection technique using meta heuristic based African Vultures Optimization Algorithm (AVOA) to reduce the energy consumption of nodes in the network. The main contributions of this paper are given below: • A cluster head selection technique based on African Vultures Optimization Algorithm is proposed. • Various set of parameters are incorporated in this paper to evaluate a fitness function. • Comparison of proposed cluster head selection technique using AVOA with 09 state-of-the-art techniques in terms of average energy consumption, total energy consumption, network lifetime and throughput.
Preliminaries
In this section, network model and energy model, which are used in this paper, are discussed.
Network Model
The following are the characteristics of the WSN scenario considered in this paper. The sensors are distributed at random across the sensing field, and a node. The same methodology was used in the literature by [23]. As a result, no location-finding equipment, such as GPS, is required. After deployment, all sensor nodes are presumed to be stationary, and nodes can operate in cluster head and conventional sensor modes. Each node executes sensing on a regular basis and always has data to communicate to its CH or BS.
Energy Model
In this research, the classic Low-energy adaptive clustering hierarchy (LEACH) energy model [10,24] is used to calculate network energy consumption and exhausted energy for all network nodes. The same first order wireless communication model was used by [25]. Following assumptions have been made: • Initialize all the nodes with their attributes and establish their initial energies based on the first order radio energy model. • Node are transferring message with k bits via a distance d on symmetrical communication channels, and hence consume energy is expressed by equation (1) and equation (2). Based on the distance between the sender and receiver, the energy can be computed as follows: where ( , ) is the transmission energy consumption k bits data to a node, is the distance between sender and receiver nodes, is the energy dissipation per bit used to run the transmitter or receiver circuitry, _ is the amplifier parameter of transmission corresponding to the free space, _ is the amplifier parameter of transmission corresponding to the two ray model, 0 is the transmission distance threshold which is expressed by equation (3).
On other hand, the reception dissipation energy for message of bits for any node is expressed by the equation (4) due to running the receiver circuitry ( ).
( ) is the energy consumption in receiving k bits of data.
Proposed African Vulture Optimization Algorithm (AVOA) Based Cluster Head Selection Technique
In 2021, Abdollahzadeh et al. [26] proposed the AVOA meta-heuristic algorithm, which has since been used in a number of real-world engineering applications. Simulations and models based on the foraging behaviours and living habits of African vultures were used to develop the AVOA. Conversely, the best vulture right now is the strongest and most abundant vulture. In AOVA, vultures aim to be near the best and avoid the worst.
According to the aforementioned four norms, AOVA's problem-solving process may be broken down into five phases that mimic the actions of different vultures during the foraging phase.
Phase One: Identifying the best vulture in any group
After the formation of the initial population, the fitness of each solution is calculated, and the top and bottom performers are selected as vultures for the first and second groups, respectively. Populations are analysed comprehensively at each fitness iteration.
In equation (5), the chance that the chosen vultures will lead the others to one of the better solutions in each group is determined using 1 and 2 .
Phase Two: Vulture Hunger Rate They have high energy levels when they are full, so they can travel vast distances in quest of food, but when they're hungry, their energy levels are low and they can't walk as far as when they are full, so they become more aggressive. In order to model this phenomenon mathematically, equation (7) was applied in the process. For this reason, the rate at which vultures become full or hungry is taken into consideration while deciding whether or not to move from exploration to extraction. equation (7) has been used to describe the decline in the rate at which people become satiated.
1 is a random value between 0 and 1. When y is below 1, the vulture is starving; when it's 0, it's full.
Phase Three: Exploration
This stage examines AVOA's exploration phase. Vultures have excellent vision and can detect food and dying animals. Vultures meticulously inspect their surroundings and travel far to get food. In the AVOA, vultures can explore random areas using two distinct methodologies, and a parameter selects one. This option must be set between 0 and 1 before the search operation to determine which strategy is selected. A random integer between 0 and 1 is created when selecting a strategy in the exploration phase. Equation (9) is selected if the number is more or equal to the parameter . However, if number is smaller equation (11) is used. It is shown in equation (8).
A vulture's position vector in the following iteration is denoted by ( + 1), and its satiation rate in the present iteration is denoted by which is obtained using equation (7). ( ) is one of the best vultures selected by equation (5) in equation (10). Vultures move randomly in to guard food from other vultures. The formula = 2 × , where is a randomly generated number between 0 and 1, is used to create , which is then utilised as a coefficient vector to increase the random motion, which shifts with each iteration. The vulture's is the vector position. LB and UB show the variable bounds.
3 boosts randomness. If 3 is near to 1, similar solutions are distributed, adding a random motion to the LB.
Phase Four: Exploitation Stage-1
At this stage, the AVOA's efficiency stage is looked into. If the value | | is less than 1, the AVOA moves on to the exploitation phase. This phase also has two parts, each of EAI Endorsed Transactions on Scalable Information Systems 01 2023 -04 2023 | Volume 10 | Issue 3 | e9 A Technique for Cluster Head Selection in Wireless Sensor Networks Using African Vultures Optimization Algorithm 5 which uses a different strategy. Two parameters 2 and 3 , determine how likely it is that each strategy will be chosen in each internal phase. Parameter 2 selects first phase tactics, whereas 3 selects second phase strategies. Before running the search operation, both parameters must be set to 0 and 1.
| |between 1 and 0.5 begins the exploitation phase. During the first phase, two distinct rotational flight and siege-fight strategies are employed. 2 determines each strategy's choosing, which should be between 0 and 1 before searching operation performed. At the start of this phase, 2 is formed. If this number is larger than or equal to 2 , Siege-fight is implemented slowly. If the random number is less than 2 , rotating flight strategy is used. This procedure is shown in equation (12). where, ( ) is computed using equation (10), and is the vulture satiation rate, which is derived using equation (7). 4 is a number generated at random between 0 and 1.
Rotating Flight of Vultures
The model rotating flight pattern known as the spiral motion model is utilised frequently by vultures. With this strategy, a spiral equation is formed including every vulture and the top performer among the two. Equation (16) is used to describe the circular flight, and (17).
In (16) and (17), ( ) represents position vector of the two best vultures in the current iteration. 5 and 6 are random between 0 and 1. Equations (15) and (16) computes 1 and 2 . As a last step, the vultures' locations are revised using equation (17).
Phase Four: Exploitation Stage-2
The motions of the two vultures during the second stage of exploitation draw many vulture species to the food source, where siege and fierce competition for food occur. This phase is initiated when the | | value is less than 0.5. 3 , a value between (0,1) is generated during this stage. If 3 is more than or equal to 3 , numerous vulture species ought to congregate over the food supply. Alternately, the aggressive siege-fight strategy suggested in equation (18) is used if the generated value is lower than 3 .
Finally, all vultures are aggregated using equation (20), where 1 and 2 are from equation (19) and ( + 1) is the upcoming iteration's vulture vector. The top vultures in the first and second groups of the current iteration are respectively known as 1 ( ) and 2 ( ). The vector position of a vulture at any given moment is denoted by the symbol ( ), and the rate at which vultures become satisfied can be calculated using the equation (7). ( When | | is more than 0.5, the head vultures become starved, and they lack the strength necessary to contend with the other vultures. In order to model this motion, equation (21) is applied.
where ( ) reflects the vulture's distance from one of the two best vultures using equation (14).
Initialization and Solution Vector
To determine the ideal position of CH's in the network, a solution vector is employed to create an initial solution, which is processed using AVOA. The solution vector is composed of nodes (African Vultures) chosen at random as CHs from a total number of N sensor nodes. The position of each node is randomly assigned a node-ID from the total number of nodes i.e. N.
The dimensionality of each node agent is directly proportional to the number of CHs in the network. Assume that denotes the i th node in the network, and that the position of each node , is randomly assigned a node-ID between 1 to N. Solution encoding used for CH selection using AVOA optimization technique is illustrated in the figure below.
Fitness Function
The primary objective of proposed technique is to select energy efficient cluster heads using three parameters Intra Cluster Distance, Residual Energy, Cluster Compactness.
Since the three objectives are minimization problem in nature, so the ultimate fitness function F may be defined as a linear combination of 1 , 2 and 3 which is formulated as equation (22) : where, 1 , 2 3 are the weighted coefficients such that, 1 + 2 + 3 = 1 . The weight given are 1 =0.4, 2 = 0.3 and 3 = 0.3 respectively; 1 is Intra Cluster Distance, 2 is Residual Energy and 3 is Cluster Compactness.
Intra Cluster Distance:
The intra-cluster distance, or the distance that separates all of the sensor nodes in a cluster from the CH, is the first parameter that is evaluated for the objective function. Nodes depletes there energy when they communicate thus we must limit intra cluster distance. It means that CH may be selected to be closest to all cluster nodes. Thus, it is necessary to minimize the individual objective function 1 which is shown below in equation (23) : This function gives total distance for N number of sensor nodes for ℎ cluster heads. ℎ is the distance between cluster head and sensor nodes.
Residual Energy:
Residual energy of the cluster head is taken into consideration. The goal is to make the most use of all of the WSN's remaining energy of the sensor nodes. Since, the objective function is a minimization problem, so it is expressed as in equation (24) below: This function gives total residual energy of all the cluster heads.
Cluster Compactness:
Cluster Compactness represents measure of closer proximity among the normal nodes and CHs. It is third essential parameter that aids in generating clusters which comparably spend less energy. Cluster compactness of a node is determined as node degree divided by sum of distance to neighbour nodes. High cluster compactness value implies that the node can form compact clusters and lowest intra cluster communication cost is incurred. Hence, a node with high cluster compactness score has greater probability of getting elected as CH. It is expressed as in equation below : where, ( ) represents set of nodes at one hop distance from ℎ node. ND is node degree.
Evaluation of Fitness Function
In this stage, the formed fitness function is assessed for each location of the sensor nodes by inputting the values that are related with the solution vector (decision variable). The fitness function value for the solution vector is then represented in equation (26).
In this scenario, the fitness value of each sensor node depending on its position depicts how well they perform in terms of energy and minimal distances taken into account during the construction of the fitness function.
According to the solution vector, each sensor node in the network calculates its fitness value and arranges it in ascending order. The sensor node with the lowest fitness score will be selected as the network's cluster head. The cluster head in the following round may be chosen from among the remaining sensor nodes. This cluster head selection procedure, however, is reliant on the sensor nodes having enough energy because they must gather and transmit the data to the BS.
Meta-heuristic Techniques For Comparison
Meta heuristic algorithm, a method for solving optimization issues, begin by creating random response(s) and then advance to optimizing based on their operators and through modifying the generated random answers. In general, all meta heuristic algorithms use the similar approach to discover the optimal answer [27]. In the majority of these algorithms, the search process begins by producing one or more random solutions in a range of variables that is suitable. The primary generated solution in the population based algorithms is termed population, colony or group. and also each of solutions is named chromosome, particle, ant, and etc. [28] Then, utilizing operators and other methods of The new solution will also be selected from the pool of prior ones, and this process will carry on until the stop requirement is satisfied.
In this work, different meta heuristic techniques are used for comparative analysis with the proposed technique for CH selection i.e. based on African Vulture Optimization Algorithm. Meta-heuristic techniques used in this work are Artificial Bee Colony Optimization (ABC) [29], Ant Colony Optimization (ACO) [30], Atom Search Optimization (ASO) [31], Wild Horse Optimization (WHO) [32], Harmony Search Optimization (HS) [33], Gorilla Troops Optimization (GTO) [34], Firefly Algorithm (FA) [35], Particle Swarm Optimization (PSO) [36], Biogeography Based Optimization (BBO) [37]. Simulation parameters, average energy consumption, total energy consumption, network lifetime and throughput are used in this work for performance evaluation of the above mentioned techniques.
Result and Discussions
In this paper, we have proposed an energy-efficient CH selection technique based on African Vultures Optimization Algorithm (AVOA) and a fitness function considering intra-cluster distance, residual energy, and cluster compactness for the algorithm's energy efficiency. The algorithm compromises of two different scenarios namely WSN Scenario-1 for 100 nodes and WSN Scenario-2 for 200 nodes while considering same network area of 100 × 100 .
This section compares the performance of the techniques namely ABC, ACO, ASO, WHO, HS, GTO, FA, PSO and BBO based on the simulation parameters average energy consumption, total energy consumption, network lifetime and throughput. The simulation was run for 6000 rounds. However, 1000 rounds are used to evaluate the performance parameters of these techniques.
Simulation Parameters
Simulation parameters used for performance evaluation of the proposed cluster head selection technique are as follows: a) Average Energy consumption: It calculates the average energy difference between each node's original level and its remaining level [38].On a per-round basis, it is the amount of energy spent by a node to transmit the data in the WSN network. b) Total Energy Consumption: Network energy dissipation per round is a measure of how much power is used by the network's nodes [15]. c) Network Lifetime (FND): A WSN's life span is defined in this study as the number of rounds it goes through before its first node dies [39] . d) Throughput: The term 'throughput' refers to the total number of data packets that are successfully transferred to the sink [40]. Table 1. Parameter description
Total Energy Consumption Evaluation
As shown in Figure 4 and Figure 5, total energy consumption of the proposed technique with AVOA when compared with the rest of other techniques ABC [29], ACO [30], ASO [31], WHO [32], HS [33], GTO [34], FA [35], PSO [36], BBO [37] for 1000 rounds is the minimum. ABC technique consumes maximum energy while the proposed technique consumes lesser energy compared to rest of the techniques. Table 3 shows the comparative analysis in terms of total energy consumption. When the number of nodes in the network is doubled from 100 to 200 while the number of simulation rounds is kept the same, the overall energy consumption of the proposed technique AVOA is lower than that of ABC [29], ACO [30], ASO [31], WHO [32], HS [33], GTO [34], FA [35], PSO [36], BBO [37].The maximum energy consumption is by the technique with ABC.
Network Lifetime (FND)
First Node Die (FND), which specifies the first node that depletes its available energy throughout the network, is the metric that is used to determine the network lifespan in WSN. The results of comparing the network lifespan of the proposed technique AVOA with the rest of techniques with ABC [29], ACO [30], ASO [31], WHO [32], HS [33], GTO [34], FA [35], PSO [36], BBO [37] are shown in Figure 6. The AVOA has a FND of 1391 in Scenario-1 of the WSN and 1342 in Scenario-2 of the WSN, which is comparatively higher than the other techniques. It is shown that proposed technique has a higher network lifespan even if the nodes density increases from 100 to 200 nodes as compared with rest of the techniques.
Throughput
As shown in Table 4, throughput of the proposed technique with AVOA is maximum when compared with the rest of techniques with ABC [29], ACO [30], ASO [31], WHO [32], HS [33], GTO [34], FA [35], PSO [36], BBO [37]. It is observed that the proposed technique receives more data packets compare to the rest of the techniques. Also, when the node density is doubled, the proposed technique shows minimal decrease in throughput as compared with rest of the techniques.
Conclusion and Future Work
In conclusion, the lifespan of a network is an essential component of a WSN. Keeping track of how much energy is used is not an easy chore. We have proposed an energyefficient CH selection technique based on AVOA and a fitness function considering intra-cluster distance, residual energy, and cluster compactness for the algorithm's energy efficiency. Our proposed technique findings have been compared to well known techniques namely ABC, ACO, ASO, WHO, HS, GTO, FA, PSO, and BBO. This technique has been thoroughly tested with two different WSN scenarios. According to the experimental results, the suggested technique outperforms the traditional technique on the basis of average energy consumption, total energy consumption, network lifetime, and throughput. The results demonstrate that taking into consideration the aforementioned factors lead to improvements in both the average amount of energy used and the lifespan of the network. As future work, other factors effect in parallel with residual energy such as sensor range and transmission energy can be studied and evaluated. Our future research will be to create a routing algorithm utilizing a meta heuristic method. | 6,468 | 2023-01-11T00:00:00.000 | [
"Computer Science"
] |
Blockchain technology and internet of things: review, challenge and security concern
Blockchain (BC) has received high attention from many researchers recently because it has decentralization, trusted auditability, and transparency as its main properties. BC has contributed fundamentally to the development of applications like cryptocurrencies, health care, the internet of things (IoT), and so on. The IoT is envisioned to include billions of pervasive and mission-critical sensors and actuators connected to the internet. This network of smart devices is expected to generate and have access to vast amounts of information, creating unique opportunities for new applications, but significant security and privacy issues emerge concurrently because it does not contain robust security systems. BC provides many services like privacy, security, and provenance to the systems that depends on. This research includes analyzing and a comprehensive review of BC technologies. Moreover, the proposed solutions in academia with the methodologies that used to integrate blockchain with IoT are presented. Also, the types of attacks on blockchain are collected and classified. Furthermore, the main contributions and challenges that are included in the literature are explored, then the relevant recommendations for solving the explored challenges are proposed. In conclusion, the integration of BC with IoT could produce promising results in enhancing the security and privacy of IoT environment.
= ℎ ℎ ( + ) − Difficulty: It is a fractional hash value collision, and therefore, it based on the computational power of the mining node to compute the hash value that fulfills this fractional collision. The miner will modify the difficulty until reaching the fractional collision. − Transactions: They mean the transmission of data (i.e. data unit of the blockchain). − Merkle trees: It is a structure as binary tree that encapsulates the data and permits it to be tested securely and efficiently within a huge dataset. The transactions in Merkle trees are packed as shown in Figure 4 [13].
Mining
Mining is the process, which is responsible for updating the BC [11]. In a P2P network, the mem_pool is a space that is assigned in the full node memory, which saves and relays the transactions to other nodes. In order to update the status of the BC, certain nodes in the network, called the verifiers or miners, which verify the transactions and compute (cryptographic calculations that are very complicated and they need huge quantities of storage space and power) a block. The miners select certain transactions from the mem_pool that will be put in the blocks. Transactions pay a fee of mining, which can be regarded as an impulse for the miners to so that the transaction can be mined by it. Normally, priority is given by the miner to the transaction that pays a higher fee. Transactions that are not chosen by the miners remain in the mem_pool till another one chooses them for a new block. Otherwise, transactions will be discarded [14].
Chaining of blocks
When the block is filled up, it is broadcasted across the network by the sender's node and then mining must be achieved. After that, the block becomes attached concurrently to the former block to make a ledger. This process goes on for a countless number of epochs to make a never-ending series of blocks [15], as shown in Figure 5.
Addresses and address derivation
Each blockchain networks has an address, which is a short string of alphanumeric characters derived from the user's public key by a cryptographic hash function, along with certain additional information (i.e. checksums and version number) [10]. Most of the implementations of the BC make use of addresses as the "from" and "to" endpoints in a transaction. In general, addresses are shorter than the public keys and they are not secret. There is a method to generate an address, which is by creating a public key to which a cryptographic hash function is applied to it and the hash is converted to a text [10].
→ ℎ ℎ ℎ →
Each blockchain may implement a different method to derive the address. For public BC networks, which allow anonymous account creation, the user of the BC network can generate a considerable number of asymmetric-key pairs, and therefore addresses as desired. This allows a varying degree of pseudo-anonymity. Addresses could act as the public-facing identifier in the BC network for a user, and often an address will be converted into a QR code for the purpose of easier use by mobile devices [10].
Consensus algorithms
One of the key aspects of BC technology is determining which node publishes the next block. This is solved by implementing one algorithm amongst many possible consensus ones. For public BC networks, generally there are many publishing nodes, which compete at the same time to publish the next block. They usually do this to win cryptocurrency and/or transaction fees. They are generally distrusted nodes, who could recognize each other by their addresses [10].
When a node joins a BC network it should agree on the initial state of the system. This is recorded in the only pre-configured block, the genesis block (i.e. the first block in the blockchain). Every BC network has a published genesis block and every block must be added to the BC after it, based on the consensus algorithm agreed-upon. Regardless of the algorithm, however, each block must be valid and accordingly it can be validated in an independent way by each node of the BC. By combining the initial state and to verify every block, nodes can independently agree upon the current state of the BC. From the other hand, if there were two valid chains provided to a full node, the common mechanism almost all BC networks is that the 'longer' chain is regarded as the correct chain, which is to be depended; due to having been worked on more [10].
A major characteristic of the BC technology is that no third party is required to provide the status of the system. Each node in the system can test the integrity of the system. All nodes must be in a common agreement to add a new block to the BC. However, some temporary disagreement is sometimes permitted.
There are many consensus algorithms based on the BC implementations [3]. In Table 1, the basic consensus algorithms are briefly discussed. Every BC consensus attempts to achieved three important properties, which are (safety, liveness and fault tolerance) that can be implement efficiently [16]. In 1999, Markus Jakobsson proposed the PoW. Mining nodes that utilize this algorithm requires to resolve a complex-mathematical processes that is altered repeatedly and should have been decided by all the miners. The decision here depends on a common consensus. The problematic with this algorithm is that it wasted a high power of computation [9]- [11], [14]. Furthermore, it is characterized with a great latency to confirm the present transactions [3]. Proof of stake (PoS) PoS is essentially a generalized form of the PoW. The terms validators are used instead of miners, and they (the minors) are called to the nodes to confirm the transactions [3]. Unlike the PoW, the PoS does not need the mining to calculate the hash value. Instead, the next block creator is selected in a random manner. The chance of a node actually selected to build the new block is based on the stake of node [9], [10], [14], [17]. The PoS keeps great computational resources compared to the PoW [3]. Practical byzantine fault tolerance (PBFT) PBFT is used widely in the private Blockchains, when the network has a higher trust model different from the PoS and PoW. In practical byzantine fault tolerance, the network is rearranged into a cluster of active and passive replicas. A primary replica is specified from active replicas. PBFT process includes four phases: (the pre-prepare, prepare, commit and the reply phases), as shown in Figure 6. Compared to PoW and PoS, the PBFT has a greater message density [11], [14].
Blockchain platforms
There are many platforms related to the BC technology that could be used as primary approaches to construct a wide variety of software such as [18]: a. Bitcoin [19]: It was the leading BC that was theorized and applied and the cryptocurrency that works as a digital financial asset. The Bitcoin utilizes a P2P networking, public key cryptography and a PoW for the purpose of making transactions as well as verifying them. The system of the Bitcoin is programmed so that a fresh block can be created one time each ten minutes [18]. b. Ethereum [20]: It was proposed in 2013 by the developer of Bitcoin Vitalik Buterin. Ethereum [5] is an open-source, public, blockchain-based distributed computing platform featuring smart contract functionality. it utilizes the PoW as its consensus algorithm and mechanism, but it quickly switches to the PoS. The basic building of the proof of work algorithms is that the Ethereum currently used is a memory hard hashing algorithm called Dagger-Hashimoto. The time for creating the block is considerably less compared to the time consumed in several other systems and it is about 12 seconds. As the time of creating the lower block results in a higher rate of stale blocks, therefore the system uses the GHOST protocol to take the heaviest computational chain as the main blockchain. The heaviest chain in this case includes the stale blocks too [18]. As the main platform is a network, which is free for all, a software can be downloaded by anybody or use it in his/her computer. The incentive mechanism for running the software is represented by to getting ether (i.e. a digital currency). While the main platform of Ethereum is a free BC network, the software is an open-source and it permits the software developers to make the network a private one, where nodes participating are only those which were given permission [20]. The most important advantage, however, is the fact that they can be merged-mined with bitcoin and thereby making rootstock as secure as possible [18]. d. Hyperledger fabric [6]: it is the implementation of the private (permissioned) blockchain technology that is employed as a foundation for developing the BC applications that are hosted via the Linux for a wide variety of industries. Therefore, its architecture is modular, which allows components such as consensus and membership services to be plug-and-play. It leverages the container technology (docker) to enable smart contracts called "chaincode", which comprises the system application logic. The Hyperledger fabric is an open-source distributed ledger software, which is built and maintained by the Hyperledger community. It is a collaborative effort aimed at developing the cross-industry blockchain technologies [20]. Moreover, it is uses the PBFT as a consensus algorithm instead of the PoW algorithm. PBFT can process thousands of requests every second with a latency of increase of less than a millisecond [18].
Types
Blockchain could be classify to three main types. They are public blockchain, private blockchain and consortium blockchain. The details of these types are listed in Table 2 [11]. Decentralized ledger platforms can be used by anyone to broadcasting blocks in it without the need to obtaining the approval from the authority site. As permissionless blockchain is exposed to any participant, a hacker may try to broadcast blocks in a fashion that disrupts the system. To inhibit this, the permissionless blockchain repeatedly use a consensus algorithm [10] Private Blockchain (Permissioned) In the private blockchain, users who broadcast the blocks need be certified through a certain authority site [10]. Private organizations could use this type of Blockchains [4] Consortium Blockchain The greatest consortium blockchains are semi-decentralized. More than one party can access the blockchain instead of only one part by means of regulating protocol [4]
Pros and cons of the blockchain
Although blockchain technologies have many important characteristics such as decentralization [3], validity [16], transparency [3], anonymity and identity, redundancy [3], auditability [16] and immutability [3], it has many pros and cons, which are explained as Through the nature of the blockchain design, the pros that are contracted by implementing a blockchain solution are [3]: a. Distributed: The BC has many nodes that are distributed over the world (data availability). e.g., Ethereum. b. Transparency: Data are distributed on a public manner and other concerned node and manager can easily get it. c. Security: It is a main subject of the digital world of nowadays. The certified documents and the transactions are executed and constantly stored in the blocks, which cannot be altered or deleted by anyone (data integrity). d. Trust: Participants in this blockchain are the ones who decide the transactions to be added before inserting them in the blockchain. So, trust becomes higher in terms of altering, writing or even reading the information. e. Efficiency: In the blockchain technology, efficiency of a network can be improved when the financial groups collaborate. f. Resilience: If a huge number of nodes, the strength of information is improved with extended life.
On the other hand, the blockchain has some cons which are [3]: a. Block size: In the blockchain, each block that is inserted in to the blockchain increases the size of the database. b. Speed and cost of the network: It is hard to manage all the nodes in a BC once the node numbers become higher. c. Wasteful: Every node has to continue the consensus alg., which provides the mistake tolerance and ensures zero interruption. Then, they are totally wasteful since every node tracks the relevant task to grasp consensus. So, the chance of calculating a valid nonce rise according to the computing power that the faster computers have. d. Standards: Because blockchain is at its initial age, there are no definite standards. e. Performance: Compared to the centralized database, the blockchain is slower, because the blockchain performs all the operations of a traditional database system in addition to various additional loads like. Consensus algorithm, signature confirmation and others when it executes the transaction. In order to mitigate the drawbacks mentioned above, efforts were made to improve the protocol speed and the efficiency with a special care to the algorithms associated with the limited access and consensus algorithm [21].
Challenges
Based on the literature, the blockchain technology encountered many challenges. Some of these challenges are related to security, attacks, power consumption and so on [3]. The most common challenges of blockchain with their details are presented in Table 3. Currently, criminal activities are growing in BCs. As for bitcoin, customers are provided with addresses, which are not related to users' identities. So, the events that are made via bitcoin is greatly hard to be tracked. Attacks Many malicious users and attackers attempt to attack a node or a network by conducting several hacks, as we discussed in the sub section 2.7.
Standardization
Increasing the number of nodes from dissimilar networks, in the blockchain no standard to permit the customers to cooperate. The absence of standardization permits developers or coders to prepare everything as they hope and this makes issues for the IT. Environmental cost Implementation the blockchain technology in any company, needs definite software and applications that requires to be established via a software organization. Thus, it is acquisition is costly. Also, the company may have not the fixed hardware for the usage of the software.
Energy consumption
Allows the users to operate in the blockchain as some difficult algorithms need to be achieved and their outcome is a large depletion of power.
Slow and cumbersome
The blockchain technology is not fast to perform a transaction related to the traditional fee system (e.g., cash or debit card), because the blockchain achieves more complexity and encryption operations. Public perception People must identify the variances amongst blockchain technologies before adopting the blockchain technology as it can be so useful to eliminate the bad idea about blockchain technology in its own.
Blockchain's attacks
Blockchain is subjected to many attacks. Because the decentralized nature of its operational environment, hackers have conducted numerous multipurpose attacks against blockchain technology by exploiting the vulnerabilities of (structure of the blockchain, peer-to-peer system and applications) [4], [14], as it is shown in Figure 7. In the following, we summarized some attacks against to the structure of the blockchain, peer to peer system, and the applications of blockchain [14]. a. Forks [22]: It exploits the blockchain structure by chain splitting and revenue loss and this in turn affects the blockchain. b. Stale and orphaned blocks [23]: It exploits the blockchain structure by revenue loss that has an effect on the blockchain, miners and mining pools. c. Selfish mining [24]: It exploits the blockchain's peer-to-peer system by revenue loss and malicious mining that have an effect on the blockchain, miners and mining pools. d. Majority attacks (51% attack) [25]: It exploits the blockchain's peer-to-peer system by chain dividing, malicious mining and revenue loss that impact the blockchain, miners and application. e. Domain name system (DNS) hijacks [14]: It exploits the blockchain's peer-to-peer system by revenue loss, partitioning and information theft that have an effect on the miners, exchanges, mining pools and users. f. Border gateway protocol (BGP) hijacks [26]: It exploits the blockchain's peer-to-peer system by revenue loss, partitioning and theft information that effect on the miners, mining pools and users. g. Eclipse attack [27]: It exploits the blockchain's peer-to-peer system by partitioning that effect on the miners and users. A set of hacker nodes separates its bordering nodes utilize internet protocol (IP) addresses, thus compromising their received and leaving traffic. h. Distributed denial-of-service (DDoS) attacks [28]: It exploits the blockchain's peer-to peer system by malicious mining and information theft that have an effect on the blockchain, miners and mining pools. In bitcoin network, 51% attack could lead to denial-of-service (DoS). i. Block withholding [29]: It exploits the blockchain's peer-to-peer system by revenue loss and malicious mining that impact the miners and mining pools. j. Finney attacks [14]: It exploits the blockchain's peer-to-peer system by revenue loss, which affects the miners, mining pools and users, by creating an identical to the preceding transaction and drives it to a receiver. After the receiver gets the transaction and brings the result, the miner publishes the preceding block with the basic transaction in it. k. Consensus delay [29]: It exploits the blockchain's peer-to-peer system by the delay and info loss that have an effect on the miners, mining pools and users. A hacker might insert false blocks to increase the latency or to prevent peers from achieving the consensus success around the state of the blockchain. l. Timejacking attacks [30]: It exploits the blockchain's peer-to-peer system by delaying the malicious mining, chain splitting and revenue loss that affect the miners, mining pools and application. A hacker can calculate a new block and put its timestamp forward of network's timestamp with the values of 50 minutes. m. Blockchain ingestion [14]: It exploits the blockchain's applications by info loss that affects the blockchain. The examination of the public blockchain could expose beneficial information to an opponent. n. Double-spending [31]: It exploits the BC's applications that impacts the blockchain and the users, by using a one-time transactions several times. o. Cryptojacking [32]: It exploits the blockchain's applications by chain splitting and malicious mining that effect on the application and users. p. Wallet theft [33]: It exploits the blockchain's applications by revenue loss and theft that impact the exchanges, application and users. Associated keys with peers in the network are saved in a digital wallet, the "wallet theft" attack get up with sure associations on the applications. q. Smart contract DoS [14]: It exploits the blockchain's applications by revenue loss, delay and theft that affect the blockchain, application and users. r. Reentrancy attacks [14]: It exploits the blockchain's applications by revenue loss and theft and this in turn affects the application and users. s. Replay attacks [14]: It exploits the blockchain's applications by revenue loss and info loss, which impact the blockchain, mining pools, application and users, including the creation of one transaction and send it to dissimilar two blockchains. t. Overflow attacks [14]: It exploits the blockchain's applications by theft that affects the application and users, when the type adaptable value exceeds (2^256). u. Short address attacks [14]: It exploits the blockchain's applications by revenue loss and theft that impact the application. Abuses a bug in Ethereum's virtual machine to create additional tokens on boundless consumptions. v. Balance attacks [14]: It exploits the blockchain's applications by revenue loss and theft, which impact the application and users.
Blockchain applications
Nowadays, a great number of corporations earn their profits means of employing BC, as it could be the most suitable solution, if the following conditions are met [3]: i) when the shared common database is needed, ii) when trust is not reciprocated by participants, iii) when the database is common for various writers or parties, iv) when the system or the network is subjected to hackers and malicious, v) when the same rules are applied to all the participants in the system, vi) when there is transparency in the result of the decision making for all the participants, and vii) when the transactions are no more than 10,000 transactions/second. There are many sectors used a BC, some of them are: − Business: Recently, organizations are in no need to a third party or a host to safeguard their assets [3]. In particular, financial and healthcare are accustomed to encounter security problems because of the malicious users. By using the BC this issue can be settled. − Supply chain: Almost all the organizations possess enterprise resource planning (ERP) and supply chain administration software to ensure that the operations move smoothly [34]. But the restricted details and the visibility related to products are two essential elements, which become vital with the increase of the product number. Therefore, BC is a good solution that can follow every item in the organization by means of the process of the supply chain and can also fulfill a powerful security means. To enhance the product safety, several records should also be updated, like the ambient conditions at each stage that minimizes the loss or harm inflicted on products when shipping. The BC is also used to achieve an update and replacement can be performed during the lifetime of any device or product. − Copy rights: Because of the insufficient transparency, the information of multimedia like photos and music. Encounter the copyright problem when trying to specify the valid owners to use them properly. Authorized owners are not capable of controlling their documents on the internet. A great number of hackers merely by copying the contents of the documents in an unauthorized way and distribute them via the internet. The BC mitigate the issue mentioned earlier by enhancing the information availability concerning the ownership of copyrights. This sort of information is provided as "trusted timestamping". Hence, any timestamp will be identified as an encoded information, which show the time and date of occurrence. So, a trusted timestamping "is a process that takes place to track the modification securely in addition to the creation time of any document". − Electricity management: In the developed states, management of electricity is considered as an essential concern as the user 's information of electricity is often leaked. Since the number of electricity users is tremendous, it is hard to manage the system entirely. So, the BC is utilized to resolve the above problem by adopting private BC and smart contracts [35]. − Distributed storage: Currently, cloud data storage service is one of the popular services that is used by numerous users. However, one of the significant disadvantages of cloud-based service is that it is centralized and the cloud service provider (CSP) controls all the processes [36]. Occasionally, the CSP uses the users' confidential data illegally to obtain revenue even without notifying the user, and therefore, users' data may be at hazard. The BC data storage service CSP minimizes the above problem through its decentralized characteristic [37]. Hence, the users store their unmodifiable data. − Digital identity: In the present time, every state is considering the digital identity. so, the digital identity is used in national security, banking industry, healthcare services, citizenship documentation and online retailing. Several states spend considerable amounts of cash in the digital identity field. Occasionally, a digital identity is misused or hacked by malicious users. Therefore, BC can resolve this problem by managing and tracking the digital identity in a secure and efficient way. In this case, the identity is authenticated in a secure and an immutable way. Instead of using system that is based on the password, in BC, identity will be verified by using the digital signature that depends on the cryptography (public key). − Autonomous organizations: BC is utilized to create decentralized companies through making several smart contracts. These contracts adhere to an interaction in a specific protocol.
Integration blockchain with IoT
Cryptocurrency and financial transactions have firstly used blockchain where all nodes in the blockchain execute and store all the transactions. Also, the blockchain provides many benefits because it could be adapted with many domains and one of the common domains is IoT [38]. There are many networks smart devices, which construct the IoT such as Raspberry, ESP and so on. IoT interconnects heterogeneous objects and smart devices seamlessly to create a network, which is used for sensing, processing and communication processes. IoT smart devices are managed and controlled automatically without the need to human interventions and they consume low energy and have a lightweight process. According to Statista Com [39], the number of IoT objects in 2020 is estimated to be 31 billion devices worldwide. By the end of 2025, this number is predicted to increase to be 75 billion devices [40], as shown in Figure 8.
In IoT, the smart devices have to spend the largest portion of their energy and execute the process to achieve vital application tasks, which makes achieving the privacy and security tasks fairly challenging. Malignant attacks can prevent IoT services in addition to threatening the users' privacy, data security and the confidentiality of the entire network [13]. There are four main categories of attacks in IoT-based system, which are: physical attack, network attack, software attack and data attack [41]. In The first category, which is called the physical attack, the attacker will be physically near to the network and attempt to conduct the malicious processes in the entire system through many forms such as manipulating the IoT device, blocking the RF signals, injection of malicious code and performing the side-channel attack. The researcher used the physical unclonable function (PUF) to provide the authentication of IoT devices [42], and thus physical attacks are prevented. The PUF has a characteristic that is impossible to copy the accurate microstructure of the IoT device. In the second category; (network attack), the attackers attempt to manipulate the network of the IoT using many ways such as RFID spoofing, man-in-the-middle, Traffic analysis and Sybil attacks. To prevent this kind of attacks, authentication technique and the secure hash function are used [43]. From the other hand, in the third category, which is called software attacks, the attacker utilizes the current software advantages in the IoT system. The last category is called the data attack and it could be achieved by the unauthorized access to the data and data inconsistency. To prevent these type of attacks, the blockchain could be used by providing privacy-preserving tech. efficiently [44]. The traditional security approaches tend to be costly for IoT in each of processing overhead and consuming of energy [45]. In addition to that, a lot of the state-of-threat security contexts are extremely centralized and they are not suitable for IoT because of the many-to-one nature of the traffic, difficulty of scale [46] and the single point of failure [47].
Integrating the blockchain with IoT achieves many advantages, which are: − The distributed and decentralize attributes of the blockchain technology do not need a central server that provides a scalable method to manipulate the increasing number of IoT devices. − The blockchain provides more security and privacy because it uses complex cryptography algorithms such as timestamp and hash functions to ensure a secure environment [46]. − The blockchain provides an immutable ledger and tamper-proof to protect the data from malicious attacks. Consequently, the trusted system will be produced (i.e. only the trusted participants of the IoT devices could accept or reject transactions depending on their consent). − The blockchain has an important property, which is called Anonymity [8]. − The blockchain supplies a 160-bit address space, which offers 4.3 billion addresses that enable it to assign addresses for multiple objects. − The monitoring and tracking of ownership, trustworthiness, authorized identity registration could be provided by the blockchain. The capability of applying blockchain in IoT relies on many assumptions [48]. Firstly, the IoT application requires a decentralized peer-to-peer ecosystem. Secondly, the IoT application needs to keep payment operation for the available services between the two parties only. Finally, If the logs and traceability of the ordered transactions are required by IoT applications. However, Implementing the blockchain in IoT will require addressing the following challenges [8], [13]: − The mining process of blocks takes a great deal of time, while most of IoT applications require little latency.
729
− The mining process exhausts energy because of its high computation ability, while the common devices of IoT are resource-constrained. − The basic Blockchain protocols generate a lot of overhead traffic; therefore, it may be unwanted for a certain bandwidth-restricted IoT devices. − The blockchain scales is unwanted when the IoT networks are predictable to cover a huge number of nodes. − IoT sensors produce a huge amount of data, therefore, processing the transactions in blockchain will be very slow or will have a high latency. − The anonymity of transaction history cannot be ensured on public blockchain. So, the hackers can determine the identities of users or devices by examining the transaction style. The researching in the field of integrating the blockchain with IoT have seen a major renaissance after the interest rose in cryptocurrencies and mining process. Also, different research publications have provided an advanced solution for constructing decentralized social networking systems, telecommunications, voting, smart homes, and smart city [15] which were all suggested. Recently, there were many studies that investigated the integration of blockchain technology with IoT to solve the privacy and security challenges in the IoT domain. For instance, Dorri et al. [8] presented a lightweight architecture for IoT, which depends on the blockchain to produce IoT system with high privacy and omits the overhead of blockchain. Uddin et al. [13] suggested an efficient lightweight integrated blockchain (ELIB) model, which was established to meet the constraint of IoT. Polyzos and Fotiou [49] demonstrated the importance of blockchain technologies in examining the requirements of the IoT security and how the security challenges can be solved by combining the IoT with the blockchain technologies. Thakore et al. [50] provided a complete survey on the fundamentals of both technologies and the blockchain-based IoT Architecture. Karthikeyan et al. [51] presented a summary of IoT security problems and suggested the blockchain technologies to resolve these problems. They also explain the probability of integrating the blockchain with the IoT. Ramesh et al. [15] discovered the way to keep the IoT data on a mixture of a Blockchain with Ethereum swarm and inter planetary file system (IPFS) in an encrypted style. Fotiou et al. [52] proposed a smart contract-based solution to solve the privacy and security problems in the IoT system. Uddin et al. [13] investigated the latest state-of-the-arts improvements in Blockchain for cloud and IoT, Blockchain and IoT and B.C. and fog of IoT in various applications are analyzed. Tandon [53] presented a review of blockchain technology and the way in which it provides the suitable solution to resolve privacy and security challenges of the IoT system. In addition to that, he discussed the pros and cons of integrating IoT with blockchain. Minoli and Occhiogrosso [54] provided the places of interest in some of IoT environments where Blockchain Technology play an important role. Khan and Salah [55] presented a review of security challenges for IoT and then they presented blockchain technology to resolve the security problems of the IoT system. Sengupta et al. [41] presented a review about the security attacks and the problems associated with each of the IoT and industrial IoT (IIoT) and that organizing it depends on the vulnerability. Also, authors showed the methodology of using blockchain technology for detecting these attacks. Banerjee et al. [56] proposed a new method represented by using the blockchain to provide IoT dataset for solving the sharing of the IoT dataset problems.
Moreover, the importance of integrating the blockchain with IoT it has been applied on various domains. Tables 4 and 5 shows the most common domains and the recent studies, which depend on the integration of blockchain with IoT applications. Most of these studies were concerned with providing security and privacy services for the IoT environment by using blockchain. As a result, blockchain is considered an effective and active tool to provide these services. We note in Tables 4 and 5 that the articles within this review suffers from some challenges such as (privacy and scale). Thus, we introduce in Table 6 some recommendations for those challenges.
In addition to that, the blockchain can adapt to other technologies such as the following: a. Software-defined networks (SDN): In this technology, the resources of this network are managed via a centralized controller, which acts as the networking operating system (NOS) [19]. Yet, scalability is a big constraint in the single networking environments that are enabled by SDN and thus the adaptability of BC with SDN can facilitate multi-domain SDNs interconnection and communication [57]. b. Decentralized email: Nowadays, the security service of an email is dependent on an ongoing process of planning in addition to management. One of the solutions to address the vulnerabilities of the email can be in the form of a blockchain-powered decentralized and distributed email system. Email addresses can be allocated to the clients over BC. Most vitally, the communication of email by BC is not influenced by the authorities of government that might invest the centralized email providers such as ISPs and technology giants like Google, Facebook, and Amazon [17]. c. Blockchain-based content distribution: Content distribution networks (CDNs) are regarded as effective approach that enhances the quality of the service of Internet through the content replication at various geographic locations represented by data-centers. Blockchain technology can be the solution with the necessary ingredients to significantly resolve the challenges related to content distribution. It can stabilize the rights management related issues for studios and artists by providing a better way of content control [17]. d. Distributed cloud storage: Users and organizations encounter and storage management of data problems resulting from the huge growth of data on non-volatile storage systems. the security, privacy and control of data are still important concerns [58]. The solutions of cloud storage that are based on BC inherit characteristics such as anonymity, decentralization and the trusted execution of transactions for the trusted members and can level the path for a cloud computing era characterized with verification and trust. e. Smart cities: There are inclusive major ingredients related to the smart cities, such as smart healthcare (BC is the well-known method that delivers a major level of democratization in the sector of healthcare and thus enhance their status), supply chain management (SCM), smart transportation (BC can improve information exchange, support the performance of vehicle and enhance the dependability of the network lifetime. Furthermore, BC invigorates the transportation industry by making less turnaround times, faster security detection, swifter data management and inspections), smart grid and financial system [17]. Homomorphic encryption and proxy re-encryption technique have been investigated by several studies of BC and IoT to resolve the issue of user's privacy on the BC network. In addition, Federated learning can be integrated with Blockchain technology to ensure the privacy-preserving computation on users' data [13]. Maintaining security in BC Federated learning allows a machine-learning algorithm to be trained by the participants of the Blockchain without exchanging their data where the Blockchain can guarantee the security of the trained algorithm in the form of a smart contract [13]. Resource and power constraints Energy effective consensus algorithms are introduced to save the transactions conducted recently only (e.g., mini-BC [84], PoS and delegated proof-of-space). Xu et al. [85] suggested the management of smart resource for cloud data-centers by utilizing the BC technology.
Scalability and availability
Sharma et al. [57] also submitted an architecture of cloud that depends on integrating the BC with software-defined networks (SDN) and fog-computing.
RESULTS AND DISCUSSION
In this paper, after examining 49 review literatures, it was found that 50% of those literatures are related to the security of internet of things, 25% related to the privacy of IoT, 25% of them to both security and privacy of IoT. The rest of the literatures were related to other fields such as health, agriculture and supply chain, as shown in Table 7. However, the objective of this paper is to present a general reference guide for researchers and practitioners in the fields which are mentioned.
CONCLUSION AND FUTURE WORKS
Blockchain Technology is a fresh tool for many applications in various organizations, which allows to secure transactions in a decentralized authority. In this paper, an overview on blockchain technology is presented. Generally, fundamentals of blockchain are discussed. Also, some types of attacks on blockchain are demonstrated and summarize. Based on the literatures, the integration of Blockchain with another technology such as IoT could provide a better result in some possible domain. This integration shows the features of Blockchain that makes it an attractive technology to solve some IoT challenges such as privacy and security issues.
In future works, concern should be focused on investigating new tendencies for security and privacy services by using blockchain technology in particular to design intrusion detection systems (IDS) which work in the IoT environment. The main goal includes the reduction of the numbers of the fields in the blocks and also developing a lightweight mining and consensus algorithms. | 8,859.2 | 2023-02-01T00:00:00.000 | [
"Computer Science"
] |
uShuffle: A useful tool for shuffling biological sequences while preserving the k-let counts
Background Randomly shuffled sequences are routinely used in sequence analysis to evaluate the statistical significance of a biological sequence. In many cases, biologists need sophisticated shuffling tools that preserve not only the counts of distinct letters but also higher-order statistics such as doublet counts, triplet counts, and, in general, k-let counts. Results We present a sequence analysis tool (named uShuffle) for generating uniform random permutations of biological sequences (such as DNAs, RNAs, and proteins) that preserve the exact k-let counts. The uShuffle tool implements the latest variant of the Euler algorithm and uses Wilson's algorithm in the crucial step of arborescence generation. It is carefully engineered and extremely efficient. The uShuffle tool achieves maximum flexibility by allowing arbitrary alphabet size and let size. It can be used as a command-line program, a web application, or a utility library. Source code in C, Java, and C#, and integration instructions for Perl and Python are provided. Conclusion The uShuffle tool surpasses existing implementation of the Euler algorithm in both performance and flexibility. It is a useful tool for the bioinformatics community.
Background
Randomly shuffled sequences are routinely used in sequence analysis to evaluate the statistical significance of a biological sequence. For example, a common method for assessing the thermodynamic stability of an RNA sequence is to compare its folding free energy with those of a large sample of random sequences. It is known that the stability of an RNA secondary structure depends crucially on the stackings of adjacent base pairs; therefore the frequencies of distinct doublets in the random sequences are important considerations in such analysis [4,25]. Besides, natural biological sequences often manifest certain nearest-neighbor patterns: both eukaryotic and prokaryotic nucleic acid sequences show a consistent hier-archy in the doublet frequencies; in coding regions, the codon usage can also be markedly nonuniform. In many cases, biologists need sophisticated shuffling tools that preserve not only the counts of distinct letters but also higher-order statistics such as doublet counts, triplet counts, and, in general, k-let counts.
Methods for random sequence generation
Several methods are commonly used to generate random sequences. The basic permutation method works as follows: for a sequence S [1, n], pick a random number i between 1 and n, swap the two elements S [i] and S [n], then recurse on the subsequence S [1, n -1]. The random sequence generated by the basic permutation method pre-serves the exact count of each distinct letter in the alphabet, but does not preserve the higher-order statistics of klet counts. The Markov method [12], which is based on the Markov chains, generates random sequences that preserve the k-let counts only on average: the counts of the individual sequences may deviate from the input distribution. The swapping method [15], a popular method which is now folklore, generates random sequences by repeatedly swapping disjoint subsequences flanked by the same (k -1)lets; it does preserve the k-let counts exactly, but produces random sequences that are only uniform asymptotically and may need a large number of swapping steps.
The Euler algorithm preserves exact k-let counts
The Euler algorithm is a less-known but very efficient algorithm for generating truly uniform random k-let-preserving sequences [2,12,15]. We briefly review its history. Fitch [12] first noticed that a doublet-preserving permutation is related to an Eulerian walk of a directed multigraph; however, the algorithm he proposed does not generate all permutations with equal probability. Altschul and Erickson [2] presented the first algorithm (also based on Eulerian walks in directed multigraphs) for generating truly uniform random sequences that preserve either the doublet counts or the triplet counts or both; however, a crucial step of their algorithm for generating random arborescences depends on a trial-and-error procedure, which is a potential bottleneck in performance. This bottleneck was eliminated by Kandel et al. [15], who replaced the trial-and-error procedure with a simple and efficient procedure based on random walks in directed multigraphs. They also generalized the Euler algorithm to preserve the k-let counts for arbitrary k, and suggested a simple data structure for implementation. This data structure is based on look-up tables and requires O(σ 2k-2 ) space and time; it quickly become inefficient as the alphabet size σ and the let size k increase. Since the work by Kandel et al. [15], a better algorithm has been proposed by Wilson [19,23] for generating random arborescences, which is the crucial step of the Euler algorithm that Kandel et al. [15] improves upon Altschul and Erickson [2]. The superiority of Wilson's arborescence generation algorithm to the two previous algorithms by Altschul and Erickson [2] and by Kandel et al. [15] is both proved in the theoretical sense by Wilson [19,23], and demonstrated in the practical sense by a comparison of our implementation with a previous implementation (to be discussed later).
Implementations of the Euler algorithm
We are aware of two previous implementations of earlier variants of the Euler algorithm. The dishuffle program by Clote et al. [6] implements the original version of the Euler algorithm by Altschul and Erickson [2]. The shufflet program by Coward [11] implements the improved version of the Euler algorithm by Kandel et al. [15]. In this paper, we present a sequence analysis tool (named uShuffle) for shuffling biological sequences while preserving the k-let counts. The uShuffle program is based on the latest variant of the Euler algorithm [2,15] and uses Wilson's algorithm [19,23] in the crucial step of arborescence generation. Our goal is to provide a versatile tool that is as efficient and as flexible as possible:
Arbitrary alphabet size and let size
In specific applications, the alphabet size σ and the let size k are often fixed: for biological sequences, typical alphabet sizes are 4 (for DNAs or RNAs) and 20 (for proteins), and typical let sizes are 2 (for dinucleotides) and 3 (for codons). While it is tempting to implement the Euler algorithm just for the fixed alphabet and let sizes at hand, we believe the flexibility of arbitrary alphabet and let sizes is useful. The dishuffle program by Clote et al. [6], for example, is hard-coded for shuffling RNA sequences preserving dinucleotide counts (with alphabet size σ = 4 and let size k = 2). It is apparent that such an implementation cannot be used easily in other applications with different alphabet and let sizes.
Efficiency
When the alphabet size and the let size are both small constants, the running time of the Euler algorithm (with any of the three variants of arborescence generation [2,15,23]) is linear in the sequence length. So it may appear that the efficiency of the shuffling program would not an issue since any conceivable downstream analysis of the randomized data would much slower than the shuffling. However, we note that the linear running time has been proved only for the case that the alphabet and let sizes are constant [15]. It is not at all clear whether the linear performance of the Euler algorithm is scalable for arbitrary alphabet and let sizes. As mentioned earlier, the "standard" data structure suggested by Kandel et al. [15] has time and space complexities O(σ 2k-2 ), which can become exponential when the alphabet size σ and the let size k become large, approaching the order of the sequence length. Indeed, as we will discuss later, we have reason to believe that this very data structure has been used in the shufflet program by Coward [11].
Furthermore, the implementation of the Euler algorithm (in particular, the crucial step of arborescence generation) is non-trivial because of its heavy use of graph-theoretical concepts such as directed multigraphs and Eulerian walks. Although Wilson's celebrated algorithm [19,23] dates back to 1996, and is well-known in the theoretical computer science community, Coward's implementation of shufflet in 1999 [11] still uses the old arborescence algorithm by Kandel et al. [15]. We are not aware of any implementation of Wilson's algorithm in bioinformatics applications. By careful choices of algorithms and data structures, and by scrupulous algorithmic engineering, we strive for the most efficient implementation.
Multiple forms and programming languages
The dishuffle program by Clote et al. [6] is written in Python; the shufflet program by Coward [11] is a web application in C. To reach the widest audience, we have made our uShuffle program available in several forms. It can be used as a command-line program, a web application, or a utility library. Source code in C, Java, and C#, and integration instructions for Perl and Python are provided.
Implementation
This section consists of four subsections. In the first two subsections, we discuss, at a conceptual level, the Euler algorithm and its crucial step of arborescence generation, in preparation of the discussion of implementation details. In the third subsection, we present the algorithmic engineering details of our implementation. In the fourth subsection, we describe the software organization and user interfaces of the uShuffle tool. To justify our algorithm choices and to explain our optimization techniques, the discussions in the first three subsections are necessarily technical. The readers who are not particularly interested in the theoretical discussion of graph algorithms or the technical details of algorithmic engineering can safely skip to the fourth subsection for the software organization and user interfaces.
The Euler Algorithm
In this subsection, we review some basic concepts of the Euler algorithm.
Directed multigraph
A k-let is a subsequence of k consecutive elements in a sequence. Let S be a sequence to be permuted. Let T k be a uniform random sequence that preserves the k-let counts of S. (For example, T 1 is a simple permutation of S, and T 2 is a permutation of S with the same dinucleotide counts.) To generate T k for k ≥ 2, the Euler algorithm [2,15] first constructs a directed multigraph G. We refer to Figure 1 for an example. For each distinct (k -1)-let in S, G has a vertex. For each k-let L in S, which contains two (k -1)-lets L 1 and L 2 such that L 1 precedes L 2 , G has a directed edge from the vertex for L 1 to the vertex for L 2 . Duplicates of k-lets may exist in S, so there may be multiple edges between the vertices.
Correspondence between permutations and Eulerian walks
As we scan the k-lets in S one by one, we also walk in the directed multigraph G from vertex to vertex. When all the k-lets are scanned, each edge in G is visited exactly once: the walk is Eulerian. On the other hand, given an Eulerian walk in G, we can recover a sequence by spelling out the (k -1)-lets of the vertices along the walk (and discarding the overlaps). Since each k-let in S corresponds to an edge in G, every Eulerian walk in G corresponds to a sequence with the same k-let counts as S. Kandel et al. [15] showed that, as long as an Eulerian walk starts and ends at the same two vertices s and t that correspond to the starting and the ending (k -1)-lets of S, the i-let counts for all 1 ≤ i ≤ k are preserved. Therefore, generating a uniform random sequence T k reduces to generating a uniform random Eulerian walk in G from s to t.
Correspondence between Eulerian walks and arborescences
For an Eulerian walk in G, each vertex v of G except the ending vertex t has a last edge e v that exits from v for the last time. The set of last edges for all vertices except t forms an arborescence rooted at t: a directed spanning tree in which all vertices can reach t. Given an arborescence A rooted at t, a random Eulerian walk from s to t with the last edges conforming to A can be easily generated [2,15]: 1. For each vertex v, collect the list of edges E v exiting from v. Permute each edge list E v separately while keeping e v the last edge on the list.
2. Walk the graph G in accordance with the edge lists {E v }: start from s (set u ← s), take the first unmarked edge (u, v) from the list E u , mark the edge, then move to the next vertex v (set u ← v); continue until all edges are marked and the walk ends at t.
In directed multigraphs, there is a nice correspondence between Eulerian walks and arborescences: every arborescence rooted at t corresponds to exactly the same number of Eulerian walks [3,15]. Therefore, generating a uniform random Eulerian walk in G from s to t reduces to generating a uniform random arborescence in G rooted at t. In the next subsection, we discuss algorithms for generating random arborescences, some of which are based on, quite amusingly, random walks again.
Generating Random Arborescences
In this subsection, we review the existing algorithms for arborescence generation, and explain our choice of Wilson's algorithm [19,23]. There are two major approaches Directed Multigraph for the Sequence AATAT Figure 1 Directed Multigraph for the Sequence AATAT.
A T
to generating random arborescences and spanning trees: determinant algorithms and random-walk algorithms.
Determinant algorithms
Determinant algorithms are based on the matrix tree theorem [3, Chapter II, Theorem 14]. For a graph G, the probability that a particular edge e appears in a uniform random spanning tree is the ratio of two numbers: the number of spanning trees that contain the edge e, and the total number of spanning trees. The matrix tree theorem allows one to compute the exact number of spanning trees of a graph by evaluating the determinant of the combinatorial Laplacian (or Kirchhoff matrix) of the graph. A random spanning tree can be generated by repeatedly contracting or deleting edges according to their probabilities.
The first determinant algorithms were given by Guénoche [14] and Kulkarni [16]: for a graph of n vertices and m edges, a random spanning tree can be generated in O(n 3 m) time. This running time was later improved to O(n 3 ) [7]. Colbourn, Myrvold, and Neufeld [8] simplified the O(n 3 ) time algorithm and showed that the running time can be further reduced to O(n 2.376 ), the best upper bound for multiplying two n × n matrices [9].
Random-walk algorithms
Random-walk algorithms use an entirely different approach to generating random spanning trees. Aldous [1] and Broder [5] (after discussing the matrix tree theorem with Diaconis) independently discovered an interesting connection between random spanning trees and random walks: Simulate a uniform random walk in a graph G starting at an arbitrary vertex s until all vertices are visited. For each vertex v ≠ s, collect the edge {u, v} that corresponds to the first entrance to v. The collection T of edges is a uniform random spanning tree of G.
For a graph G and a vertex v in it, define the cover time C v (G) as the expected number of steps a random walk starting from v takes to visit all vertices of G. The running time of the Aldous-Broder algorithm [1,5] is clearly linear in the cover time. In the context of shuffling biological sequences, Kandel et al. [15] extended Aldous-Broder algorithm [1,5] to generate uniform random arborescences of Eulerian directed graphs in the cover time. Wilson and Propp [24] then presented an algorithm for generating uniform random arborescences of general directed graphs in 18 cover times.
Wilson's algorithm
Wilson [19,23] showed that random arborescences and spanning trees can be generated more quickly than the cover time by a cycle-popping algorithm which simulates loop-erased random walks. For a graph G and two vertices u and v in it, define the hitting time h u,v (G) as the expected number of steps a random walk takes from u to v. The running time of Wilson's algorithm [19,23] is linear in the maximum or mean hitting times of the corresponding stochastic graphs. As Wilson [19,23] noted, the mean and maximum hitting times are always less than the cover time, and the differences can be quite significant in certain graphs. Therefore, for generating uniform random arborescences, Wilson's algorithm [19,23] is superior to Kandel et al.'s algorithm [15].
For completeness of presentation, we include in the following the pseudocode of Wilson's algorithm [19,23]: 12
return Next
Let E u be the set of directed edges exiting from the vertex u. The function RandomSuccessor(u) chooses a uniformly random edge (u, v) from E u , then returns the vertex v.
Unlike the Aldous-Broder algorithm [1,5], which simulates a single random walk from the root to visit all vertices, Wilson's algorithm [19,23] simulates multiple random walks: starting from each unvisited vertex, a random walk continues until it joins a growing arborescence which initially contains only the root. A random walk follows the Next [·] pointers; whenever a previously visited vertex is encountered again, a loop is formed and immediately erased because the Next [·] pointer is overwritten (in the first while loop). As soon as a walk reaches the growing arborescence, all vertices in the walk join the arborescence as one more branch.
A comparison of the two approaches
We now give a comparison of the two approaches to generating random arborescences. Kandel et al. [15] proved that the cover time of an Eulerian directed multigraph of n vertices and m edges is O(n 2 m). From our preceding discussion on the cover time and the hitting time, it follows that the expected running time of Wilson's algorithm [19,23] on the same multigraph is at most O(n 2 m) too, neglecting the log n factors.
For a multigraph, the number m of edges can be arbitrarily larger than the number n of vertices. So it might appear that the determinant algorithm by Colbourn et al. [8], which runs in deterministic O(n 3 ) time or even O(n 2.376 ) time, would be a better alternative than the random walk algorithms [15,19,23]. However, we note that when m is large the intermediate values of the determinant computation can be large too. On the typical computer systems today, the arithmetic operations on floating-point numbers do not have enough precision to guarantee the accuracy and stability in the numerical computation of the determinant algorithms. The random walk algorithms [15,19,23], on the other hand, require only basic operations on small integers, and do not have these numerical problems. Therefore, we have decided to implement Wilson's random-walk algorithm [19,23] for arborescence generation.
Implementation Details
In this subsection, we describe the details of our implementation of the Euler algorithm [2,15,19,23] for generating k-let-preserving random sequences. On the other hand, the typical length of a protein sequence is below 1000. Even though a sequence itself may be stored in only 1 kilo-bytes, the permutation algorithm still requires hundreds of times more space regardless. The situation becomes even worse when k is further increased: even for the rather innocent-looking parameters σ = 20 and k = 5, the space requirement σ 2k-2 = 20 8 > 16 8 = 2 32 exceeds all 4 giga-bytes of memory that can be accommodated by a 32-bit computer! We note that the two sets of parameters that Coward [11] used for experiments on his shufflet program were only σ = 4, k = 6, σ 2k-2 = 1, 048, 576 and σ = 20, k = 3, σ 2k-2 = 160, 000.
We will discuss more about this in our comparison of uShuffle and shufflet in the Results and Discussion section.
Representing directed multigraph in linear space
To make the uShuffle program scalable, it is clear that careful algorithmic engineering are necessary in the implementation. As we discussed in the previous subsection on the Euler algorithm, the directed multigraph G contains a vertex for each distinct (k -1)-let in S. Since the number of (k -1)-lets in S is exactly l -k + 2, G has at most l -k + 2 vertices, and hence exactly l -k + 1 directed edges between consecutive (k -1)-lets. This implies that the size of G is in fact linear in the length l of the sequence S to be permuted. With suitable data structures, uShuffle needs only linear space.
In the following, we first explain the construction and representation of the directed multigraph G, then explain the random sequence generation after the graph construction. The graph construction consists of two steps: determine the set of vertices, then add the directed edges.
Determining vertices
We use a hashtable to determine the set of vertices. The hashtable consists of a bucket array of size b = l -k + 2, the number of (k -1) lets in S, and a linked list at each bucket to avoid collision by chaining [10]. Each (k -1)-let x = x 1 x 2 ʜx k-1 has a polynomial hash code Initialize the hashtable to be empty, then try to insert the (k -1)-lets into the hashtable one by one. If a (k -1)-let is the first of its kind, it is assigned a new vertex number then inserted into the hashtable; its starting index to the sequence S is also recorded. If a (k -1)-let has been inserted before, it is not inserted to the hashtable: its vertex number and index to the sequence S are copied from those of the first (k -1)-let of its kind. After insertions, we can deduce the total number of vertices in the directed multigraph from the largest vertex number assigned. The memory for vertices are then allocated.
Adding directed edges
To add the directed edges, we use an adjacency-list representation to avoid the excessive memory requirement of an adjacency-matrix. In an adjacency-list representation, two edge lists need to be maintained at each vertex: a list of incoming edges and a list of outgoing edges. The outgoing edge lists are necessary for generating Eulerian walks [2]. The incoming edges lists are necessary for generating arborescences when Kandel et al.'s algorithm [15] is used (as in the implementation by Coward [11]). We use Wilson's algorithm [19,23] for generating arborescences. As we discussed in the previous section, Wilson's algorithm [19,23] is faster than Kandel et al.'s algorithm [15]. Furthermore, we note here that Wilson's algorithm [19,23] has another advantage over Kandel et al.'s algorithm [15] in terms of the ease of implementation. Instead of one backward random walk from the ending vertex t to reach all other vertices as in Kandel et al.'s algorithm [15], Wilson's algorithm [19,23] uses multiple forward random walks from each unvisited vertex to join the arborescence rooted at t: the outgoing edge lists alone are sufficient for generating both the Eulerian walks and the arborescences.
Representing edge lists and managing memory
For maximum efficiency, we implement each edge list as an array of vertices. The numbers of outgoing edges differ from vertex to vertex; if we allocate a fixed-size array for each vertex, then we would have to make each array large enough to hold all edges in the worst case, and the resulting space requirement would become quadratic in the length l of the sequence S. We could of course first count the number of outgoing edges for each vertex, then allocate a separate array just large enough for each vertex. However, this would require us to call the relatively expensive memory allocation function once for each vertex.
In our implementation, we allocate one large array for all edges (the total number of edges is l -k + 1), then parcel out pieces to individual vertices. To achieve this, we first scan the sequence S to count the number of outgoing edges for each vertex, then point the array (outgoing edge list) of each vertex to successive offsets of the large array.
With this optimization, the number of memory allocations is reduced to only 4: one for the hashtable bucket array, one for the array of (k -1)-lets as hashtable entries, one of the array of vertices, and one for the array of edges. The memory for the bucket array and the hashtable entries can be freed as soon as the directed multigraph is constructed.
Sequence generation after graph construction
After the construction of the directed multigraph, we can generate a random sequence in three steps. As discussed in the previous section, we need to first simulate the looperased random walks [19,23] to generate an arborescence, next permute the individual edge lists while maintaining the last edges, then simulate an Eulerian walk guided by the edge lists and output the sequence along the walk.
Since each edge list is implemented as an array, the permutation can be executed very efficiently. To output the random sequence along the walk is also easy, since each vertex keeps the starting index of its first occurrence in the input sequence.
Software Organization and User Interfaces of the uShuffle Tool
In this subsection, we describe the software organization and user interfaces of the uShuffle tool.
C Library and command-line tool
Our initial implementation of uShuffle is in the C programming language. The C version of uShuffle consists of two components: a uShuffle library (ushuffle.c and ushuffle.h) and a command-line tool (main.c).
In a typical scenario, multiple k-let-preserving random sequences are generated for each input sequence. The graph construction stage of the uShuffle program needs to be done only once for the multiple output sequences. To give the users an option for optimization, we export three interface functions in the uShuffle library: void shuffle(const char *s, char *t, int l, int k); void shuffle1(const char *s, int l, int k); void shuffle2(char *t); The function shuffle accepts four parameters: s is the sequence to be permuted, t is the output random sequence, l is the length of s, and k is the let size k. The ( ) 5 1 − function shuffle simply calls shuffle1 first and shuffle2 next: shuffle1 implements the construction of the directed multigraph; shuffle2 implements the loop-erased random walks in the directed multigraph and the generation of the random sequence. The statistical behavior of a random permutation depends heavily on the random number generator.
Coward [11] noted that the default implementations of random number generators on various platforms are often unsatisfying, so he implemented his own generator using an arguably better algorithm. We note that there are numerous algorithms for random number generation, and new algorithms are continuously being proposed: whether one algorithm is superior to the other can be quite subjective. Instead of limiting the users to a particular implementation, we set the default generator to the random function from the standard C library, then export an interface function to allow sophisticated users to customize the generator: typedef long (*randfunc_t)(); void set_randfunc(randfunc_t randfunc); The command-line uShuffle tool is a minimal front-end of the uShuffle library that illustrates a typical use of the library. It has the following four options: -s <string> specifies the input sequence, -n <number> specifies the number of random sequences to generate, -k <number> specifies the let size, -seed <number> specifies the seed for random number generator.
Java applet
The uShuffle program is ported to the Java programming language. Beside having a library and a command-line tool, the Java version of the uShuffle program can also run as an applet in a web browser. We refer to Figure 2 for a screenshot of the uShuffle Java applet 1 : The interface of the applet is minimal and consists of three parts: an input text area at the top, an output text area at the bottom, and a control panel in the middle. The control panel contains two text fields and a button. The maximum let size k and the number n of output sequences can be set in the two text fields. When the "Shuffle" button is clicked, the applet takes the input sequence from the input text field, strips away the white spaces, generates n random sequences that preserve the k-let counts, then outputs the sequences in the output text area. The output is in the Fasta format when n > 1: each output sequence is preceded by a comment line containing a sequence number ranging from 1 to n.
The uShuffle Java applet keeps all the output sequences in memory for display in the output text area. When the number n of output sequences and the input sequence length l are exorbitantly large, for example, n = 10, 000, 000 and l = 100, the total memory required to hold the output sequences may exceed the maximum heap size of the Java virtual machine (JVM) and the applet may hang. This is not a bug in our program but is due to the limit of JVM; nevertheless, we prepared a web page to instruct the users how to increase the maximum heap size of JVM.
C#/Perl/Python versions
The uShuffle program is also ported to the C# programming language. Perl and Python are popular programming languages for bioinformatics; they allow easy integration with programs written in C. Instead of porting the uShuffle program to Perl and Python at the source code level, we prepared two web pages to instruct the users how to extend the Perl and Python environments with the uShuffle library.
Results and Discussion
We have performed two sets of experiments to test the performance of two major forms of the uShuffle tool: we first benchmark the performance of the uShuffle C library, then compare the performance of the uShuffle Java applet with the shufflet program by Coward [11].
Performance of uShuffle C Library
We tested the uShuffle C library on a desktop PC 2 with test data consisting of both real biological sequences and artificially generated random sequences.
Experiment on real biological sequences
The real biological sequences were acquired from two sources: first, 152 protein sequences (with a total of 91262 amino acids) were sampled from the Human Protein Reference Database 3 , one sequence from each of the 152 molecular classes; second, 69 micro RNA precursor sequences (with a total of 4773 nucleotides) of Mus.musculus (house mouse) were extracted from the supplementary data 4 of Bonnet et al. [4].
Our experiments on these real biological sequences showed that the uShuffle library is extremely efficient: in just one second, it can generate either (i) 700 doublet-preserving random sequences for each of 152 protein sequences, or (ii) 12000 doublet-preserving random sequences for each of the 69 RNA sequences.
Experiments on artificially generated random sequences
In order to analyze the performance of uShuffle with various sets of parameters, we also performed a systematic test of uShuffle on artificially generated random sequences. For simplicity, the sequence lengths were exact powers of two from 2 12 to 2 24 , that is, from around 4, 000 to around 16, 000, 000. These numbers are somewhat arbitrary; nothing prevents a user from running uShuffle on very long sequences, even at the genome scale, as long as the computer has enough memory to store the input sequence and has some additional (very minimal, as discussed in the Implementation section) memory required by our implementation.
For each sequence length, 64 uniform random sequences over the English alphabet [a-z] were generated as test sequences; for each test sequence, 64 k-let-preserving random sequences were then generated by uShuffle. The total running time for uShuffle to generate the 64 × 64 = 4096 k-let-preserving random sequences was recorded for each sequence length. Two getrusage system calls were placed in the test program to sandwich the code region being benchmarked; the differences of the two timestamps were used to calculate the running times.
We refer to Figure 3 for a log-log plot of the total running times of the uShuffle program for k = 2 and k = 3 at various Screenshot of uShuffle Java Applet Figure 2 Screenshot of uShuffle Java Applet.
sequence lengths. The plot shows that the running time of the uShuffle program is essentially linear in the length of the sequence to be shuffled.
The absolute running times are not very effective in demonstrating the extreme efficiency of the uShuffle program. We refer to Figure 4 for a ratio plot that is more illustrative. For k = 2 and k = 3, and for each sequence length, the plot shows not the absolute running time of the uShuffle program but the ratio of two running times: 1. the running time for the uShuffle program to generate the k-let-preserving sequences, and 2. the running time for the simple permutation method [10] (reviewed in the Background section) to shuffle the same number of random sequences without preserving the k-let counts.
The ratio plot shows that the running time of the uShuffle program is on average only 1.5 times that of the simple permutation method for k = 2, and only 2 times for k = 3. The simple permutation method is minimal: for each position of the input sequence it executes only one random function call plus one swap. The uShuffle program, on the other hand, performs a lot more work; although the 64 k-let-preserving random sequences of each test sequence are generated by one shuffle1 and 64 shuffle2 function calls to avoid redundant multigraph construction, each shuffle2 function call still includes the generation of an arborescence by loop-erased random walks [19,23] and the generation of an Eulerian walk guided by the individual edge lists shuffled by simple permutations.
In light of the contrasting complexities of the uShuffle program and the simple permutation method, the small ratios of their running times are remarkable. A careful reader will notice an interesting fact from Figure 4, when the sequence length increases to 2 24 (about 16 millions), the running time of the uShuffle program for k = 2 is even less than the simple permutation method! The "strange" phenomenon had kept us puzzling for a long time until we eventually convinced ourselves that this is not a bug but a feature. We note that, in each step, the simple permutation method randomly swaps two elements scattered in a large array of 2 24 elements. On the other hand, the uShuffle program performs random walks in small multigraphs (at most 26 vertices for k = 2 and over the [a-z] alphabet) and permutes the individual edge lists (each with approximately 2 24 /26 elements) separately. The memory references of the uShuffle program are much more local than those of the simple permutation method. Computers with modern memory architectures aggressively optimize code with local memory references by sophisticated caching schemes, which promotes the performance of the uShuffle program.
We refer to Figure 5 for the running times of the uShuffle program at various values of the parameter k, where the test sequence length is fixed at 1024. The running time of the uShuffle program peaks at k = 4, which is about three times its running time for k = 2, then gradually decreases as k increases, and finally drops to zero at k = 1024 because, with a sequence length of 1024, the only 1024let-preserving random sequence is the input sequence itself. This plot shows that the uShuffle program is efficient for all possible values of k.
Comparison of uShuffle Java Applet with shufflet
There exist two other implementations of the Euler algorithm. The dishuffle program by Clote et al. [6] implements the original version of the Euler algorithm by Ratios of Running Times for k = 2 and k = 3 Figure 4 Ratios of Running Times for k = 2 and k = 3. Ratios of running times of uShuffle and simple permutation method at various sequence lengths for k = 2 and k = 3. Altschul and Erickson [2]. Hard-coded for shuffling RNA sequences preserving dinucleotide counts, dishuffle is not a general tool for arbitrary alphabet and let sizes. Another program, shufflet by Coward [11], implements the improved version of the Euler algorithm by Kandel et al. [15] for arbitrary let size k. As we have explained in the Implementation section, the arborescence generation algorithm by Kandel et al. [15], while superior to the algorithm by Altschul and Erickson [2], is still inferior to Wilson's algorithm [19,23]; besides, its look-up table data structure is inefficient for large alphabet and let sizes.
In terms of functionality, the shufflet implementation [11] is closer to our uShuffle implementation. Shufflet was written in the C programming language, and had been hosted as a web application (but has been taken offline). We were unable to perform a comprehensive comparison of uShuffle and shufflet. However, Coward [11] mentioned two experiments performed on a Digital DEC/ Alpha 2100 web server: 1. 100 shufflings of a DNA sequence of 10000 nucleotides with k = 6 take about 2.5 seconds; 2. 100 shufflings of a protein sequence of 1000 amino acids with k = 3 take less than 1 second.
We performed similar experiments with the uShuffle Java applet on an Apple iMac computer (2 GHz PowerPC G5 running MacOS 10.4.9, Firefox 2.0.0.3, and Java 1.5.0): 1. 1000 shufflings of a DNA sequence of 10000 nucleotides with k = 6 take about 1.5 seconds; 2. 4000 shufflings of a protein sequence of 1000 amino acids with k = 3 take less than 1 second.
Assuming comparable performances of the two computers, we estimate that our uShuffle Java applet is about 15-20 times faster than shufflet in the experiment on nucleotides (k = 4), and about 40 times faster than shufflet in the experiment on amino acids (k = 20).
We certainly understand the difficulty of such a comparison: a web server in 1999 versus a desktop computer in 2005; a C program in a native machine versus a Java applet in a virtual machine. Nevertheless the comparison illustrates the better scalability of our uShuffle Java applet for large let sizes. The difference between the two performance ratios, 15-20 versus 40, suggests that uShuffle remains efficient even for large let size, while shufflet becomes more inefficient as the let size increases, due to (very likely) the use of the inefficient look-up table data structure by Kandel et al. [15].
Conclusion
The uShuffle tool is based on superior graph algorithms and is carefully engineered to be extremely efficient. It achieves maximum flexibility by allowing arbitrary alphabet size and let size, and is available in many forms for different kinds of users. We believe uShuffle is a useful tool for the bioinformatics community. | 8,929.2 | 2008-04-11T00:00:00.000 | [
"Biology"
] |
Auramine O UV Photocatalytic Degradation on TiO 2 Nanoparticles in a Heterogeneous Aqueous Solution
: Amongst the environmental issues throughout the world, organic synthetic dyes continue to be one of the most important subjects in wastewater remediation. In this paper, the photocatalytic degradation of the dimethylmethane fluorescent dye, Auramine O (AO), was investigated in a heterogeneous aqueous solution with 100 nm anatase TiO 2 nanoparticles (NPs) under 365 nm light irradiation. The effect of irradiation time was systematically studied, and photolysis and adsorption of AO on TiO 2 NPs were also evaluated using the same experimental conditions. The kinetics of AO photocatalytic degradation were pseudo-first order, according to the Langmuir– Hinshelwood model, with a rate constant of 0.048 ± 0.002 min − 1 . A maximum photocatalytic efficiency, as high as 96.2 ± 0.9%, was achieved from a colloidal mixture of 20 mL (17.78 µ mol L − 3 ) AO solution in the presence of 5 mg of TiO 2 NPs. The efficiency of AO photocatalysis decreased nonlinearly with the initial concentration and catalyst dosage. Based on the effect of temperature, the activation energy of AO photocatalytic degradation was estimated to be 4.63 kJ mol − 1 . The effect of pH, additional scavengers, and H 2 O 2 on the photocatalytic degradation of AO was assessed. No photocatalytic degradation products of AO were observed using UV–visible and Fourier transform infrared spectroscopy, confirming that the final products are volatile small molecules.
Introduction
Water is indispensable for sustaining the environment, keeping entire ecosystems regulated. Water is also an important natural resource and a vital asset for daily human life, as it is used for drinking, hygiene, and cooking, as well as in agriculture and fisheries. Although accessible clean water is crucially important over all regions, about one-sixth of the global population have difficulties in accessing clean water [1]. Additionally, it has also been claimed that four billion people are now facing a severe scarcity of clean water due to extinction, depletion, and pollution in major rivers of the world [2]. The presence of sediments, soil, and aquatic organisms, which are naturally produced by erosion of rock and soil, and the breakdown and rotting of organic matters, is related to water quality [3]. However, the presence of organic pollutants in water systems is even more dangerous, as they contaminate entire ecosystems and endanger human health. With this in mind, water The efficiency of dye photocatalytic degradation depends on their relative redox potentials with respect to those of the catalyst, allowing electron and hole transfer to generate O − 2 • and OH • radicals [31]. In this sense, photocatalytic degradation of AO on TiO 2 NPs and its kinetics have been reported by Montazerozohori [33]. Photocatalysis of AO on semiconductor oxides has been intensively investigated [26,34], but several aspects of the photocatalytic degradation of the dye are still deficient. In particular, photochemical systems are complicated and it takes time to elucidate systems as the literature about them builds up. In general, a lot of works are required to generate a consensus as to what is actually going on.
Therefore, in this study, photocatalytic degradation of AO on anatase TiO 2 NPs under 365 nm light irradiation was investigated. The objective was to systematically evaluate the effect of irradiation time, the initial concentration of AO, and catalyst dosage on the photocatalytic degradation of the dye. The efficiency and rate constant of the photodegradation were estimated based on absorption spectra of AO before and after irradiation. The photocatalytic degradation data were analyzed with standard empirical models. The thermodynamics of the AO photodegradation process were assessed by monitoring the effect of temperature. The photocatalytic degradation mechanism was further assessed by observing the effect of pH and additional scavengers as well as H 2 O 2 .
This work should provide a baseline for future works, which may include using doped and sensitized TiO 2 in order to shift the absorbance further to the visible to improve catalytic efficiency [35][36][37][38][39]. It is important to highlight that there are several differences between this study and those in the literature, including the use of NPs, which should give better photocatalytic degradation rates. There are also several similarities and some agreements between this current study and the reported works, which give affirmative verification of many of the conclusions from the earlier work, which is a general duty of the traditional scientific approach.
Photolysis and Adsorption of AO
The photolysis of 35.06 µmol L −1 of AO in aqueous solution in the absence of TiO 2 NPs under 365 nm light irradiation is shown in Figure S1A. The absorption spectra of AO have two main peaks at 432 nm and 370 nm. The spectrum shows that the absorbance of AO solution gradually and slightly decreased over time when the AO solution was exposed to the UV irradiation, confirming that the dye was slowly decomposing. From these absorption spectra, the concentration of AO was extracted and plotted against the irradiation time, as shown in Figure S1B. It is clearly seen that the concentration of AO after 180 min of irradiation is only slightly lower than it was before irradiation, and hence, the efficiency of the noncatalytic photolysis was determined to be less than 8% even after such a prolonged irradiation time. This is conclusive evidence that the direct photolysis of AO by the 365 nm light irradiation is not efficient.
In comparison, adsorption of AO onto TiO 2 NPs was revealed by a gradual decrease in the absorbance of AO with contact time, as shown in Figure S2. Based on this decrease in absorbance up to 180 min of contact time, the adsorption efficiency of AO was found to be less than 3%, indicating that the adsorption of AO by TiO 2 NPs is also inefficient. Figure S3 shows a colloidal mixture of 20 mL AO solution with 2.5 mg TiO 2 NPs before and after UV light irradiation for 100 min. It can clearly be seen that the colloidal mixture was completely decolored, suggesting that 100% efficient photocatalytic degradation of AO had occurred within this time.
Photocatalytic Degradation of AO
The degradation of the dye in the heterogeneous aqueous solution was then monitored at different irradiation times from 0 to 150 min, and the reduction in the color with increasing irradiation time is shown in Figure S4. The absorption spectra of the heterogeneous aqueous solutions of AO and TiO 2 NPs, after being exposed to the UV light, were measured after centrifugation. These spectra are shown in Figure 1. The concentration of AO in the heterogeneous aqueous solutions was determined based on its absorbance at 432 nm, at which value the molar decadic extinction coefficient of the dye is 25,300 L mol -1 cm -1 . The photodegradation efficiency (η), also known as color removal rate, was then calculated as where C 0 and C t are the initial and remaining concentration of AO in the mixtures at irradiation time, t.
perimental data using a single exponential decay function. Here, the degradation rate constant is considered to be linearly related to concentration of the dye, according to the Langmuir-Hinshelwood (L-H) model [40,41]. The L-H equation is expressed as [40][41][42] = exp(− ) (2) where is the observed degradation rate constant, and is determined from the single exponential decay of as a function of irradiation time, . As shown in Figure 1B, the data fit well with the L-H kinetic model, suggesting that the heterogeneous photocatalytic degradation of AO is a pseudo-first-order reaction. This is unambiguously supported by the linear correlation between ⁄ as a function of irradiation time. From this best fit, the degradation rate constant, , of AO was estimated to be 0.048 ± 0.002 min -1 . In comparison, under the same experimental conditions, the value of AO is much slower than those of RHB (0.115 ± 0.005 min -1 ) and MB (0.173 ± 0.019 min -1 ) [28]. It was found that the dye was almost completely degraded within 150 min of irradiation, with the η value being 96.2 ± 0.9%. This is slightly higher than reported for methylene blue (MB) (93.1%) and rhodamine B (RhB) (96.1%) [28]. Considering that, alone, the nonphotocatalytic photolysis and dark-adsorption of AO onto TiO 2 NPs were inefficient, the enhanced degradation of the dye in Figure 1 can be assigned to a photocatalytic process on the catalyst surface.
The photocatalytic degradation kinetics of AO were evaluated by simulating the experimental data using a single exponential decay function. Here, the degradation rate constant is considered to be linearly related to concentration of the dye, according to the Langmuir-Hinshelwood (L-H) model [40,41]. The L-H equation is expressed as [40][41][42] where k obs is the observed degradation rate constant, and is determined from the single exponential decay of C t as a function of irradiation time, t. As shown in Figure 1B, the data fit well with the L-H kinetic model, suggesting that the heterogeneous photocatalytic degradation of AO is a pseudo-first-order reaction. This is unambiguously supported by the linear correlation between lnC t /C 0 as a function of irradiation time. From this best fit, the degradation rate constant, k obs , of AO was estimated to be 0.048 ± 0.002 min −1 . In comparison, under the same experimental conditions, the k obs value of AO is much slower than those of RHB (0.115 ± 0.005 min −1 ) and MB (0.173 ± 0.019 min −1 ) [28]. The photocatalytic degradation of AO must be proportional to the external mass transfer of the dye onto the catalyst surface. In this sense, the mass transfer behavior of AO was analyzed using the intraparticle diffusion model, given by [43] C 0 − C t = k i t 1/2 + C Here, k i is the diffusion rate and C is the boundary layer thickness on the catalyst surface. The simulation plot shown in Figure 2A demonstrates that mass transfer occurred in three diffusion steps. There was slow diffusion with a k i of 0.89 µmol L −1 min −1/2 which occurred within 1 min of irradiation, and is associated with early diffusion of AO onto the catalyst surface. This was followed by a fast and effective diffusion with a k i of 5.69 mmol L −1 min −1/2 . Finally, another slow diffusion step occurs, with a k i of 0.89 µmol L −1 min −1/2 at irradiation times longer than 40 min until the complete degradation of AO in the solution is achieved. It is noteworthy that extrapolation of the simulation plot at early irradiation times passed the origin, implying that boundary layer thickness can be assumed to be negligible. In other words, the diffusion rate of the dye in the solution was comparable to that on the catalyst surface. This also provides an interpretation that photodegradation byproducts of AO did not disturb the diffusion of the dye onto the catalyst surface.
The photocatalytic degradation of AO must be proportional to the external transfer of the dye onto the catalyst surface. In this sense, the mass transfer behavi AO was analyzed using the intraparticle diffusion model, given by [43]
− = / +
Here, is the diffusion rate and is the boundary layer thickness on the catalys face.
The simulation plot shown in Figure 2A demonstrates that mass transfer occurr three diffusion steps. There was slow diffusion with a of 0.89 μmol L -1 min -1/2 w occurred within 1 min of irradiation, and is associated with early diffusion of AO ont catalyst surface. This was followed by a fast and effective diffusion with a of 5.69 m L -1 min -1/2 . Finally, another slow diffusion step occurs, with a of 0.89 μmol L -1 m at irradiation times longer than 40 min until the complete degradation of AO in the tion is achieved. It is noteworthy that extrapolation of the simulation plot at early ir ation times passed the origin, implying that boundary layer thickness can be assum be negligible. In other words, the diffusion rate of the dye in the solution was compa to that on the catalyst surface. This also provides an interpretation that photodegrad byproducts of AO did not disturb the diffusion of the dye onto the catalyst surface.
Effect of Temperature
The photocatalytic degradation of AO depends on diffusion and immobilizati the dye onto the TiO2 NPs; hence, it should be affected by temperature. Figure S5 s absorption spectra of AO solutions before and after UV light irradiation for 30 min a ferent temperatures. The effect of temperature was then further analyzed based o of AO photocatalytic decomposition. These results demonstrated that creased with the temperature, suggesting that diffusion and immobilization of the d the catalyst surface were accelerated at higher temperature. Additionally, electron recombination is also believed to accelerate with increased temperature [27,[44][45][46].
The activation energy (Ea) of the photocatalytic degradation of the dye was then uated based on the effect of temperature (15-
Effect of Temperature
The photocatalytic degradation of AO depends on diffusion and immobilization of the dye onto the TiO 2 NPs; hence, it should be affected by temperature. Figure S5 shows absorption spectra of AO solutions before and after UV light irradiation for 30 min at different temperatures. The effect of temperature was then further analyzed based on the k obs of AO photocatalytic decomposition. These results demonstrated that k obs increased with the temperature, suggesting that diffusion and immobilization of the dye on the catalyst surface were accelerated at higher temperature. Additionally, electron-hole recombination is also believed to accelerate with increased temperature [27,[44][45][46].
The activation energy (E a ) of the photocatalytic degradation of the dye was then evaluated based on the effect of temperature (15-40 • C) on k obs by using the Arrhenius equation; where A is the pre-exponential factor, R is the gas constant, and T is the temperature. Based on the Arrhenius plot of lnk obs as a function of 1/T shown in Figure 2B, the E a of photocatalytic degradation of AO on the TiO 2 NPs was estimated to be 4.63 kJ mol −1 . For comparison, under the same experimental conditions, the E a value of the photocatalytic degradation of MB was 37.3 kJ mol −1 [27]. Thus, the potential barrier of the photocatalytic degradation of AO on the catalyst surface is much lower than that of MB. Therefore, it can be concluded that the oxidation reaction between AO and the generated O − 2 • and OH • radicals on the catalyst surface is much more energetically favorable than it is for MB.
Effect of Various Parameters on the Photocatalytic Degradation of AO
As this photocatalytic degradation is an oxidation reaction of the dye on the catalyst surface, at a certain temperature, the reaction should depend on various parameters, including the dye concentration and catalyst dosage. Figure S6 shows the spectra of AO with different initial AO concentrations (C 0 ) before and after photocatalytic degradation. Based on these spectra, the concentration of AO that was degraded during the photocatalysis, and the η value, increased and reached an optimum condition with respect to concentration (C 0 ). This was followed by a nonlinear decrease, as seen in Figure 3. This finding highlighted that the photocatalytic activity of the dye is related to the number of dye molecules in the heterogeneous colloidal mixture, and the low η value at high initial concentration is attributed to the well-known screening effect [47][48][49].
fore, it can be concluded that the oxidation reaction between AO and the generate and OH • radicals on the catalyst surface is much more energetically favorable tha for MB.
Effect of Various Parameters on the Photocatalytic Degradation of AO
As this photocatalytic degradation is an oxidation reaction of the dye on the ca surface, at a certain temperature, the reaction should depend on various paramete cluding the dye concentration and catalyst dosage. Figure S6 shows the spectra o with different initial AO concentrations ( ) before and after photocatalytic degrad Based on these spectra, the concentration of AO that was degraded during the ph talysis, and the value, increased and reached an optimum condition with resp concentration ( ). This was followed by a nonlinear decrease, as seen in Figure 3 finding highlighted that the photocatalytic activity of the dye is related to the num dye molecules in the heterogeneous colloidal mixture, and the low value at high concentration is attributed to the well-known screening effect [47][48][49].
As shown in Figure S7, the catalyst dosage also affects the photocatalytic degrad of AO. Based on the absorption spectra of AO before and after irradiation in the pre of different dosages of TiO2 NPs, the degradation efficiency was found to decrease linearly with the catalyst dosage. This phenomenon is assigned to the inefficient p catalytic degradation of the dye at high catalyst dosages. The photocatalytic degradation of AO was also followed in the presence of a amount (1-5%) of benzoquinone (BQ) and tert-butanol (t-BuOH) which scavenge O OH • radicals, respectively. It was found that the value of AO decreases abruptly the addition of BQ and t-BuOH, as shown in Figure 4A. This result confirms that th radation mechanism by UV/TiO2 NPs depends on the oxidation reaction of the dye both O • and OH • radicals, as has been described in several studies [50,51]. The form of O • , by reduction of solvated oxygen in the aqueous solution, is an important s prevent the recombination of the photogenerated electrons and holes [52]. High co trations of oxygen in the solution should reduce the recombination process and hen sist the formation of both O • and OH • radicals. To explore this possibility, the ef adding a small amount of H2O2 on the of the photocatalytic degradation of AO evaluated, as presented in Figure 4B. The dissociation of H2O2 enhances the concent of oxidants. This accelerates the generation of O • and OH • radicals [53], leadin As shown in Figure S7, the catalyst dosage also affects the photocatalytic degradation of AO. Based on the absorption spectra of AO before and after irradiation in the presence of different dosages of TiO 2 NPs, the degradation efficiency was found to decrease nonlinearly with the catalyst dosage. This phenomenon is assigned to the inefficient photocatalytic degradation of the dye at high catalyst dosages.
The photocatalytic degradation of AO was also followed in the presence of a small amount (1-5%) of benzoquinone (BQ) and tert-butanol (t-BuOH) which scavenge O − 2 • and OH • radicals, respectively. It was found that the η value of AO decreases abruptly with the addition of BQ and t-BuOH, as shown in Figure 4A. This result confirms that the degradation mechanism by UV/TiO 2 NPs depends on the oxidation reaction of the dye with both O − 2 • and OH • radicals, as has been described in several studies [50,51]. The formation of O − 2 • , by reduction of solvated oxygen in the aqueous solution, is an important step to prevent the recombination of the photogenerated electrons and holes [52]. High concentrations of oxygen in the solution should reduce the recombination process and hence assist the formation of both O − 2 • and OH • radicals. To explore this possibility, the The effect of the pH of the medium on the value of photocatalytic degradation of AO is shown in Figure S8. At pH lower than 9, the value increased with pH. The value reached a maximum value at pH 8-9, and then abruptly decreased at pHs above 10.
FTIR Analysis
Steady-state FTIR spectroscopy was used to search for large molecular fragmentation of the products from the photocatalytic degradation of AO. For this analysis, the AO solution after irradiation (see Figure S3) was collected and dried. The vibrational spectrum was then measured in the spectral range of 4000 to 450 cm -1 , as shown in Figure 5. For comparison, the spectrum of AO before irradiation is also presented. The main vibrational bands of AO before irradiation were observed at 3407, 3004, 1691, 1602, 1374, 1156, 941, and 821 cm -1 , which are assigned to NH stretching of dimethyl amine, C=N stretching, CH of aromatic rings, C=C stretching of aromatic rings, CH bending of aromatic rings, C-N stretching, C-C stretching, and CH out-of-plane bending vibrations of the dye, respectively. Similar spectral features of AO were reported by Mallakpour et al. [54]. The effect of the pH of the medium on the η value of photocatalytic degradation of AO is shown in Figure S8. At pH lower than 9, the η value increased with pH. The η value reached a maximum value at pH 8-9, and then abruptly decreased at pHs above 10.
FTIR Analysis
Steady-state FTIR spectroscopy was used to search for large molecular fragmentation of the products from the photocatalytic degradation of AO. For this analysis, the AO solution after irradiation (see Figure S3) was collected and dried. The vibrational spectrum was then measured in the spectral range of 4000 to 450 cm −1 , as shown in Figure 5. For comparison, the spectrum of AO before irradiation is also presented. The main vibrational bands of AO before irradiation were observed at 3407, 3004, 1691, 1602, 1374, 1156, 941, and 821 cm −1 , which are assigned to NH stretching of dimethyl amine, C=N stretching, CH of aromatic rings, C=C stretching of aromatic rings, CH bending of aromatic rings, C-N stretching, C-C stretching, and CH out-of-plane bending vibrations of the dye, respectively. Similar spectral features of AO were reported by Mallakpour et al. [54].
It is important to note that the FTIR spectrum of AO after irradiation is similar to that before irradiation. No new additional bands are clearly observed, except a broad band at 600-900 cm −1 which could be assigned to the symmetric stretching vibrations of O-Ti-O of anatase TiO 2 NPs [55] remaining after the photocatalysis. This confirms that the steady-state FTIR spectroscopy did not detect any photoproducts of AO; instead, it detected the remaining AO and TiO 2 NPs. This provides an interpretation that either the photocatalytic products have low infrared cross sections or they are volatile and evaporated during the photoirradiation or drying process, so that none of them were detected by the steady-state measurement. In support of this latter argument, UV-visible spectra of AO (and many other reported dyes) only show a reduction in absorption of AO, with no new peaks being identifiable as coming from any large fragment degradation products. bands of AO before irradiation were observed at 3407, 3004, 1691, and 821 cm -1 , which are assigned to NH stretching of dimethyl amin of aromatic rings, C=C stretching of aromatic rings, CH bending o stretching, C-C stretching, and CH out-of-plane bending vibratio tively. Similar spectral features of AO were reported by Mallakpou
Discussion
In this study, the TiO 2 NPs used as photocatalyst were pure anatase crystals, which was confirmed based on FTIR spectra (not shown) and XRD patterns (see Figure S9). The particle size of TiO 2 NPs is approximately 100 nm with a BET surface area and pore volume of 12.791 m 2 g −1 and 0.05733 cm 3 g −1 , respectively [27]. With the bandgap energy being 3.20 eV, the 365 nm light irradiation easily excites the TiO 2 NPs, generating electron-hole pairs [56]. This is an advantage in the degradation of AO, because AO absorbs mainly in the visible region. In this sense, the UV irradiation mostly excites the catalyst and, in any case, the photolysis of the dye is inefficient.
As has been discussed in many studies, separation and migration of the photogenerated charge carriers onto the catalyst surface is essential for the photocatalysts to generate the OH • and O − 2 • radicals. With this in mind, the anatase phase of TiO 2 has been theoretically and experimentally revealed to possess high charge-carrier mobility and low charge resistance [55,57], and hence it has high potential as photocatalyst. The photocatalytic degradation of organic dyes is not only governed by the formation rate of OH • and O − 2 • radicals on the catalyst surface. It should also be governed by the diffusion and immobilization of dyes onto the catalyst surface as well as by the potential energy barrier of the oxidation reaction of the dyes.
The diffusion of a dye in the colloidal solution depends on its hydrodynamic size (related to its molecular structure) as given by the Einstein-Stokes relation. Although there is no report of this for AO in the literature, the structure and size of MB and RhB are approximately comparable to AO. Therefore, the diffusion constant (D) value of AO in aqueous solution can be expected to be close to those of MB (6.74 × 10 −6 cm 2 s −2 ) [58] or RhB (4.50 × 10 −6 cm 2 s −2 ) [59]. The D value is positively related to the diffusion-limited rate constant (k D ) by the generalized Smoluchowski equation, as given by where σ is the encounter distance. As k obs is a proportionally related to k D , the k obs value of AO can be considered to be comparable to those of MB and RhB. In fact, it was at least two or three times lower in this case compared to those of MB and RhB. A plausible reason for the lower than expected k obs value measured for AO is its nonplanar structure, which is due to the few degrees of rotation of both N,N-dimethylaniline moieties with respect to their C−C bond. In order to examine the planarity of AO, ab initio structural optimization was performed using Gaussian basis sets in Chem3D. The structure was optimized for 500 iterations, and the minimum RMS gradient was 0.1. As shown in Figure 6, the MM2 force field suggests that the structure of AO in the gas phase is nonplanar with a torsion angle between the two aromatic ring systems being~40 • . A similar result was also observed using an MMF94 force field, but the torsion angle of the optimized structure was larger (~65 • ). Although the polar environment in aqueous solution might suppress the rotation of aromatic rings and alter the charge redistribution on AO, the calculation suggests a relatively nonpolar structure of AO. The torsional dynamics could cause intramolecular charge redistribution on AO, leading to strong to medium intermolecular friction, thus slowing the dynamics of the photocatalytic reaction.
The photocatalytic degradation behavior of AO should also be considered based on the driving force to immobilize the dye on the catalyst surface. Considering that AO is nonplanar and that the dimethylamino groups attached to the aromatic rings are not favorable for hydrogen bonding interactions, AO could only possibly approach the surface of TiO2 NPs through the methaniminium (=NH2 + ) group through hydrogen bonding or electrostatic interactions. This description is supported by the observed pH dependence. It is well known that the solution pH is effective in modifying the net charge on the surface of TiO2 NPs which is known to be amphoteric [26]. The net surface charge turns from positive to negative charge at pH 6.1 [60,61]. At a solution pH lower than 6.1, the positive charge on the catalyst surface is not effective to support immobilization of AO towards the catalyst surface. On the other hand, for pH higher than 6.1, there is electrostatic attraction of the surface to AO, enhancing the photocatalysis of AO [62].
A similar observation was reported for the photocatalytic degradation of MB and RhB in the presence of TiO2 NPs [27,28] or ZrO2 NPs [63]. The trend of photocatalytic degradation efficiency was, therefore, that it increased from pH 7 to pH 10, as the ionic state of AO was unchanged, until the solution pH reached a pKa value (pKa 9.8-10.7) [64]. The change of ionic state of AO above pKa is also inferred by the decrease of the photocatalytic degradation efficiency at pH higher than 10. To obtain an accurate description of the photocatalytic degradation of dyes, the reaction should be followed by liquid chromatography-mass spectroscopy or ultrafast spectroscopy [65,66], but such a detailed study of AO has not been reported. In fact, from this work, it is not clear that significant degradation products remain in the solution, as they may gasify. It is important to recall that AO has a methaniminium and two N-dimethylamino groups attached to its aromatic rings. Based on steady-state vibrational spectroscopy (FTIR Figure 5) and the photodegradation mechanism of related organic compounds, such as MB and RhB, and crystal violet [65], it is proposed that oxidation of AO should form Michler's ketone, which further undergoes N-demethylation by successive oxidation reactions to form various intermediates, followed by destruction of the conjugated structure into small compounds, such as CO2 and NH4 + , as shown in Figure 7. The fact that small volatile molecular products are the final form of the photodecomposition products is supported by FTIR and UV-visible spectroscopy, because no spectroscopic evidence for larger degradation products is seen. This is a null result and yet is has significance. The assignment of the final product to gaseous small molecules is also backed up by reference [26,67].
All of these oxidation steps, which are mediated by OH • and O • radicals, would occur on or close to the immobilized dyes, where direct interactions are possible between the organic molecules and the photochemically generated radicals on the catalyst surface. The photocatalytic degradation behavior of AO should also be considered based on the driving force to immobilize the dye on the catalyst surface. Considering that AO is nonplanar and that the dimethylamino groups attached to the aromatic rings are not favorable for hydrogen bonding interactions, AO could only possibly approach the surface of TiO 2 NPs through the methaniminium (=NH 2 + ) group through hydrogen bonding or electrostatic interactions. This description is supported by the observed pH dependence. It is well known that the solution pH is effective in modifying the net charge on the surface of TiO 2 NPs which is known to be amphoteric [26]. The net surface charge turns from positive to negative charge at pH 6.1 [60,61]. At a solution pH lower than 6.1, the positive charge on the catalyst surface is not effective to support immobilization of AO towards the catalyst surface. On the other hand, for pH higher than 6.1, there is electrostatic attraction of the surface to AO, enhancing the photocatalysis of AO [62].
A similar observation was reported for the photocatalytic degradation of MB and RhB in the presence of TiO 2 NPs [27,28] or ZrO 2 NPs [63]. The trend of photocatalytic degradation efficiency was, therefore, that it increased from pH 7 to pH 10, as the ionic state of AO was unchanged, until the solution pH reached a pKa value (pKa 9.8-10.7) [64]. The change of ionic state of AO above pKa is also inferred by the decrease of the photocatalytic degradation efficiency at pH higher than 10.
To obtain an accurate description of the photocatalytic degradation of dyes, the reaction should be followed by liquid chromatography-mass spectroscopy or ultrafast spectroscopy [65,66], but such a detailed study of AO has not been reported. In fact, from this work, it is not clear that significant degradation products remain in the solution, as they may gasify. It is important to recall that AO has a methaniminium and two N-dimethylamino groups attached to its aromatic rings. Based on steady-state vibrational spectroscopy (FTIR Figure 5) and the photodegradation mechanism of related organic compounds, such as MB and RhB, and crystal violet [65], it is proposed that oxidation of AO should form Michler's ketone, which further undergoes N-demethylation by successive oxidation reactions to form various intermediates, followed by destruction of the conjugated structure into small compounds, such as CO 2 and NH 4 + , as shown in Figure 7. The fact that small volatile molecular products are the final form of the photodecomposition products is supported by FTIR and UV-visible spectroscopy, because no spectroscopic evidence for larger degradation products is seen. This is a null result and yet is has significance. The assignment of the final product to gaseous small molecules is also backed up by reference [26,67].
Catalysts 2022, 12, x FOR PEER REVIEW 10 of A similar degradation mechanism was proposed in the electrochemical degradation AO by Hmani et al. [67], where the final product of the oxidation was CO2 gas.
Materials and Reagents
The chemicals used in the present experiment were analytical reagent grade of Ti NPs and AO chloride salt (C17H21N3.HCl; 303.83 g mol −1 ; CAS: 2465-27-2) which were p chased from Sigma-Aldrich Co. (St.Louis, MO, USA) and were used without any furth purification. A stock dye solution was prepared by dissolving 100 mg of powdered A chloride in distilled water to obtain a concentration of 100 mg L −1 . Experimental solutio of a desired concentration were obtained by suitable dilutions.
Characterization of TiO2 Catalyst
In this study, the commercial TiO2 NPs were characterized in a previous study Suhaimi et al. [28]. The crystalline phase of the TiO2 NPs was determined based on th X-ray diffraction (XRD) pattern which was measured using an XRD-7000 (Shimad Kyoto, Japan) with collimated Cu Kα radiation (λ = 0.15418 nm). As seen in Figure S9 the XRD pattern indicated a typical pure anatase phase with the main peak being observ at 2θ = 25°. This is in good agreement with the standard XRD pattern of anatase Ti (#JCPDS 84-1286). An SEM image scanned with an SEM-JSM-7600D (JEOL, Tokyo, Japa indicated that the TiO2 NPs have regular spherical shapes with little agglomeration. Th size was approximately 100 nm (see Figure S9B), which is similar to the report of Am and Ashrafi [68]. With this loose agglomeration, the TiO2 NPs in a colloidal solution c be considered to have a high surface area to interact with AO molecules, thereby impro ing its photocatalytic activity [66]. All of these oxidation steps, which are mediated by OH • and O − 2 • radicals, would occur on or close to the immobilized dyes, where direct interactions are possible between the organic molecules and the photochemically generated radicals on the catalyst surface. A similar degradation mechanism was proposed in the electrochemical degradation of AO by Hmani et al. [67], where the final product of the oxidation was CO 2 gas.
Materials and Reagents
The chemicals used in the present experiment were analytical reagent grade of TiO 2 NPs and AO chloride salt (C 17 H 21 N 3 .HCl; 303.83 g mol −1 ; CAS: 2465-27-2) which were purchased from Sigma-Aldrich Co. (St.Louis, MO, USA) and were used without any further purification. A stock dye solution was prepared by dissolving 100 mg of powdered AO chloride in distilled water to obtain a concentration of 100 mg L −1 . Experimental solutions of a desired concentration were obtained by suitable dilutions.
Characterization of TiO 2 Catalyst
In this study, the commercial TiO 2 NPs were characterized in a previous study by Suhaimi et al. [28]. The crystalline phase of the TiO 2 NPs was determined based on their X-ray diffraction (XRD) pattern which was measured using an XRD-7000 (Shimadzu, Kyoto, Japan) with collimated Cu Kα radiation (λ = 0.15418 nm). As seen in Figure S9A, the XRD pattern indicated a typical pure anatase phase with the main peak being observed at 2θ = 25 • . This is in good agreement with the standard XRD pattern of anatase TiO 2 (#JCPDS 84-1286). An SEM image scanned with an SEM-JSM-7600D (JEOL, Tokyo, Japan) indicated that the TiO 2 NPs have regular spherical shapes with little agglomeration. Their size was approximately 100 nm (see Figure S9B), which is similar to the report of Amini and Ashrafi [68]. With this loose agglomeration, the TiO 2 NPs in a colloidal solution can be considered to have a high surface area to interact with AO molecules, thereby improving its photocatalytic activity [66].
Photocatalysis Setup
The experimental setup for photocatalysis was reported previously by Suhaimi et al. [28]. Briefly, AO solution (20 mL) was mixed with a few milligrams of TiO 2 NPs in a Petri dish with a diameter of 7.5 cm and covered with a UV-transparent glass. They were gently stirred on a temperature-controlled stage. The colloidal mixtures were irradiated from above 10 cm distance using a UV fluorescent lamp (Vilber Lourmat, 6 W, 211 mm; Marne-la-Vallée cedex 3, France). The light power was reduced using an ND filter (6.25%), so that the light power on the solution was 0.28 mW/cm 2 . After selected irradiation times, the mixtures were centrifuged at 3000 rpm for 15 min. The filtrates were collected and analyzed by a UV-visible absorption measurement in a 1 cm cuvette cell. All absorption measurements were performed using a UV-1900 spectrophotometer (Shimadzu, Kyoto, Japan).
Prior to the photocatalysis experiments, photolysis (in the absence of TiO 2 NPs) and adsorption (in the dark) of AO on TiO 2 NPs were evaluated otherwise using the same experimental conditions. The direct photolysis of AO was monitored by irradiating the dye solution, in the absence of catalyst, with 365 nm UV light using the same irradiation geometry and power described above. After a desired irradiation time, the solution was analyzed using a UV-Vis absorption spectroscopic measurement. "Dark" adsorption of AO onto the surface of TiO 2 NPs was evaluated by keeping the colloidal mixture in the dark to equilibrate. After a selected contact time, the mixture was centrifuged using an Eppendorf 8504 Centrifuge (Hamburg, Germany) at 3000 rpm for 15 min. The filtrate was collected and analyzed using a UV-visible absorption measurement.
The effect of contact time was evaluated for the photocatalysis of AO in a colloidal mixture of 20 mL (9.5 ppm or equivalent to 35.06 µmol L −1 ) of the dye with 5 mg TiO 2 NPs. The effect of the initial AO concentration was studied by adjusting the concentration to be within 9.3 µmol L −1 and 110.8 µmol L −1 at a constant mass of TiO 2 NPs (5 mg). On the other hand, the effect of the catalyst dosage was evaluated by adjusting the mass of TiO 2 NPs (0.5-20 mg) in the mixture with a constant initial concentration of AO (35.06 µmol L −1 ). Finally, the effect of temperature was investigated from the photocatalytic degradation of AO in a mixture of 20 mL AO (35.06 µmol L −1 ) and TiO 2 NPs (5 mg) at different temperatures from 15 • C to 40 • C.
Fourier transform infrared (FTIR) spectroscopy was used to search for large-fragment photoproducts potentially formed during the photocatalytic reaction. Here, after the photoirradiation, the colloidal mixture was centrifuged, and the precipitated solid was collected and dried in an oven at 40 • C. The vibrational spectrum of the dried solid then was recorded on an FTIR (IRPrestige-21, Shimadzu, Kyoto, Japan) in a KBr disc.
Conclusions
In this study, the photocatalytic degradation of toxic cationic Auramine O (AO) in aqueous solution on TiO 2 nanoparticles (NPs) under 365 nm light irradiation was investigated. Prior to the photocatalysis experiments, photolysis-alone and dark-adsorption of AO on TiO 2 NPs were evaluated using the same experimental conditions. From this it was found that both photolysis-alone and dark-adsorption were inefficient at reducing the aqueous burden of AO. The effects of irradiation time, initial AO concentration, and catalyst dosage on the photocatalytic degradation were evaluated in detail. The results revealed that the photodegradation kinetics of AO can be described using the Langmuir-Hinshelwood model, emphasizing that the oxidation reaction is pseudo-first order. The photodegradation rate constant is 0.048 ± 0.002 min −1 , which is slower than that of MB (0.173 ± 0.019 min −1 ), due to the nonplanar structure of AO. At ambient pH, the photo-catalytic efficiency depends on the initial concentration of AO, and a maximum efficiency as high as 96.2 ± 0.9% was achieved from a colloidal mixture of 20 mL (17.78 µmol L −1 ) AO solution in the presence of 5 mg of TiO 2 NPs. The photocatalytic efficiency of the dye decreases nonlinearly with increasing the initial concentration and catalyst dosage. The activation energy of the photocatalytic degradation of AO on the TiO 2 NPs was estimated to be 4.63 kJ mol −1 . The photocatalytic degradation of AO was assessed by observing the effect of pH, additional scavengers and H 2 O 2 , from which it was confirmed that the degradation is due to an oxidation reaction of the immobilized dyes on the catalyst surface, where they have direct interactions with photochemically generated OH • and O − 2 • . The nature of the degradation products of photocatalytic removal of AO was evaluated using Fourier transform infrared (FTIR) spectroscopy. The steady-state FTIR spectroscopy did not show any detectable byproducts of photocatalytic degradation of AO. This implies that, although the catalytic reaction may involve many organic intermediates, through N-demethylation and successive oxidation reactions, the final photocatalytic products were volatile compounds, such as CO 2 and NH 4 + , which escaped from the solution. The overall results provide detailed description of photocatalytic degradation of toxic cationic AO, a dimethylmethane fluorescent dye, in an aqueous heterogeneous solution on TiO 2 NPs irradiated using 365 nm light. Finally, we confirmed that using TiO 2 in the form of NPs greatly enhanced the rate of AO removal from solution compared to the use of micro-sized powders.
Supplementary Materials: The following supporting information can be downloaded online at: https://www.mdpi.com/article/10.3390/catal12090975/s1, Figure S1: UV-Vis absorption spectra following photolytic decomposition of AO as a function of irradiation time; Figure S2: Absorption spectra of AO in aqueous colloidal solution in the presence of 5 mg TiO2 NPs in the dark at different contact times; Figure S3: Images of a colloidal mixture of 20 mL of AO solution before irradiation and after irradiation; Figure S4: AO solution before irradiation and after irradiation before and after centrifugation; Figure S5: Absorption spectra of AO in aqueous colloidal mixture with 5 mg TiO2 NPs after irradiation for 30 min at different temperatures; Figure S6: Absorption spectra of different concentrations AO in an aqueous colloidal mixture with 5 mg TiO2 NPs before and after irradiation; Figure S7: Absorption spectra of different concentrations AO in aqueous colloidal mixture with different masses of TiO 2 NPs, and a plot of the remaining AO concentration C t ) as a function of the catalyst dosage; Figure S8: The plot of η value of photocatalytic degradation of AO in aqueous colloidal mixture with 5 mg TiO 2 NPs at different pHs; Figure S9: XRD patterns of anatase TiO 2 NPs, with comparison to standard data (#JCPDS 84-1286) and SEM image of TiO 2 NPs at ×50,000 magnification. | 9,991.8 | 2022-08-30T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
ClusTRace, a bioinformatic pipeline for analyzing clusters in virus phylogenies
Background SARS-CoV-2 is the highly transmissible etiologic agent of coronavirus disease 2019 (COVID-19) and has become a global scientific and public health challenge since December 2019. Several new variants of SARS-CoV-2 have emerged globally raising concern about prevention and treatment of COVID-19. Early detection and in-depth analysis of the emerging variants allowing pre-emptive alert and mitigation efforts are thus of paramount importance. Results Here we present ClusTRace, a novel bioinformatic pipeline for a fast and scalable analysis of sequence clusters or clades in large viral phylogenies. ClusTRace offers several high-level functionalities including lineage assignment, outlier filtering, aligning, phylogenetic tree reconstruction, cluster extraction, variant calling, visualization and reporting. ClusTRace was developed as an aid for COVID-19 transmission chain tracing in Finland with the main emphasis on fast screening of phylogenies for markers of super-spreading events and other features of concern, such as high rates of cluster growth and/or accumulation of novel mutations. Conclusions ClusTRace provides an effective interface that can significantly cut down learning and operating costs related to complex bioinformatic analysis of large viral sequence sets and phylogenies. All code is freely available from https://bitbucket.org/plyusnin/clustrace/ Supplementary Information The online version contains supplementary material available at 10.1186/s12859-022-04709-8.
pandemic that has caused numerous deaths and human suffering, delivery and workforce shortages, travelling limitations, and many other disturbances to both business and normal life activities [4].
All virus genomes change over time due to mutations introduced in the viral genome, primarily by errors made by viral polymerases during replication [7]. However, most changes have minor effect on the phenotype of viruses. However, some mutations may affect the key pathogenic properties of the virus, such as transmissibility and disease severity, or the performance of vaccines, therapeutic agents or diagnostic tools [7].
The rapid progress in sequencing technologies has provided an opportunity to study viral molecular epidemiology and evolution in nearly real-time [8]. The current COVID-19 is the first pandemic with the pathogen being under surveillance using full genome sequencing on a global scale and over an extensive time period [9]. Surveillance of the pandemic creates demand for fast and scalable sequencing, genome assembling, viral strain assignment, phylogenetic analysis, variant calling and molecular epidemiology to inform contact tracing and non-pharmaceutical interventions. Although bioinformatics offers an abundance of methods and tools for sequence analysis, their employment in virology and epidemiology can be hindered by the developer-user gap between bioinformatics and other fields [10]. This gap can be bridged by pipelines tailored specifically for the analysis of viral sequences and equipped with intuitive interface and output reporting.
SARS-CoV-2 is the causative agent of coronavirus disease 2019 (COVID-19) [11]. The SARS-CoV-2 pandemic has already infected more than 437 million people in 224 countries, causing nearly 6 million deaths globally as of 1st of March 2022 (https:// www. world omete rs. info/ coron avirus/). SARS-CoV-2 is a global challenge, which is further complicated by the continuous emergence of new Variants of Concern (VOCs) or Variants of Interest (VOI). Variants that have carried VOC status include Alpha (B.1.1.7) [12], Beta (B1.351) [13], Gamma (P.1) [14], Delta (B.1.617.2) [15] and, as of writing this, we are experiencing the spread of Omicron variant (B.1.1.529) [16]. These VOCs pose an increased public health risk due to having one or more of the following characteristics: higher transmissibility [17], immune escape properties for antibodies from previous infection [18], lower response towards current vaccines compared to the original wild type strains these vaccines were based on [19]. Detecting and monitoring these novel variants is essential in SARS-CoV-2 surveillance.
A number of bioinformatic software packages are already available to help with detection, tracking and tracing of SARS-CoV-2 variation e.g. Pangolin [20], Nextstrain [21], Nextclade [22], Jovian [23], HaVoC [24] and Lazypipe [25]. Such tools are certainly helping the global effort for COVID-19 surveillance, but they are not void of limitations. Tools like Pangolin and Nextclade are primarily designed for tracking large accumulations of mutation events that are rare and may be preceded by the less visible sub-lineage genetic changes. Nextstrain offers a comprehensive analysis, but is heavily dependent on sequence metadata and dataset pre-filtering. Here we introduce ClusTRace (https:// www2. helsi nki. fi/ en/ proje cts/ clust race), a novel bioinformatic pipeline for Unix/Linux environments that complements the existing toolkits with unsupervised clade or cluster analysis, intuitive visualizations and reporting. ClusTRace can help with surveillance of the current ongoing COVID-19 pandemic and for any upcoming future epidemic or pandemic.
Implementation
ClusTRace is a bioinformatic software package implemented primarily in Perl. Clus-TRace supports several tasks that can be executed one by one or combined into pipelines (Fig. 1).
The analysis starts with consensus genomic sequences output by a given sequencing platform (e.g., Illumina). In the first step, ClusTRace assigns genomic sequences to a dynamic Pango lineage classification with Pangolin [20]. Then, ClusTRace collects sequences assigned to different lineages into separate multi-fasta files, so that each multi-fasta contains all sequences assigned to a given Pango lineage. Although we use Pangolin as the default lineage assigner, classification file can be produced with any method preferred by the user (the pipeline will accept any csv-file that conforms to Pangolin output format). All downstream analyses are performed separately for each lineage represented by a multi-fasta file.
Multi-fasta files are then pruned from outliers with SeqKit [26]. By default, we remove all sequences that deviate more than 10% from the median length of the sequence set or that have more than 10% gaps (these parameters can be modified on the command line with -minlen, -maxlen and -maxgap).
In the next step, filtered sequence sets for each lineage are aligned with MAFFT v7 [27]. Multiple sequence alignments (MSAs) are then trimmed for gaps with trimAl [28]. Trimmed alignments are used to construct phylogenetic trees with IQ-TREE 2 [29]. IQ-TREE 2 supports a wide range of substitution models and will, by default, use Mod-elFinder to determine the best-fitting model [29]. The user can choose to create bootstrapped consensus trees with IQ-TREE 2 Ultra-Fast Bootstrapping (ClusTRace-ufboot option) [30]. For very large sequence sets, the user can choose to run VeryFastTree [31] with GTR model (ClusTRace-tree vftree option). By default, ClusTRace will use COVID-19 reference genome (NCBI acc NC_045512.2) as an outgroup sequence to reroot all output phylogenetic trees. There is also an option to specify a separate outgroup sequence for each run.
In the next step, sequence clusters are extracted with TreeCluster [32]. Clusters are extracted with MaxClade-method at several pairwise distance cut-offs. We use two cut-off thresholds that are scaled to the size of the input reference genome (e.g. SARS-CoV-2) and roughly correspond to twenty and thirty mutations between pairs of sequences. MaxClade-method and cut-off thresholds (0.0007 and 0.001) were selected ad hoc based on our previous work with SARS-CoV-2 phylogenies [33]. These values can be easily modified by the user. Next, ClusTRace creates custom nexus trees in which sequences are assigned labels and colours according to the assigned cluster.
ClusTRace can read date annotations from sequence ids and will accept common date formats (e.g. "|YYYY-MM-DD|"). For date annotated sequences ClusTRace will trace the speed of growth for the extracted clusters. This is done by assigning sequences to time periods (calendar months or weeks) and by tracing the number of sequences that are assigned to each cluster and that are dated up to the given time period. For each lineage ClusTRace will print a separate cluster summary file with detailed information on the extracted clusters. These spreadsheet summaries include clustSeqN, clustSeqId and clustGR data sheets. The first and second data sheets report the number and ids of sequences in each cluster for each time period, while the third reports cluster size, median and maximal growth rates, and support value for the corresponding sub-phylogeny for each cluster. Separate clustGR data sheets are printed for each cluster cut-off threshold (by default twenty and thirty). Median and maximal growth rates are measured based on absolute increment in sequence number assigned to each cluster between consecutive time periods.
In the last step, ClusTRace extracts MSA(s) and runs variant calling for the extracted clusters. Nucleotide mutations are called from these against a reference genome with MsaToVcf [34]. Nucleotide variants are filtered to exclude 100 nucleotides (nt) from the start and the end of the genome (to avoid noise related to sequencing errors commonly seen in terminal regions), as well as any regions that have over 30 nt continuous stretches of below 75% coverage (these are also assumed to represent sequencing errors) using trimAl [28]. We also exclude variants with support below 50%. These filtering options are specified in the pipeline default options and can be modified. Amino acid (aa) variants are called with snpEff [35]. Finally, aa variants in all clusters are parsed and added to the cluster spreadsheet summaries as clustMutations and clustMutationTable data sheets. The clustMutations sheet reports nt and aa mutations for each cluster, reference aa mutations and non-reference aa mutations. Reporting reference and non-reference mutations requires supplying reference mutations in a separate file. For genes of interest non-reference mutations can be reported separately (current version reports mutations for the S-gene). The clustMutationTable sheet reports aa mutations for the fastest growing clusters in a binary matrix. The top row lists aa mutations in genomic order with non-reference mutations highlighted in bold.
ClusTRace also supports extracting nt and aa, reference aa and non-reference aa mutations for lineage MSA(s) or for any other collection of MSA(s). Lineage mutations are reported with spreadsheet summary tables similar to the cluster mutation summaries.
ClusTRace also offers an interface to g3viz R library [36]. Using this interface in R, the user can generate interactive mutation plots for both cluster and lineage vcf-files. These interactive plots can be saved in the form of simple html files to complement spreadsheet reports.
Results
To illustrate the intended use of ClusTRace we analyzed a dataset of SARS-CoV-2 full genome sequences from patient samples collected in Finland from January to June 2021. We started by running ClusTRace Pangolin mapping to obtain 5379 sequences assigned to Alpha and 1051 sequences assigned to Beta variants of concern (VOC) (GISAID accessions are available in Additional file 1: Table S1). We then run ClusTRace multifasta construction, outlier filtering, alignment, phylogeny with ultrafast bootstrapping (-ufboot option), default clustering and variant calling for these two lineages. As our outgroup sequences we used EPI_ISL_601443 for the Alpha variant and EPI_ISL_660190 for the Beta variant. All files output by ClusTRace for this analysis are available in Additional file 2.
To get a quick summary on the lineage mutations, we start with g3viz visualisation (Fig. 2, interactive version available in Additional file 2). For Alpha we see that most high frequency aa mutations follow mutations that have been reported as characteristic for this lineage [37] (Fig. 2A). These include T1001I, A1708D, I2230T, 3675_3677del and P4715L in ORF1ab, 69_70del, N501Y, A570D, D614G, P681H, T716I, S982A and D1118H in S, D3L and S235F in N. For Alpha, there are just five aa variants specific for Finnish data with frequency 10% or higher: K5784R and E6272G in ORF1ab, N119H in ORF3a and G96S and RG203KP in N. Plotting all mutations found in at least ten sequences in Alpha (5379 sequences) and Beta (1051 sequences). Mutations that have been reported as characteristic for a given lineage [37,38] are plotted in purple, all other mutations are plotted in green. Numbers in cirles indicate the number of sequences with the given mutation. Graphics were created with the ClusTRace interface to g3viz [36] For Beta, approximately half of mutations with frequency 10% or higher were not covered by mutations that have been reported as characteristic for this lineage [38] (Fig. 2B). Mutations matching characteristic mutations for Beta were: T265I, K1655N, K3353R and P4715L in ORF1ab, D80A, D614G and A701V in S, Q57H and S171L in ORF3a, P71L in E, T205I in N, while the non-characteristic aa mutations with at least 10% frequency were: T3058I, A3209V, A3235S, D4459A, T5912I and A6976V in ORF1ab, T19I and I896L in S, M24V, I26V and I27V in ORF7b, K44R and I121L in ORF8. Note that Beta has non-characteristic mutations in Spike protein, which may potentially affect their receptor binding: T19I in 789 (75%) and I896L in 175 (16.7%) sequences.
Cluster analysis with TreeCluster [32] yielded 108 clusters for Alpha and nineteen clusters for Beta (Figs. 3 and 4, full consensus trees with clusters highlighted are available in files B.1.1.7.con.tree.mr = 30.nex and B.1.351.con.tree.mr = 30.nex in Additional file 2). We used the MaxClade method with a cut-off set to 0.001. Here we take a closer look at the ten clusters for Alpha and Beta that had the highest per month growth rate peaks over the analysed time period.
We start by discussing Alpha clusters. The ten fastest growing clusters covered 3,146 (58.5%) of all Alpha sequences. Cluster size varied in these ten clusters between 100 (1.9%) and 479 (8.9%) sequences (Fig. 5). Maximal growth rates ranged between 74 and 310 sequences per month and peak growth was during February and March. Number of non-characteristic aa mutations introduced in these clusters ranged from one to six. Solitary non-characteristic mutations in S-gene were found in clusters 56 (D80Y), 38 (D287G) and 22 (A701V) ( Table 1).
Fig. 3
Consensus tree for Finnish Alpha dataset with clusters collapsed. Bar plots on the right indicate the number of sequences in each cluster. For clarity, clusters with less than ten sequences and singletons were removed. Inner nodes with no large cluster descendants are plotted in grey The ten fastest growing clusters covered 979 (94.5%) of Beta sequences. Cluster size was between fourteen (1.3%) and 259 (24.6%) sequences (Fig. 6). Maximal growth rates ranged between 11 and 148 sequences per month and maximal growth was during February (clusters 3 and 8), March (clusters 1, 4, 7, 10, 17, 18 and 19) and April (cluster 9). Number of non-characteristic aa mutations introduced in these clusters ranged from three to eight. Several clusters had non-characteristic mutations in S-gene: L18F (cluster 1), T19I (clusters 8-10, 17 and 19) and I896L (cluster 9) ( Table 2). 23:196 Here, aa mutations with frequencies exceeding 50% are listed in genomic order *The first row depicts mutations characteristic for B.1.1.7 according to the lineage report [37]
Benchmarking time and memory efficiency
We benchmarked ClusTRace performance on two datasets with default settings on a Red Hat Enterprise Linux Server 7.9 on a single node with 32 × 2.1 GHz cores. The first dataset included 6,430 SARS-CoV-2 genomic sequences from patient samples collected in Finland from January to June 2021 (GISAID accession ids are available in Additional file 1: Table S1). This run completed in 48 h and 6 min and consumed 83.26 GB of memory. The second dataset included 3,568 genomic sequences for Delta variant sequenced from Finnish patient samples later the same year (GISAID accession ids are available in Additional file 3: Table S2). This run completed in 14 h and 16 min and consumed 75.44 GB of memory. Most time was spent within IQ-Tree calls. We see that execution time does seem to scale nonlinearly with dataset size but is kept within acceptable limits for moderately large datasets. The required memory usage for these datasets was well below available limits.
Discussion
The years 2020 and 2021 could arguably be referred to as a turning point in the history of global health. The COVID-19 pandemic has demonstrated that emerging pathogens can cause havoc in our globalised world. On the other hand, the pandemic has also accelerated the development of better sequencing technologies, bioinformatic tools, diagnostic tests, vaccines and many other fields. The ongoing pandemic has emphasised the need for fast, scalable and, ideally pipelined, analysis of viral genomic sequences. For health authorities, it is important to be able to streamline processing large amounts of genomic sequence data into various summaries and reports that can help to make rational decisions concerning e.g. restrictions, non-pharmaceutical interventions and border control measures to minimize further spread of SARS-CoV-2. Researchers also struggle with the continuous inflow of SARS-CoV-2 23:196 Annotation as in Table 1 Table 2 (continued) sequences that need to be organized into lineages, alignments and phylogenetic trees in order to make sense of the constantly evolving pandemic.
Here, we have presented ClusTRace, a novel bioinformatic pipeline for fast and scalable analysis of large collections of SARS-CoV-2 sequences. ClusTRace supports many types of relevant analyses. These include assigning sequences to lineages, collecting sequences by lineage, filtering outliers, creating multiple sequence alignments, creating phylogenetic trees, extracting phylogeny-based sequence clusters, estimating cluster growth rates, calling nt and aa variants for both lineages and clusters, as well generating a number of table-based and interactive reports. Although most of these steps can be performed separately with designated bioinformatic tools, pipelining with a high-level interface helps to cut down on the learning and operating costs of complex bioinformatic analysis. Several authors have commented on the developer-user gap between bioinformatics and other fields in biology and biomedical research [10]. In this context, high-level pipelines that are tailored to the need of virus research are an important way to bridge this gap.
Popular pipelines for tracking viral outbreak phylodynamics include Augur, Auspice, Nextstrain, Nextclade and Pangolin [20][21][22]39]. Here, we reflect on key similarities and differences of ClusTRace to these toolkits. Pangolin and Nextclade are primarily concerned with classifying viral genomes into lineages or clades, while ClusTRace is designed to track mutations within lineages. Nextclade also offers mutation calling for large clades, which is similar to ClusTRace mutation calling for lineages. Nextstrain is an integration of several toolkits, including Augur for analysing sequence and phylogeographical data, and Auspice for visualising results. Like ClusTRace, Augur offers functionalities for filtering, aligning, phylogenetic reconstruction, re-rooting and refinement of the obtained phylogenies, and offers functionalities to estimate mutation frequencies.
Unlike ClusTRace, Augur also infers sequences and ancestral traits for the ancestral tree nodes. Auspice is designed to visualise phylogenetic and phylogeographic data output by Augur in an interactive webpage format. In ClusTRace, we provide different visualizations, namely spreadsheet summaries and interactive g3viz plots for high growthrate and/or mutation-rate clades. Unlike Nextstrain/Auspice visualizations, ClusTRace focuses directly on parts of the phylogeny that are picked out by the unsupervised cluster analysis and provides no details on the likely origin of the mutations in the tree. However, this approach has its advantages, such as simplicity and speed; unlike Nextstrain/Augur, ClusTRace has no need for down sampling the sequence sets. ClusTRace analysis is also largely unsupervised, i.e. clades are selected and examined for mutations and growth-peaks automatically, in effect filtering clades with alarming features that can then be checked manually more in detail.
In this work, we illustrated the intended scenario for ClusTRace usage on Finnish Alpha and Beta variants of concern. Presented approach can be described as an unsupervised phylogeny-based cluster analysis and variant calling. ClusTRace uses automated unsupervised clustering coupled with cluster growth rate analysis and variant calling to scan through the phylogeny. Clusters that display elevated growth rates, elevated non-reference mutation content or mutations in genomic regions that are of accentuated concern, such as the S-gene, can then be flagged for downstream analysis. In this paper we focus on describing the method and do not attempt to link identified cluster to epidemiologic seeding events. However, in our other work on monitoring SARS-CoV-2 spread in Finland we have appleid identical clustering with some success. For example, in [33] we monitored clusters for Alpha and Beta lineages and in that work clustering suggested that these lineages have spread to Finland via multiple seeding events. In our analysis of Finnish Omicron sequences we were able to identify a single large cluster that most likely corresponded to a super-spreading event (n = 236, which is 27.1% of all Finnish cases) as well as numerous smaller clusters that indicate multiple seeding points [40].
The current SARS-CoV-2 pandemic might endure to the foreseeable future, and new viral variants will likely continue to emerge. Therefore, the global response must continue to adapt and improve to mitigate the costs of the pandemic. The progress made since the start of the pandemic in early 2020 with the global implementation of full genome sequencing can be consolidated by developing efficient and scalable bioinformatic tools that are specifically tailored for genomic surveillance of viral pathogens. These tools must deliver fast, scalable and, ideally, unsupervised analysis and reporting on the pandemic events of concern. Our pipeline, ClusTRace, adds to the available toolbox the option for fast, scalable and unsupervised screening and reporting of the within or local lineage events of concern, such as elevated growth and mutation rates. Clus-TRace can also be adapted for the surveillance of viral pathogens other than the SARS-CoV-2, which may prove useful in future epidemic emergencies. | 4,816 | 2021-12-12T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Multivariate Ratio Estimator of the Population Total under Stratified Random Sampling
Olkin [1] proposed a ratio estimator considering p auxiliary variables under simple random sampling. As is expected, Simple Random Sampling comes with relatively low levels of precision especially with regard to the fact that its variance is greatest amongst all the sampling schemes. We extend this to stratified random sampling and we consider a case where the strata have varying weights. We have proposed a Multivariate Ratio Estimator for the population mean in the presence of two auxiliary variables under Stratified Random Sampling with L strata. Based on an empirical study with simulations in R statistical software, the proposed estimator was found to have a smaller bias as compared to Olkin’s estimator.
Introduction
Auxiliary variables have been used to increase precision of estimators especially in regression and ratio estimators [2].This is particularly so in cases of complex surveys, more so in situations where some information on the survey variable might be missing [3].
These classical methods of estimation are based on direct estimators, i.e., those which use the response variable, y and information provided by an auxiliary variable, x, highly correlated with the main variable [4].
Review of Multivariate Ratio Estimators
Olkin [1] proposed a multivariate generalization of the ratio estimator.Olkin proposed an estimator for the population total, denoted by ˆMR Y , and defined as which in other context can also be written as; is the component of the population total ratio estimate affiliated to the auxiliary variable are the weights which maximize the precision of .This estimate of population total also will be accurate if i is a straight line going through the origin.The population totals for the auxiliary variables X must be explicitly known.
The Proposed Estimator
Consider a population which has been divided into L strata, with the strata being disjoint, the sample elements from each stratum are sampled and when the measurement hi is done, measurement for the unit in the stratum, two auxiliary variables, say, where the individual components are defined as follows: This can further be represented in a single equation as follows; are the various strata.where
Variance of the Proposed Estimator
To compute the values of the weights, the general Equation (2.4) is used and this will cater for each stratum by just changing the value of h in respective strata.Subtracting h to the right hand side and left hand side of equation (2.4) yields But it is known that the sum of the weights in each stratum is 1, so .This implies that (2.6) Replacing Equation (2.6) to the right hand side of Equation (2.5), yields Collecting the like terms with respect to weights yields
ˆ2
(2.7) Squaring each side and taking Expectation on either side, assuming negligible bias, Equation (2.7) leads to Equation (2.8) can be written in notation as follows, We then proceed to find the values of the weights 1 h and that minimize the variance To achieve this, we form a function which has the variance and the linear constraint mentioned above.
with being the Lagrange's Multiplier.
2 22 h h h h h h h
To minimize this function with respect to the weights 1 h and 2 h W , we differentiate partially the function with respect to these weights each at a time. (2.11) (2.12) For optimization, we equate the partial derivative Equations (2.11) and (2.12), each to zero.These yields; The 2 is common and can be cancelled out.We proceed to collect like terms with respect to the weights and this yield Then it follows, by making W the subject of the formula, Opening the brackets in the denominator yields To get the value of weight , we use the linear constraint which may be written as, (2.17) Equations (2.16) and (2.17) give the weights that mini- , and y x x pulation total.The ten strata were again joined together to form one huge stratum, index-wise sample of size 1000, was selected and then using Olkin's model, the population total was estimated.The procedure above was repeated for 1000 samples and the population totals using each model was recorded.
These weights can now be substituted in the proposed model to get the population total.
Empirical Study
An empirical study was carried out to estimate the population total of a simulated population and compare the performance of the proposed model to that of Olkin [1].
Description of the Study Population
The population total estimates of the two methods were compared to that of the true population (simulated) total.The True population total is 28,235,645.Table 1 summarizes the statistics corresponding to each estimator.Figures 1 and 2 show the plotted values of the population total estimates of proposed model and Olkin's model, respectively, repeated for 1000 simulations each.
In this section we simulated a population (y i , x 1i and x 2i ), which has 10 strata in which each stratum differs from others.This difference was achieved by using different error terms i while generating the using The coefficients i and i are randomly generated from a uniform distribution while
, and x y x
In order to show the difference in variability between the two methods, the two plots above are now combined into one graph using a common scale in the Figure 3.
Computational Procedure 9. Conclusions
A sample of size 300 was selected randomly from the simulated population index-wise, that is if index i is selected then the sample elements will have 1 2 i i i .This was repeated for all the ten strata, the selected sample was used in the proposed model to estimate the po-From the summary table above, it can be seen that the proposed estimator gives a total with a very small bias as compared to the Olkin's.Also, the proposed model can be seen to have a small Root Mean Square Error (RMSE) as compared to Olkin's estimator.The combined graph also shows that the population total estimate is more variable in Olkin's as compared to the proposed model.
The limiting condition to allow the use of this estimator is the requirement of existence of linear relationship through the origin between the variable of interest, y, and the auxiliary variables.
i MR , subject to a linear constraint 1 2 p
i
are randomly gene-rated from normal distribution with different parameters.y a x bx
Figure 1 .
Figure 1.Plot of the population totals with proposed model for the 1000 samples.
Figure 2 .
Figure 2. Plot of the population totals without stratification for the 1000 samples.
Figure 3 .
Figure 3. Figures 1 and 2 plotted on a common scale. | 1,548.4 | 2012-06-27T00:00:00.000 | [
"Mathematics"
] |
Leveraging Machine Learning Techniques and Engineering of Multi-Nature Features for National Daily Regional Ambulance Demand Prediction
The accurate prediction of ambulance demand provides great value to emergency service providers and people living within a city. It supports the rational and dynamic allocation of ambulances and hospital staffing, and ensures patients have timely access to such resources. However, this task has been challenging due to complex multi-nature dependencies and nonlinear dynamics within ambulance demand, such as spatial characteristics involving the region of the city at which the demand is estimated, short and long-term historical demands, as well as the demographics of a region. Machine learning techniques are thus useful to quantify these characteristics of ambulance demand. However, there is generally a lack of studies that use machine learning tools for a comprehensive modeling of the important demand dependencies to predict ambulance demands. In this paper, an original and novel approach that leverages machine learning tools and extraction of features based on the multi-nature insights of ambulance demands is proposed. We experimentally evaluate the performance of next-day demand prediction across several state-of-the-art machine learning techniques and ambulance demand prediction methods, using real-world ambulatory and demographical datasets obtained from Singapore. We also provide an analysis of this ambulatory dataset and demonstrate the accuracy in modeling dependencies of different natures using various machine learning techniques.
Introduction
The accurate prediction of the daily ambulance demands across different regions of a city is of great importance to emergency service providers and its residents. Through the lens of emergency service operators, such information is valuable for a rational and dynamic deployment of ambulances of different types and increases operational effectiveness in fleet management. This in turn ensures that patients have shorter waiting times through location planning and increased ambulance availability when the need arises. This is an important goal for pre-hospital emergency medical services [1][2][3][4] and especially necessary for patients in critical conditions [5,6]. Furthermore, it helps in the efficient staffing for shifts in the hospitals, as well as early identification of any surges in ambulance demands. With the growth in focus towards data collection and analysis over the years, massive datasets of ambulance records are increasingly available for use by healthcare professionals. This encourages a stronger understanding of ambulance demands and the efficient planning of healthcare resources.
However, estimating ambulance demand through human efforts has nevertheless been challenging due to various multi-nature considerations [7]. First, ambulance demand is affected by spatial-related characteristics, such as the region at which demand is estimated. For example, a region is likely to experience a higher demand for ambulance than another region due to a larger elderly population. Moreover, a city is often demarcated by the local government into various development regions, with each having a particular purpose, e.g., financial district, or residential district. Demand is likely to be different depending on the region type. Second, ambulance demand is also affected by high-level temporal attributes, such as day of week and day of month, since the demand may often experience periodicity. Third, it is often correlated with short-term and long-term historical demands in that region. For example, if a region experiences a sudden outbreak of a disease and requires more ambulatory interventions or a sporadic mass sports event that lasts for a few days, the demand for ambulances in that region is likely to be similar over those consecutive few days. The cumulation of these multi-nature features renders ambulance demand a nonlinear dynamical system. While it may be straightforward to infer demand information from historical demands due to their temporal periodicities, it may not be as easy to do so for other features like the region identifier (ID) or day of the week. Given the inherent complexity and chaos within such systems, advanced machine learning methods will be useful for extracting insights that support the prediction of ambulance demand.
Specifically, machine learning methods have been gaining momentum over the years due to their capabilities of modelling complex patterns within data, encouraged by the advancement of computational hardware. They have demonstrated success in various areas of emergency medicine [8], such as predicting in-patient admission [9], postsurgical mortality, and intensive care unit admission [10], and in-hospital mortality of emergency department patients [11][12][13][14], all of which are complex non-linear dynamical systems. In the domain of ambulance-related research, machine learning has been considered for ambulance travel time estimation [15,16], location selection for ambulance stations [17], and demand prediction [17,18]. Despite the progress that has had been made, there is generally a lack of studies that consider such methods for the ambulance demand prediction problem.
Furthermore, existing work may consider machine learning methods for ambulance demand prediction, they generally do not incorporate in a sufficiently comprehensive manner the various types of dependencies affecting ambulance demand. They either consider prediction of the whole city [19] or predict demand at equally sized square grids [7,20], which is not reflective of the actual regionalization by the local government. In other instances, the focus is only on prediction of demands at some, but not all, regions of a city [18]. In this paper, we propose an original and novel approach that leverages a massive dataset of historical ambulance demand records to model the multi-nature dependencies of ambulance demand for predicting the next-day demand at all regions of a city-state. Our approach elicits useful insights that represent each of the different types of dependencies. Then, it utilizes a machine learning model to learn these dependencies for prediction. We evaluate the performance across several state-of-the-art machine learning techniques, using real-world ambulance demand datasets recorded by Singapore Civil Defence Force (SCDF).
Data Sources
In this study, we make use of a dataset obtained from the SCDF that includes all the ambulance calls in the city-state, Singapore, from 2006 to 2016. SCDF is the single national emergency ambulance provider which manages a fleet of around 60 ambulances in 2016 [21]. SCDF activates these ambulances by a centralized "995" dispatching system and does not charge for any emergency cases it conveys to hospitals [22]. Each ambulance call in the dataset corresponds to an incident, which has the following characteristics: time of incident, ambulance origin station, incident classification, incident subclass, patient incident subclass, patient's emergency status, patient's year of birth, ambulance destination hospital, patient location common name, patient location postal code, patient location street, incident location latitude, incident location longitude, and gender. To obtain the regions and map of Singapore, we make use of the 2010 Planning Area Census by the Singapore government, which consists of every Development Guide Plan (DGP) region (similar conceptually to census tracts). We also leverage datasets obtained from polyclinics in Singapore to extract useful demographical information such as the population count of people above a certain age at each region of the city-state. Finally, we consider additional socioeconomic information from The Census of Population conducted by the Singapore Department of Statistics in 2010, which is the most recent one available. Such census is conducted once every ten years and is based on a person's place of usual residence.
Model Overview
Using the above-mentioned datasets, we design an approach that involves a Feature Engineering stage and a prediction stage using a Machine Learning Predictor. Figure 1 shows an overview schematic of the approach. In the following sections, we elaborate each of these stages in details.
Feature Engineering
Data processing is first carried out on the SCDF dataset to generate the aggregated demand, i.e., number of calls, and several associated features of each region of each day from 2006 to 2016. We denote the engineered dataset as SCDF-Engineered. Each data sample in SCDF-Engineered consists of the aggregated ambulance demand at a particular region of a day within 2006 to 2016, which is the outcome of interest. It also includes several features associated with that region on the particular day. Specifically, these features fall under three classes: Attributes, Short-term Historical Demands, and Long-term Historical Aggregated Demands.
The description of the features in each of the three classes are as follows: • Attributes. These are categorical features that provide high-level information about the record. These features are multi-nature and can be further classified into (1) spatial, (2) temporal, and (3) demographic attributes. Specifically, the spatial attributes consist of the region ID, which is a number that uniquely identifies each DGP region. The rationale behind its inclusion is to differentiate the regions within Singapore, since different regions may have different demand characteristics. For example, a region with more elderly people may experience a higher demand than another region with mostly young people. The temporal attributes consist of the following features: day of week, day of month, and month of year. These are included to account for the periodicity of ambulance demands. Finally, since the demand at a region may be higher if it has more people who are older in age, we also consider a demographic attribute: the total no. of people in that region who are aged 50 and above on that particular year • Short-term Historical Demands. These features are demands at a region over each of the previous 7 days. These 7 continuous features are considered to account for the correlations between the demands of a particular day with that of the previous days. For example, a sudden spike in the dengue mosquitos' population at a region may result in the rise of dengue-related cases over a few consecutive days.
•
Long-term Historical Aggregated Demands. These features consist of the total demand at a region over the past 30 days, the total demand over the past 7 days, the total demand of the week up until the sample date, and the total demand of the month up until the sample date. These aggregated demands are included to account for the demand on the broader scale without the higher variances present in short-term demands. For example, a region may experience a high short-term historical demand solely due to a recent occurrence of a large-scale traffic accident but does not typically have high demands as it is not a populous area.
Apart from SCDF-Engineered, we also further build a dataset SCDF-Engineered-Socio. The rationale behind building this dataset is to explore whether ambulance demand has any correlations with the socio-economic characteristics of the people in a region. Similar to SCDF-Engineered, each data sample in SCDF-Engineered-Socio contains all the features in Attributes, Short-term Historical Demands, and Long-term Aggregated Demands. However, additional socioeconomic features of each region obtained from The Census of Population is also included in this dataset. Specifically, SCDF-Engineered-Socio also considers the following additional features: number of residents who travel by buses, number of residents who travel by cars, number of residents who travel by taxis, number of residents who travel by trains, number of residents who are in active employment, number of residents who are unemployed, number of residents who are tenants, and number of residents who are home owners. Since the socioeconomic information is only available for a subset of regions in Singapore, SCDF-Engineered-Socio contains only data samples from these regions.
As observed, the features considered so far are in their entirety, multi-nature. However, each feature represents a piece of information of only a single nature and does not consider the impacts of mixed features. For example, the feature day of week only reveals temporal information about a record, but it does not reveal its relationship with the region at which the record corresponds to. In order to study the impact of mixed features, we further engineer composite features based on the existing features generated. Specifically, we consider spatiotemporal features, and create the following composite features: unique ID that represents (region ID, day of week, day of month, month of year), unique ID that represents (region ID, day of week), unique ID representing (region ID, day of month), and unique ID that represents (region ID, month of year). For evaluation purpose of such composite features, we create a separate dataset SCDF-Engineered-Spatiotemporal (SCDF-Engineered-ST). This dataset includes the same features present in SCDF-Engineered, as well as the engineered spatiotemporal composite features.
Key Implementation Details of Feature Engineering
The key component of our feature engineering lies in the extracting of the short and long-term demands, since other features can be obtained either directly from the raw dataset in the case of features like the day of week, or mapped easily using a third-party Application Programming Interface (API) in the case of other features like region ID. To extract these demand features, we first use simple aggregations to transform the SCDF dataset into a dataset that records the total daily demand of each region and sort these demands in a chronological order.
Then, we use a sliding-window-based approach to obtain the relevant demand variations for each data sample. Figure 2 demonstrates an example of such a process. The red box represents a sliding window that essentially contains the historical ambulance demand values over each of the past 30 days. Within this window, the relevant short-term historical demand features are extracted. The green box represents a sliding window that considers the historical demand over each of the past 7 days. Within this window the long-term historical and aggregated demand features are extracted. As mentioned, the time frames chosen are 7 and 30 days to account for the weekly and monthly demand periodicities respectively. Once this is completed, the two sliding windows move on to the next time step to extract the similar features for the next day. This process is then conducted for all regions of the city to get all the data samples used in this study.
Primary Outcome
The outcome of interest is the next-day aggregated ambulance demand at a DGP region of Singapore. This demand may arise from incidents of different emergency statuses, i.e., Dead on Arrival, Emergency Critical, Emergency (Non-ambulatory), Emergency (Ambulatory), and Non-emergency. It is also agnostic to trauma and medical incidents and incidents where assistance were not required. Hence, it is a regression task from the machine learning point of view.
Machine Learning Methods Considered
Given the engineered dataset, we want to train a machine learning model that predicts the demand by leveraging the above-mentioned features. To this end, we make use of the hold-out evaluation technique. Specifically, the data samples in SCDF-Engineered from 2006-2015 are used for model training, while the samples from 2016 are used for model validating. The similar separation is done for SCDF-Engineered-Socio. Several methods are considered, and they are chosen because they are either typically effective for regression problems or had been previously considered in existing work. The methods are as follows: • Regional Moving Average. This method estimates the next-day demand at a region simply by taking the average of the daily demand values over the past 7 days at this region. • Linear Regression. This method is a popular regression method that finds the best-fit hyperplane across the multi-feature data samples [23]. It assumes a linear relationship between the dependent variable, i.e., demand, and the independent variables, i.e., features. To model this relationship, the mean square error function is first considered as a loss function. Then, the gradient descent algorithm is used to iteratively find the minimum of this function and also the resulting hyperplane. The coefficients of this hyperplane represent the degree of impact each feature has on the predicted value. To accurately represent the categorical features, i.e., Attributes, one-hot encoding is used during preprocessing. Min-max scaling is also applied on the continuous features. This method is applied using the Python Scikit-Learn library [24]. • Support Vector Regression (SVR). SVR is a support-vector machine that performs regression by finding a hyperplane, i.e., support vector, that fits as many points as possible within a space that is bounded by two boundary hyperplanes parallel to this support vector. Unlike Linear Regression, SVR typically finds the best-fitting hyperplane in the higher dimensions. To this end, it utilizes a kernel, which is a function that maps lower-dimensional data points to higher-dimension data points. The advantage of doing so is that it allows the method to capture certain non-linear relationships, which may not be possible with Linear Regression. SVR has been demonstrated to be one of the more effective machine learning approaches for predicting ambulance demand in [18]. Similar to Linear Regression, we apply this method using the Python Scikit-Learn library [24] and process the categorical features with one-hot encoding.
•
Multi-layer Perceptron (MLP). This method is an artificial neural architecture that has been explored and demonstrated in [19] to be an improvement over the traditional ambulance demand prediction method. The MLP is a standard neural architecture that is essentially made up of a sequence of linear layers. In this baseline, the size of the hidden layer is equal to that of the input layer, and 3 hidden layers are considered in total. Furthermore, the loss function used for the training of the model is the squared loss function. The learning rate used is 0.01, and the activation function used is the ReLU function. This method is also applied from the Python Scikit-Learn library [24]. • Radial Basis Function Network (RBFN). We also consider the Radial Basis Function (RBF) network, a variant of the artificial neural network (ANN), for comparison. Unlike a typical MLP network, a RBFN consists of three layers: an input layer, a linear output layer, and a hidden layer that uses the non-linear radial basis function as the activation function. It has been demonstrated to be more effective than traditional MLPs in certain problems [25].
•
Light Gradient Boosting Machine (LightGBM). LightGBM [26] is one of the most efficient and high-performing gradient-boosting decision tree methods. The key idea behind such gradient-boosting methods is that they consider the ensemble of various individual regression trees to fine-tune the accuracy of prediction. This is achieved by sequentially combining the trees such that each tree fits to the residual of the previous tree it is extended from. The input for this method is similar to that of the previous methods, with the exception that attributes are specified as categorical features in the program. Furthermore, the specific key settings considered in this work are as follows. (1) feature fraction, 0.8. The boosting approach considered is gradient boosting decision tree. This method can be applied by using the LightGBM library [26] in Python.
The error metrics used in the experiments are weighted absolute percentage error (WAPE), mean absolute error (MAE), and mean squared error (MSE) [27,28]. WAPE is used as an error metric instead of the mean absolute percentage error (MAPE). This is because the ground-truth demand at a region may sometimes be zero, which results in the zero-division error if MAPE is used. Specifically, the formulation for WAPE is as follows: where A denotes a ground-truth demand, and F denotes its corresponding predicted value. In our implementation, feature engineering is carried out using Python (version 2.7.16, Python Software Foundation, Delaware, USA). To map an incident to its corresponding region, the Shapely library is used [29]. The above-mentioned data-mining regression methods are built using Python Scikit-Learn library (version 0.20.0) [24], and LightGBM library (version 2.2.3) [26]. We also make use of QGIS (Open Source Geospatial Foundation, Beaverton, Oregon, USA) for spatial-related visualizations. Table 1 shows the key characteristics of the SCDF demand dataset. Specifically, it shows different compositions of the dataset, over the following category types: incident year, incident classification, incident subclass, patient incident subclass, patient's birth year, and patient's gender. As observed, there is a general increasing trend for ambulance demands from 2006 to 2016. The median age (based on the age of the patient by the year-end of the incident) is 55 and largely between 34 and 73. This reveals that more than half of the incidents occurred to middle-aged and elderly people and that most of the incidents happened to people who were at least young adults. On the biennial level, the patient ages generally increase from 2006 to 2016. In terms of incident classification, the majority of the incidents were trauma in nature. The analysis of patient incident subclass shows that the majority of calls were due to problems associated with the nervous system. However, there is also a large proportion of calls where the patient was uninjured or did not have any medical complaints. Other major sources of calls were problems associated with the bone/connective tissue, respiratory system, and cardiovascular system. This is in line with the idea that the increasing demand may be due to an increasingly aging population since problems at these parts of the body tend to be associated with the elderly.
Results
Preprocessing and feature engineering are conducted on the SCDF dataset to build SCDF-Engineered. Table 2 shows some of the characteristics of this engineered dataset. The overall mean daily regional demand is 6.33 and ranges largely from 0 to 10. The mean of the total past-7-days regional demand is 44 and largely ranges from 4 to 69. This is in contrast to the total past-30-days demand, where the mean, the first quartile, and the last quartile are 190, 20, and 294, respectively. Within SCDF-Engineered, each record consists of the Attributes, Short-term Historical Demands, and Long-term Historical Aggregated Demands features, as per the descriptions in Section 2. Since each record in SCDF-Engineered is specific to a day and a region; the ground-truth values associated with this record is simply the demand on that day and in that region. These ground-truth values are used as the target variables during model training (resp. validation), using records from 2006-2015 (resp. 2016) in SCDF-Engineered.
Characteristics Value
Daily Regional Demand 6.33 (0-10) Total Regional Demands over Past 7 Days 44. (4-69) Total Regional Demands over Data is presented in means and interquartile ranges. Figure 3 shows the variance of the daily demands of each region over the days of 2006-2016. As observed, the demand variances vary across the regions. This highlights differences in demand behaviors across different regions and the importance of considering region ID as a feature to account for such differences. Table 3 shows the accuracies of the five methods compared on SCDF-Engineered, with the best results highlighted in bold. As observed, the performance of both Linear Regression and LightGBM are the best and comparable with each other, with the former having a slight edge in terms of the MSE metric. The regional moving average is the worst performing method, while the performances of SVR, MLP, and RBFN are somewhere in the middle of all methods compared. Comparing MLP and RBFN, the former also demonstrates a stronger performance for the problem we are solving. Although Linear Regression is one of the best-performing methods, it may be subjected to overfitting, since according to analysis, the mean coefficient value is 3.9 × 1011, and the interquartile range is between −1.55 and 2.7 × 1011. As such, Linear Regression may not be a suitable model due to the instability introduced through the largely varying coefficients that arise from overfitting. This implies that it may not perform as well on other datasets. Since Gradient-boosting Decision Tree is also highly effective for structured data, e.g., table of features as in our case, such methods are preferred in our context. Due to its effectiveness, LightGBM is specifically chosen. Table 4 shows the gain-based importance of the features derived from the training process of LightGBM, as well as the mean absolute SHapely Additive exPlanations (SHAP) value of each feature. The SHAP value essentially assigns each feature an importance value for each prediction [30]. To obtain the overall importance for each feature, the mean absolute SHAP value is considered, where a larger value represents a greater feature importance. In terms of the relative importance of a feature among all considered features, both the LightGBM's gain-based importance and mean absolute SHAP value are observed to be in agreement with each other. The most important features are the total demand over the past 30 days and the total demand over the past 7 days in that region. This highlights the importance of considering long-term historical aggregated demands. What follows is the ID of the region at which the demand is predicted. This demonstrates the importance of differentiating a region from other regions, since they may have vastly different demand characteristics, as shown in Figure 2. The total number of people aged 50 and above in the region of the particular year is also considered important. This is in line with our intuition that people who are older in age are more likely to require emergency assistance than younger ones. The day of month, day of week, and month of year are also significant features, since there are periodicities within the ambulance demands. Finally, the demand at the region on each of the past 7 days contributes to the estimation by a fair extent.
To demonstrate the effects of regional socioeconomic data, we evaluate how the best-performing model of Table 3 performs when these regional socioeconomic features are included. Table 5 compares the accuracy of the prediction when LightGBM is applied on SCDF-Engineered-Socio, with the accuracy when these socioeconomic features are excluded from SCDF-Engineered-Socio. However, as observed, including additional socioeconomic features does not improve the prediction. A reason may be because these features are constant throughout all the years within the dataset. Any insights that these regional socioeconomic features provide may have already been represented by the region ID, which is one of the most important features according to Table 3. This is unlike the regional demographic feature present in the Attributes, where the total number of people aged 50 is different every year. To evaluate the impact of adding spatiotemporal composite features, we also apply LightGBM on SCDF-Engineered-ST. The resulting accuracies are WAPE = 24.7%, MAE = 2.11, and MSE = 10.4. As observed, these accuracies are worse than when no composite features are used. The reason may be because even though the features consider the spatiotemporal characteristics of a record, they may be noise to LightGBM, which algorithmically considers the mixed effects of different features in a finer-grained manner, by merits of the algorithm. This further highlights the benefits of using gradient-boosting machines for such problems.
Discussion
This study analyses a large city-scale ambulance demand dataset using machine learning algorithms to further develop a daily regional demand prediction tool. Our work is novel because it is the first reported study in Singapore to leverage machine learning in the development of tools that assist in the planning of emergency response resources. To this end, it considers various multi-nature dependencies of ambulance demands. This motivates future work in conducting machine learning-based analysis on datasets of similar types.
Our solution considers the engineering of various attributional features, short-term historical demands, and long-term historical aggregated demands. LightGBM is then applied on these features for the prediction of demand. Other methods either do not perform as well, or encounter problems like overfitting, as in the case of Linear Regression. As such, LightGBM remains the top choice in our solution. The reason why an ensemble model like LightGBM performs better than individual ones like linear regression may be that it combines various independent models via the gradient boosting approach. Specifically, each model within LightGBM is a regression tree, which in itself is more suitable than models like Linear Regression in capturing the non-linear dependencies of ambulance demand.
The key idea behind gradient boosting is that prediction can be refined by adding these trees one at a time while using a gradient descent procedure.
The proposed features contribute in varying degrees to the model training in LightGBM. The most important features are the ID of the region at which demand is predicted, long-term historical aggregated demand features, day of month, and number of people aged 50 and above. With the results obtained from this study, it provides emergency healthcare resource planners additional insights on how different features affect the demand at a region for effective ambulatory resource planning in the future. For example, understanding that the region of the city-state is one of the greatest determinants of demands allows the planners to dispatch ambulances in a finer-grained manner. Furthermore, understanding that the demand at a region also strongly depends on its long-term historical demand and number of people aged above 50 encourages planners to focus suitable amount of resources to regions based on the historical incidents that occur at the region. It also encourages paying more attention to the demographical changes in each region.
While the accuracy of around 25% is considered satisfactory, it may not be as high when compared to the prediction of other vehicles like taxis/on-demand vehicles [31,32]. We note that the reason for this may be that the regional demand for vehicles like taxis is typically much larger than that of ambulances to begin with, which in our case is only around 6 per day per region. As such, the prediction percentage error in our case is more likely to be larger, due to the relatively much smaller size of the target outcome used in the machine learning training and prediction. The periodicity of ambulance demand is not as strong as other types of vehicles like taxis. While the demands for taxis or private hires may be highly dependent on the days of the week, similar results cannot be inferred for ambulances. This motivates us to consider various external data sources in our future work, e.g., weather conditions, to model the other possible dependencies that may affect the ambulance demand. Furthermore, our solution is a preliminary take on this problem in Singapore, and it predicts the demands only under certain typical conditions. Although there are peaks and troughs in demands every now and then, these values are in no way near the extremes that happen during very large-scale incidents, e.g., epidemics, haze [33][34][35], and diurnal temperature changes. A potential area of improvement is to make use of historical data to model the demand at such extreme cases of large-scale incidents.
We have seen applications of artificial intelligence and machine learning techniques across different disciplines [36][37][38][39]. Our work here focuses on health services research, which has not gained much attention until now. Other than predicting the daily demand, future work involves further optimization to investigate finer-grained demands, e.g., hourly. However, as we look at increasing the granularity of analysis to identify "micro-trends" and pockets of demand, which may potentially be matched with better optimized placements or additional ambulance staffing, the operational limitations of block scheduling need to be considered. It may not be practical to call up a person to work for just 1-2 h instead of the typical 8 to 12 h shifts. The emergency medical services (EMS) systems may also have more rigid shift patterns, and this may limit the flexibility for optimization. Furthermore, given that the mean daily regional demand is low, considering a finer-grained timescale may result in the issue of data sparsity that inhibits accurate model trainings. These considerations are beyond the scope of the current study and will form part of our future investigation.
This study demonstrates the usage of a single-source vehicle, i.e., ambulance, dataset for building of a solution that models and predicts the ambulance demand in the regions of a city-state. For future work, we may additionally consider the insights obtainable from other vehicle datasets, e.g., taxi trajectories or public transportation smart card data. For example, a potential direction is to further consider the accessibility of each region to its respective nearest hospitals or clinics using certain metrics, e.g., average travel distance/duration of trips originating from a region and ending at a hospital. The idea is that if accessibility by other forms of transportation is higher, it gives people more alternatives for traveling to the hospitals instead of focusing solely on ambulance, especially for non-critical incidents. This may in turn affect the demand of ambulances in that particular region. Furthermore, leveraging geospatial datasets from other vehicles also allows us to understand the medical demand of people within a region. If a particular region sees on average a larger number of people traveling to the hospitals/clinics via public transportation or taxis than another region, an assumption can possibly be made that the former region tends to house more people who may require medical care than the latter. While this does not necessarily imply a higher ambulance demand, which focuses on more urgent cases, a potential exploration on the correlations between these two pieces of information can also be considered for future work.
Conclusions
In this study, we have utilized a 10-year city-wide emergency ambulance dataset to predict ambulance demand. The forecasting capability presented here is important because it enables informed resource and ambulance demand and is applicable across hospitals and general medical facilities. Several machine learning techniques are compared: Regional Moving Average, Linear Regression, Support Vector Regression, Multi-layer Perceptron, and LightGBM. Based on the preliminary work carried out here, LightGBM is found to perform the best. The most important features are the total demand over the past 30 days and the total demand over the past 7 days in that region. | 7,670.6 | 2020-06-01T00:00:00.000 | [
"Computer Science",
"Economics"
] |
Evaluation of Augmented Reality Frameworks for Android Development
—Augmented Reality (AR) is the evolution of the concept of Virtual Reality (VR). Its goal is to enhance a person's perception of the surrounding world. AR is a fast growing state of the art technology and a variety of implementation tools thereof exist today. Due to the heterogeneity of available technologies, the choice of the appropriate framework for a mobile application is difficult to make. These frameworks implement different tracking techniques and have to provide support to various constraints. This publication aims to point out that the choice of the appropriate framework depends on the context of the app to be developed. As expected, it is accurate that no framework is entirely the best, but rather that each exhibits strong and weak points. Our results demonstrate that given a set of constraints, one framework can outperform others. We anticipate our research to be the starting point for testing of other frameworks, given various constraints. The frameworks evaluated here are open-source or have been purchased under Academic License.
INTRODUCTION
Augmented Reality is a very promising emerging technology, growing in popularity on mobile devices. A number of research studies were published in late 2013, forecasting the future of the Augmented Reality market [1]. For instance, Juniper Research has estimated that the number of mobile AR users worldwide will steadily grow to 200 million by 2018 [2]. The AR technology has made great progress on mobile phones, and Juniper Research further predicted in 2012 that over 2.5 billion mobile AR apps will be downloaded annually to smartphones and tablets by 2017 [3].
These premises lead to the research question -"Which is currently the best open framework for developing an Augmented Reality mobile application?" Towards this goal, we evaluated open AR frameworks for Android mobile development, due to the fact that the iOS platform has been evaluated over the years and clear results can be found in Dominik Rockenschaub's Master's Thesis [4]. Another incentive for researching the Android specific open AR frameworks is the recent job market growth registered in Android development [5].
Various AR systems are available today. Based on the hardware and technical capabilities of the testing device, three main tracking techniques are defined: marker-based tracking, markerless tracking, and GPS tracking. Our research evaluates the AR frameworks using visual tracking methods, specifically markerless tracking. Marker-based tracking is also an optical method, but has become obsolete [6]. Special fiducial marks have to be created in order to be tracked. They have to be maintained over time, which is also a costly operation. Markerless based tracking on the other hand, can target any part of the surrounding environment. Any image or object can be used as a target without being tied to a specific marker. Markerless tracking is gaining more and more ground against the marker-based tracking, as it does not require generating of invasive markers. These methods extract information and characteristics about the environment, which may be useful again later.
This publication is organized as follows: the methodology is presented in section II; section III describes the setup of the testing environment and the implementation process; the results per criterion are given in section IV while the discussion analyzing these results can be found in section V; and finally we conclude the paper in section VI.
II. METHODOLOGY
Six AR frameworks have been researched and evaluated after a careful evaluation of 35 frameworks. The 6 chosen frameworks support Android development, as well as markerless tracking. An important criterion in choosing the frameworks to be evaluated is the availability of the library. Several are available for commercial use only (for the purpose to sell and profit) or are simply not available to the general public. Others are accessible only for a trial period. Some are depreciated, like in the case of Popcode, or even present language barriers as Koozyt, where information is provided only in Japanese. Table I provides the six frameworks, the development country and start year as well as availability. For a better visualization of the results, a test app has been developed. As a result, different frameworks can be tested in real time, and the results are saved in a local database. The results are then presented in different visualizations.
III. SETUP AND IMPLEMENTATION
For each criterion, we simulated a testing environment and each framework was evaluated under the same conditions. Several tests have been performed per framework for each criterion, thus the average testing time is used to compare the frameworks.
A. Criteria Categories
Two main criteria categories have been identified and actively tested: environmental criteria, and target criteria. Environmental criteria represent those constraints found in the immediate neighbourhood, which are conditioned by the environment and influence the recognition of the target image. Tested here are the influence of different light intensities or that of the frameworks' performance when dealing with dark backgrounds versus bright ones. Other considered criteria include differences in viewpoint, visible target area, the mirroring effect and various distances between the testing device and the target image, as well as the noise present in the target image due to its deterioration over time.
• The evaluation of the light intensity criterion has been executed in special lightning conditions.
The darkness criterion has been tested inside a room, in natural light at sunrise. The second test is also performed inside a room, by overexposing the target image to the direct light of a desk lamp. The third simulated event happens by closing and opening the window shades. This experiment is meant to replicate sudden changes in light intensity. The mirroring effect is reproduced by tracking the target images on a computer screen. • The evaluation of the viewpoint criterion has been performed in four different perspectives, from a 45° angle to the left, right, upwards and downwards of the target image as depicted in Fig. 1. • Visibility is tested by uncovering 10% of the visible area at every step. A white sheet of paper is used to cover the target image and uncover it in incremental steps as shown in Fig. 2. When the target is recognized and the view augmented, the step increasing stops. • The evaluation of the noise criterion has been performed by digitally altering the target images and adding different noise levels. The test sequence includes five noisy target images as given in Fig. 3. The noise is incrementally added starting from a 10% noisy target image in steps of 20% up to 90%. • The evaluation of the distance criterion has been performed by starting testing up close, at a 10 cm distance from the target image. The distance is increased at every step by 10 cm until the framework cannot recognize the target anymore. • The evaluation of the background criterion has been performed in two background test sequences, dark versus bright contrasts. This reproduction is meant to simulate the placement of the target image against highcontrast backgrounds. Target criteria, on the other hand, are those target image attributes that can be configured and directly influence the detection of the target. While an image can be a target image for one framework, it might not be supported by another. For example, some frameworks are robust to aspect ratio changes or various contrast ratios. Moreover, different target images sizes and a number of special printed materials are tested. We also evaluated whether or not a considerable difference in testing time exists between detecting the original target and its grayscale.
• The evaluation of the grayscale criterion has been performed on a printed default size target image in grayscale. An example can be seen in Fig. 4. • The evaluation of the contrast ratio is carried out on digitally modified target images as depicted in Fig. 5. Four target images are tested with a contrast value set to -50 and at each step it is incremented by 50. • The evaluation of the size criterion has been executed on four different target images. Each of the six tested target images has been PAPER EVALUATION OF AUGMENTED REALITY FRAMEWORKS FOR ANDROID DEVELOPMENT reduced to 5cm, 10cm, 15cm and 20cm as shown in Fig. 6. The distance between the testing device and the target image has been constant at a default of 30cm.
• The evaluation of the aspect ratio requires the target images to be digitally modified. Thus, each of the six original target images is shrunk either vertically or horizontally by reducing the height, respectively the width to one third (1/3), a half (1/2) and two thirds (2/3) as given in Fig. 7. • The evaluation of the material criterion has been performed in three different stages. The first case depicts the target image behind a glass window, as it would be if a poster were placed inside the side panels of a bus shelter. The second case simulates a restaurant menu scenario where the menu is laminated to protect the paper against deterioration and mishaps like spilled drinks. The third target image is printed on a glossy photo printing paper as it might be used in a scenario for displaying promotional ads or as packaging of different promotional products. In addition to the two mentioned categories, performance and usability criteria are defined. Each framework description makes a point to state that some basic graphical problem is resolved and the tracker is optimized. These basic issues are collected under performance criteria and it is determined whether or not they were actually solved. Here we mention the constant flicker, a visible motion blur, and the ability to deal with fast moves such that the virtual content is not lost. Registration (the accurate alignment between the real world and the virtual object) is an important factor to overcome, as well as the capability to occlude the virtual object when necessary to create the feeling that the virtual object belongs in the scene, and is part of the scene.
Furthermore, special features are implemented within each SDK for presenting a more comprehensive product. Not all features are supported by every framework. Therefore they are categorized by usability and it is determined which are endorsed. The considered features include face tracking, text detection, flash and usage of the front camera. Moreover, the ability to display the virtual content even when the target image is not in the line of sight anymore, known as extended tracking, is a useful feature in some use cases. Likewise, the possibility to track more than one target image simultaneously comes in handy.
B. Android App
We developed an Android app which integrates the 6 frameworks (ARLab, ARToolKit, D'Fusion, Vuforia, catchoom and metaio) and actively tests the environmental and target criteria. By actively testing, it is meant that a framework is chosen, its camera view opens and a criterion is selected for testing. When the testing environment is set, the user starts the test timer by pressing the Start button. When the virtual content is superimposed into the real world, the testing time is saved as the time needed to detect and recognize the target image.
PAPER EVALUATION OF AUGMENTED REALITY FRAMEWORKS FOR ANDROID DEVELOPMENT
We determined if the performance and usability criteria are supported by observing the behaviour of the frameworks in a neutral context, on the default target images (no special light conditions). Some feature support, such as face tracking and text detection are set by finding the information in the framework documentation.
Criteria and criteria categories can be added to the list of predefined criteria. Use cases can be defined by adding weighted criteria into a context.
The frameworks can be compared against each other given a constraint or a set of constraints. Furthermore, an overview for each framework is available, displaying the average testing times per framework for each criteria category.
IV. RESULTS
In order to preserve the same testing conditions, such as sunrise light, a criterion is tested by all frameworks before moving on to the next criterion. The testing environment changes between criteria and by testing one criterion at a time for every framework, the environment is minimally altered during the testing.
A. Environmental Criteria
Light intensities are supported overall as observed in Fig. 8. On the left side are the average times for the desk lamp and screen glare light conditions. The graph on the right illustrates the testing times from the semidarkness and sudden change tests. Metaio has the lowest average testing time, followed by Vuforia with an average testing time under a second. The worst timing is registered by ARToolKit while ARLab and catchoom are steady under a second and a half.
Most of the frameworks detect 45° angles. The viewpoint testing times are shown in Fig. 9, the left graph provides the average testing times for the left and right perspectives, while on the right side are given the up and down viewpoints results. ARToolKit provides good testing times, nevertheless it should be considered that it recognizes only two out of the four perspectives. ARLab has difficulties detecting the target image from a 45° upward angle. The other viewpoints are easier to detect. Three out of the six frameworks detect the target image from a 45° right angle under half a second.
Detection at 10% and 20% visibility requires a longer processing time. Vuforia detects the target at 10% visibility somewhere between one and two seconds, being the only framework that scores this performance. For more than 20% visibility Fig. 10 illustrates that Vuforia's average time is less than a second. Catchoom detects the target image, having only 20% uncovered, in little over two seconds and faster given more visibility. ARLab and metaio need at least 40% of visible area before starting detection, while ARToolKit cannot detect anything under 80% visibility.
Vuforia and metaio have very similar times when dealing with noise. However, Vuforia detects a 200% level noise target image while metaio goes as high as 70%. Fig. 11 displays the average testing result for the 10 and 30% noisy targets on the left and 50, 70 and 90% noise levels on the right. The fastest times are still recorded by metaio while ARLab is the slowest detector. Four of the six frameworks have a detection time over one Table II provides the minimum and maximum distances supported by each framework. Because there are too many distances to be displayed in a readable graph, Fig. 12 depicts the average over all supported distances per framework. Vuforia is portrayed as the fastest framework when it comes to the distance between the testing device and the target image. Metaio should be on the second place, if not on first place, considering that its average testing time increased because of the extra PAPER EVALUATION OF AUGMENTED REALITY FRAMEWORKS FOR ANDROID DEVELOPMENT effort required to detect the target from 240cm away. The worst timing is registered by the ARLab framework. Fig. 13 demonstrates the average time needed by a framework to detect the target on bright colours versus a target placed against a dark coloured background. The left graph illustrates the average over all testing times per framework, and the right one details the two possible cases. D'Fusion supports both bright and dark backgrounds in a constant manner. D'Fusion and Vuforia have similar times dealing with dark contrasts, while Vuforia performs slightly better in the case of brighter backgrounds. Vuforia is the only framework that has stored a better time on bright backgrounds than a darker one. As both graphs can attest to it, metaio is the leader of the background criterion.
B. Target Criteria
We tested both the original target image and its grayscale version in order to compare the testing times as shown in Fig. 14. Metaio recorded the lowest testing time, while three other frameworks tested in under a second. Catchoom and ARLab have recorded times a little over a second. As it can be seen, coloured or gray, the detection time is roughly the same. A slight difference can be observed at ARLab.
All frameworks passed the contrast ratio test, nevertheless not without difficulties as Fig. 15 concludes. The left graph illustrates the default target testing times (contrast level 0) and the -50 contrast level. The right side shows the average times for the 50 and 100 contrast level target images. ARLab tested poorly and metaio holds the Metaio is the only framework that supports all six aspect ratio changes. ARLab, on the other hand, detects nothing. Vuforia is the closest framework to metaio. Nevertheless, metaio still has the best times and covers all constraints. Fig. 17 shows the average results for the horizontal aspect ratio on the left and the considered vertical ratios on the right.
All materials are supported by every framework, with the exception of ARToolKit, which was not tested on glass (technical difficulties). The left graph in Fig. 18 denotes the overall average for each framework and the right one breaks down this average by the material types: glass, glossy paper and plastic. ARLab has the best time
V. DISCUSSION
A number of different constraints have been tested on the evaluated frameworks. These constraints raise difficulties for some frameworks while others overcome them easily.
A. Scenarios
Finally, we defined a number of use cases called scenarios. A scenario is a collection of criteria considered for a specific context by acknowledging their importance through weights.
Scenario 1: Consider an indoor app for visualizing large objects like furniture, home furnishings, and appliances. A target image would be placed in the spot where the real object would be, and the virtual object is displayed on top of the target. A couple of constraints that must be considered include: • How near or far the user can be from the target.
• Having different viewpoints for a better outlook of the virtual object.
PAPER EVALUATION OF AUGMENTED REALITY FRAMEWORKS FOR ANDROID DEVELOPMENT
• Occluding the virtual object by real objects if necessary. • A correct alignment between the real scene and the virtual object. • Using more than one target for a better placement of the virtual object, at the right place and at the right scale. • Seeing the virtual object even when the target is not visible anymore. For this scenario, metaio and Vuforia score the highest points. Catchoom is left behind because it does not support extended tracking.
Scenario 2: All target criteria are considered for ranking the frameworks when dealing with a magazine app scenario. An image in the magazine represents a target image for the magazine app. By hovering over with a smartphone the virtual content is revealed to the user. More constraints are considered together with: • Gray coloured images and strong contrast images. • Small size images.
• Images printed on different paper types.
• Recognize images given that they are not completely visible. • Text detection to support the images. Catchoom and ARLab have a better grasp of the environmental criteria than metaio and D'Fusion. However, metaio supports text detection and catchoom does not.
Scenario 3: The two use cases presented above are indoors scenarios. Now we also consider outdoors scenarios and suppose a company comes up with a new marketing idea and uses AR for posters creatively located on the side panels of bus shelters. A user can either be at the bus stop looking at the poster or notice it from inside a moving bus and try to decipher its message. An outdoors scenario focuses on the environmental criteria along with: • Sudden light changes, such as the sun hiding behind the clouds. • How far away the user can be from the target.
• Deterioration of the poster in time due to rain, wind, sun exposure and so on. • A dynamic background such as people walking behind it or cars driving by. • How much of the poster is visible from where the user sits. • Fast moves as the user passes by in the bus.
• Once the poster is recognized, the virtual content is persistent even though the user is in a moving bus. Vuforia has the lead in this scenario, followed by catchoom. Vuforia provides better support than catchoom for performance and usability constraints.
Scenario 4: Some supermarkets today use AR for promoting their products. Passing by a shelf with various products, the customer can point the camera to the boxes, select a product and initiate a game. For example bursting some bubbles, such that when a predefined number of busted bubbles is reached, the customer wins a discount for the chosen product. A couple of features must be supported and some requirements must be met, such as: • Support for more than one target image per frame, such that the customer can choose which product he wants to play for. • A blurred image would make it hard for the customer to play the game. • The target image can be on a cereal box or a label on a wine bottle. • Flash might be needed if there is not enough light in the store for the app to recognize the target. • The label can be torn or the image on the cereal box can be ripped. Vuforia gains the highest score, closely followed by metaio. The most noticeable difference is given by the support for fast moves; from the tests performed it has been noticed that metaio does not support fast moves.
Scenario 5: The next two use cases require special features, such as text recognition and face detection. An app exists today for translating text from English into Spanish and several other languages. Imagine you are visiting a foreign country in which you do not know the language and need help getting around. This app can be used to translate signs, window ads, and menus. An important feature that the Augmented Reality framework must support is text detection, as well as: • The contrast between the text and the background where it is written. • Different materials on which the target is printed, glossy paper, metal plates, glass and others. • Handle flickering as a constant flicker of the translated word would make it difficult to read. • How far away the user can be from the target.
• Contamination when dealing with outside signs. Metaio and Vuforia register the best results for the relevant environmental criteria while no framework distinguishes itself when dealing with target criteria. Scenario 6: A number of applications already exist for virtually trying on products such as glasses, hats and order them online. Take sunglasses, for example. By using such an app, the customer can browse through the catalogue, choose a pair of glasses, switch between available colours and the sunglasses are projected on the image of the face looking back from the phone. The most important feature that must be supported is face tracking, together with: • A static background for easier face detection.
• A constant light source or the colour of the skin keeps changing. • Fast moves lead to losing the tracking.
• The possibility of switching to the front camera. D'Fusion is the framework that fits best the constraints imposed by such a use case, followed by metaio and Vuforia. Vuforia does not support face tracking, thus loosing points, while D'Fusion beats metaio by supporting fast moves. Table III shows the recommended framework for each of the described use cases.
B. Findings
Testing the frameworks on the default size target from various distances brought some interesting findings. Up close, 10cm away from the target image, the target is in most cases not completely visible. However, ARToolKit and ARLab recognized the targets. This is surprising because ARToolKit needs an 80% visible area of the target, while Vuforia only requires 10%, and still could not recognize the target up close. From these findings, it is concluded that the size of the target image and the distance to it are not strongly correlated.
The light intensity tests revealed a weak ARToolKit tracker when dealing with sudden change in light conditions or a semi dark environment. Each of the considered four sequences for testing light intensities instigates environmental issues. In each case four out of the six frameworks show signs of extra work.
Another surprising result was registered by Vuforia, which can detect a target image with a 200% level of noise. Most of the frameworks can still overcome a 70%, some even 90%, but 200% is a worth mentionable achievement.
VI. CONCLUSION
This paper presented a methodic technique for evaluating open AR frameworks for Android development. A number of evaluation criteria are defined and the results published. As a prototype we developed a test app which is implemented using Eclipse with Android Development Tools (ADT). The prototype can actively test the integrated frameworks and save the testing times in a local database for further comparison.
We would like to state that no AR framework is better than another, each having its advantages and disadvantages. In some circumstances, given a set of constraints, a framework can outperform others.
In future work, we will concentrate our attention on optional features support such as multi-targets, face tracking and text detection. The app itself can be further developed to become a powerful tool of evaluation.
For a more detailed documentation of the presented methodology and results can be found in [7]. | 5,894 | 2014-10-07T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Bicyclo[2.2.1]hept-2-en-7-yl 4-bromobenzoate
The structure of the title compound, C14H13BrO2, which contains a norbornenyl group and a 4-bromobenzoate ester at the single C-atom bridge, has been redetermined [see McDonald & Trotter (1965 ▶). Acta Cryst. 19, 456–463] to modern standards to establish high-precision geometrical data to compare with norbornyl and other tetracyclic 4-bromobenzoates. Possible structural evidence is sought to help explain solvolytic reactivities.
Related literature
For the previous structure determination of the title compound, see: McDonald & Trotter (1965). For a discussion, see: Coots (1983); Lloyd et al. (1995). For an analogous pnitrobenzoate structure, see: Jones et al. (1992). For related tetracyclic 4-bromobenzoate structures, see: Lloyd et al. (2000) and references therein. For a theoretical discussion, solvolysis rates and molecular orbital calculations, see: Chow (1998). For further synthetic details, see: Coots (1983); Lloyd et al. (1993). Considerably improved precision is obtained for the present, low temperature structure of the title compound 1 over earlier structures. An ORTEP-3 drawing of 1 is shown in Fig. 1, and a cell packing diagram is shown in Fig. 2.
The 2:4 angle shows that C2 and C3 are pyramidalized similarly as in other norbornenyl containing 4-bromobenzoate structures. The larger 1:2 and smaller 1:3 angle in 1 versus 2 may be a consequence of substituting an etheno bridge for an ethano bridge. The C1-C2, C2=C3, and C3-C4 bonds are shorter in 1 versus 2 as expected, but C1-C7 and C4-C7 are longer in 1 than in 2. These longer bonds possibly compensate for what might otherwise be even closer C2···C7 and C3···C7 intramolecular contacts in 1 (Table 3). A wider 1:2 angle in 1 versus 2 should also help relieve these contacts.
Experimental
Anti-7-norbornenyl 4-bromobenzoate (title compound 1) was prepared (Coots, 1983) from anti-7-norbornenol, which was made from 7-norbornenone reduction (Lloyd et al., 1993, and references therein). In 25 ml of freshly distilled (from KOH under N 2 ) dry pyridine was dissolved 0.700 g of sublimed (373 K, 1600 Pa) anti-7-norbornenol and 1.80 g of sublimed (373 K, 7 Pa) 4-bromobenzoyl chloride was added with stirring. The mixture was heated to 373 K for 5 min and set in a refrigerator overnight. The mixture was poured into 100 ml of cold water, and extracted three times with 100 ml of ether.
Refinement
A colorless prism shaped crystal 0.35 × 0.33 × 0.30 mm in size was mounted on a glass fiber with traces of viscous oil and then transferred to a Nonius KappaCCD diffractometer equipped with Mo Kα radiation (λ = 0.71073 Å). Ten frames of data were collected at 150 (1) K with an oscillation range of 1 °/frame and an exposure time of 20 sec/frame (Nonius, 1998). Indexing and unit cell refinement based on all observed reflections from those ten frames, indicated a monoclinic P lattice. A total of 5345 reflections (Θ max = 27.46°) were indexed, integrated and corrected for Lorentz, polarization and absorption effects using DENZO- SMN and SCALEPAC (Otwinowski & Minor, 1997). Post refinement of the unit cell gave a = 14.0633 (2) Packing diagram for the title compound.
Figure 3
Compounds 1 and 2. Special details Experimental. The program DENZO-SMN (Otwinowski & Minor, 1997) uses a scaling algorithm which effectively corrects for absorption effects. High redundancy data were used in the scaling program hence the 'multi-scan′ code word was used. No transmission coefficients are available from the program (only scale factors for each frame). The scale factors in the experimental table are calculated from the 'size′ command in the SHELXL-97 input file. Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 , conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > 2σ(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 1,047.2 | 2012-06-27T00:00:00.000 | [
"Chemistry"
] |
Effects of Magnetic Field and an Endoscope on Peristaltic Motion
The Problem of peristaltic transport of a magnetic fluid with variable viscosity through the gap between coaxial tubes where the outer tube is nonuniform with sinusoidal wave traveling down its wall and the inner tube is rigid. The relation between the pressure gradient and friction force on the inner and outer tubes is obtained in terms of magnetic and viscosity parameter. The numerical solutions of pressure gradient, outer friction and inner friction force, and flow rate are shown graphically.
Introduction
The purpose of this paper is an attempt to understand the fluid mechanics in a physiological situation with the presence of an endoscope placed concentrically. The pressure rise, peristaltic pumping, augmented pumping, and friction force on the inner tube endoscope and outer tube are discussed by Srivastava et al. 1 and Siddiqui and Schwarz 2 . Latham 3 investigated the fluid mechanics of peristaltic pump and since then, other work on the same subject has been followed by Burns and Parkes 4 . Barton and Raynor 5 have been studied the case of a vanishingly small Reynolds number. Lykoudis and Roos 6 studied the fluid mechanics of the ureter from a lubrication theory point of view. Zien and Ostrach 7 have investigated a long wave approximation to peristaltic motion, and the analysis is aimed at the possible application to urine flow in human ureters. Roos and Lykoudis 8 studied the effect of the presence of a catheter upon the pressure distribution inside the ureter. Ramchandra and Usha 9 studied the influence of an eccentrically inserted catheter on the peristaltic pumping in a tube under long wavelength and low Reynolds numbers approximations. Abd El Naby and El Misery 10 studied the effect of an endoscope and generalized Newtonian fluid on 2 Journal of Applied Mathematics peristaltic motion. Gupta and Sheshadri 11 studied peristaltic transport of a Newtonian fluid in nonuniform geometries. L. M. Srivastava and V. P. Srivastava 12 have investigated the effect of power law fluid in uniform and nonuniform tube and channel under zero Reynolds number and long wavelength approximation. Provost and Schwarz 13 have investigated a theoretical study of viscous effect in peristaltic pumping and assumed that the flow is free of inertial effect and that non-Newtonian normal stresses are negligible. Boehme and Friedrich 14 have investigated peristaltic flow of viscoelastic liquids and assumed that the relevant Reynolds number is small enough to neglect inertia forces, and that the ratio of the wavelength and channel height is large, which implies that the pressure is constant over the cross section. El Misery et al. 15 have investigated the effect of a Carreau fluid in peristaltic transport for uniform channel. Elshehaway et al. 16 studied peristaltic motion of generalized Newtonian fluid in a nonuniform channel under zero Reynolds number with long wavelength approximation. Most of studies on peristaltic motion, that assume physiological fluids behave like a Newtonian fluid with constant viscosity, fail to give a better understanding when peristaltic mechanics is involved in small blood vessel, lymphatic vessel, intestine, ducts efferent of the male reproductive tracts, and in transport of spermatozoa in the cervical canal. According to Haynes 17 In the view of above discussion, the effect of magnetic fluid with variable viscosity through the gap between inner and outer tubes where the inner tube is an endoscope and the outer tube has a sinusoidal wave traveling down its wall is the aim of present investigation.
Formulation and Analysis
Consider the two-dimensional flow of an incompressible Newtonian fluid with variable viscosity through the gap between inner and outer tubes where the inner tube is an endoscope and the outer tube has a sinusoidal wave traveling down its wall. The geometry of the two wall surface is given by the equation: where a 1 is the radius of endoscope, a 20 is the radius of the small intestine at inlet, b is the amplitude of the wave, λ is the wavelength, t is time, and c is the wave speed.
In the fixed coordinates r, z , the flow in the gap between inner and outer tubes is unsteady but if we choose moving coordinates r, z which travel in the z-direction with the same speed as the wave, then the flow can be treated as steady. The coordinate's frames are related through p is the pressure, μ r is the viscosity function, σ is Electric conductivity, and B 0 is applied magnetic field. The boundary conditions are written as follows: w −c at u 0, r r 1 , r r 2 , u 0 at r r 1 .
2.6
We introduce the nondimensional variable and the Reynolds number Re and the wave number δ introduced.
2.8
M σ/μB 0 a 20 is Hartmann number and σ is Electric conductivity With the dimensionless boundary condition, u 0, at r r 1 .
2.9
Using the long wavelength approximation and neglecting the wave number δ 0 , one can reduce Navier-Stokes equation
2.10
The instantaneous volume flow rate in the fixed coordinate system is given by Q 2π where r 1 is a constant and r 2 is a function of Z and t.
6
Journal of Applied Mathematics Using 2.14 we obtain the relationship between dp/dz and F as follows: Solving 2.22 for dp/dz, we obtain dp dz The pressure rise ΔP λ and friction force on inner and outer tubes F
2.25
The effect of viscosity variation on peristaltic transport can be investigated through 2.25 for any given viscosity function μ r . For the present instigation, we assume viscosity variation in the dimensionless form following Srivastava where α is viscosity parameter. The assumption is reasonable for the following physiological reason. Since a normal person of animal or similar size takes 1 to 2 L of fluid every day, another 6 to 7 L of fluid are received by the small intestine daily as secretion from salivary glands, stomach, pancreas, liver, and the small intestine itself. This implies that concentration of fluid is dependent on the radial distance. Therefore, the above choice of μ r e −ar is justified.
Journal of Applied Mathematics 7 Substituting 2.27 into 2.21 , and 2.23 , and using 2.24 , we obtain dp dz
Results and Discussions
The dimensionless pressure rise P λ and the friction forces on the inner and outer tube for different given values of the dimensionless flow rate Θ, amplitude ratio φ, radius ratio ε, Hartmann number M, and viscosity parameter α are computed using the 2.29 to 2.31 . As the integrals in 2.29 to 2.31 are not integrable in the closed form so they are evaluated using Figure 1 Shows the pressure rise against the flow rate; here it is observed that the pressure increases with the increase of flow rate for different values of radius ratio ε 0.32, ε 0.38, and ε 0.44 and pressure decreases for the viscosity α 0.0 and α 0.1. Figure 2 shows that as the viscosity α increases the pressure decreases. And for the different values of amplitude ratio φ 0.0 and φ 0.4, the pressure decreases. Figures 3 and 4 show the friction force on the outer tube for different values of radius ratio and amplitude ratio; here it is observed that as radius ratio increases the friction force also decreases and they are independent of radius ratio at certain values of the flow rate for the values ϕ 0.4 and α 0.0 and α 0.1 . In Figures 5 and 6, it is noticed that the friction force on the inner tube endoscope and on outer tube is plotted against the flow rate for different values of amplitude ratio ϕ and for different values radius ratio ε 0.32, ε 0.38, and ε 0.44 and for the values of viscosity α 0.0 and α 0.1. It is noticed that as the amplitude ratio ϕ increases the friction force on the outer tube and inner tube decreases and as the viscosity increases the friction force on the outer tube and inner tube decreases.
From Figure 7, it is noticed that the pressure increases for different values of magnetic field M 1, 3, and 5. From Figures 8 and 9, it is noticed that the friction force decreases on endoscope and on the outer tube as magnetic field increases. | 1,943.2 | 2011-03-28T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
Nova Sagittarii 1998 (V4633 Sgr): a permanent superhump system or an asynchronous polar?
We report the results of observations of V4633 Sgr (Nova Sagittarii 1998) during 1998–2000. Two photometric periodicities were present in the light curve during the three years of observations: a stable one at P (cid:136) 3 : 014 h, which is probably the orbital period of the underlying binary system; and a second one of lower coherence, approximately 2.5 per cent longer than the former. The latter periodicity may be a permanent superhump, or, alternatively, the spin period of the white dwarf in a nearly synchronous magnetic system. A third period, at P (cid:136) 5 : 06 d, corresponding to the beat between the two periods was probably present in 1999. Our results suggest that a process of mass transfer has taken place in the binary system since no later than two-and-a-half months after the nova eruption. We derive an interstellar reddening of E (cid:133) B 2 V (cid:134) , 0 : 21 from our spectroscopic measurements and published photometric data, and estimate a distance of d , 9 kpc to this nova.
I N T R O D U C T I O N
Nova Sagittarii 1998 (V4633 Sgr) was discovered on 1998 March 22 by Liller (1998). Brightest visual magnitude of 7.4 mag was reported by Jones (1998) on March 23.7. Liller & Jones (1999) classified V4633 Sgr as a fast nova, with t 3 < 35 d for the visual observations, and < 48 d in charge-coupled device (CCD) broadband V. An early spectrum of V4633 Sgr revealed slow expansion velocities and massive presence of iron, implying a Fe II classification (Della Valle, Pizzella & Bernardi 1998). Skiff (1998) reported no definite object at the location of V4633 Sgr in the Palomar Sky Survey, setting a lower limit of 12 mag on the outburst amplitude.
Infrared spectrophotometry indicated that V4633 Sgr was in the early stages of its coronal phase in 1999 August (Rudy et al. 1999), and revealed strong coronal lines, and a relatively low reddening in 2000 July (Rudy et al. 2000). Lipkin, Retter & Leibowitz (1998) reported a photometric modulation in the light curve (LC) of V4633 Sgr, with a period of 0.17330 or 0:14765^0:00011 d, which are 1-d aliases of each other. The modulation was detected eleven weeks, and possibly as early as six weeks, after the eruption. Later on, Lipkin & Leibowitz (2000) found that another 1-d alias, at 0.128 791 d, is in fact the dominant periodicity in the LC. They also reported the discovery of a second photometric periodicity at 0.125 573 d, modulating the brightness of the star along with the first one during 1999 and also in 1998. In this paper we describe in detail the photometric properties of the V4633 Sgr during the 1998-2000 seasons. We also report on a few spectroscopic observations that we performed on this star, and on the implications of these data on some properties of this system.
Photometry
We performed photometry of V4633 Sgr during 34 nights in 1998, 36 nights in 1999, and 26 nights in 2000, using the Tektronix 1K back-illuminated CCD, mounted on the 1-m telescope at the Wise Observatory (WO). Details on the telescope and instrument are given by Kaspi et al. (1995).
Photometry was conducted either through an I filter, or switching sequentially between I and V, or between I, V and B filters. Logs of the observations are given in Appendix A.
Photometric measurements on the bias-subtracted and flat-fieldcorrected images were performed using the NOAO IRAF 1 DAOPHOT package (Stetson 1987). Instrumental magnitudes of V4633 Sgr, as well as of a few dozen reference stars, depending on image quality, were obtained for each frame. A set of internally consistent nova magnitudes was obtained using the WO reduction program DAOSTAT (Netzer et al. 1996). Good seeing conditions on 1998 September 19 were used to calibrate the magnitudes of V4633 Sgr, as well as of about a dozen nearby comparison stars. We used the calibrated comparison stars to convert all the measurements of V4633 Sgr into calibrated magnitudes.
In our programme we obtained 84 nights of continuous time series, accumulating a total of 8250 data points in I, 2392 in V and 756 in B.
On 2000 August 4, 20 and 21, we observed V4633 Sgr in the 'fast photometry' mode (Leibowitz, Ibbetson & Ofek 1999). On the first night we observed for 2.5 h, with time resolution of 10 s, using no filter ('clear'). On the other two nights, we observed through an I filter, with time resolution of 20 s. The data were reduced in the manner described above.
Spectroscopy
V4633 Sgr was observed spectroscopically at WO on four nights: 1998 July 5 and August 30, and 1999 May 2 and July 6. The spectra were taken with the WO Faint Object Spectrograph and Camera (FOSC) described in Brosch & Goldberg (1994), and operated at the f / 7 Ritchey-Chrètien focus of the WO 1-m telescope. The Tektronix 1K CCD was used as the detector. We applied the method of long-slit spectroscopy whereby both V4633 Sgr and a bright comparison star were included in the slit (see for example Kaspi et al. 2000). The comparison star used was non-variable to within , 2 per cent. We used a 10-arcsec wide slit along with a 600 line mm 21 grism, yielding a dispersion of 4 Å pixel 21 (, 8 Å resolution). On the first two nights the spectrograph was set to cover the spectral range , 3600-7200 A, while in the last two nights we covered the range , 4000-7800 A. Two exposures of the spectrum of the nova were taken on each night.
Reduction of the bias-subtracted and flat-field-corrected spectra was carried out in the usual manner using IRAF with its SPECRED and ONEDSPEC packages. The spectra were dispersion-corrected using a He-Ar arc spectrum, which was taken on each night in between the pair of nova spectra. Each spectrum of the nova was divided by the spectrum of the comparison star observed simultaneously through the same slit. The two sets of nova/star spectrum ratios obtained on each night were compared to each other and were found to differ by no more than , 10 per cent. The average of the two ratios was then taken as the representative ratio for that night. The spectra were calibrated to an absolute flux scale by multiplying each mean nova/star ratio by a flux-calibrated spectrum of the comparison star. This spectrum, in turn, was fluxcalibrated using the WO standard sensitivity function and extinction curve. These do not change appreciably from night to night, and they are updated from time to time at WO using spectrophotometric standard stars. The absolute flux calibration has an uncertainty of , 10 per cent, but the relative flux uncertainties within each spectrum are of order 2-3 per cent.
DATA ANALYSIS
Light curves of V4633 Sgr from discovery to 2000 July are presented in Fig. 1. The visual LC was compiled using data taken from VSNET. 2 The I, V and B LCs were compiled using data obtained in our programme. Note that the apparent small vertical lines in the I LC are not error bars but dense individual successive points observed in a single night. Bars representing the observational errors in our measurements are below the resolution limit of this figure.
The visual and V LCs show an apparent change in slope, becoming more moderate about three months after maximum light (Fig. 1). Most of the 1998 photometry was conducted around the time the slope changed. Shortly after, in 1998 July-August, the brightness of the star deviated systematically from the long-term trend given by the fitted curve, forming an apparent bump in the LC (Fig. 1, inset).
A panel of sample I-band LCs from different epochs is shown in Fig. 2. Nightly LCs show almost no visible variation until 1998 May. Fragmented LCs in May show some variation, while in June modulations on a time-scale of , 3 h are clearly visible. In July and August, the variations took other forms. In a few nights the variations are quasi-periodic but on a somewhat different timescale than in June. On a few other nights, the brightness of the star varied monotonically during the entire nightly run. In all our subsequent observations in 1999 and 2000, the variations returned to the oscillation mode of 1998 June, albeit with an ever-increasing amplitude.
The 1999 light curve
We first discuss the data of 1999 since this season is better sampled than the other two. Fig. 3(C) shows the normalized power spectrum (PS) (Scargle 1982) of our 1999 I-band data, after eliminating the long-term decline by subtracting a fourth-degree polynomial from the 1998-1999 LC.
To derive the quoted periods, we performed a grid search in the x 2 space, fitting to the data a polynomial term representing the secular decline of the nova and a pair of periods near the values of P 1 and P 2 obtained from the PS. The grid was then examined to find the pair of periods yielding the lowest value in the x 2 space.
The errors of the two periods correspond to a 1s confidence level, and were derived by a sample of 2000 bootstrap simulations (Efron & Tibshirani 1993).
We used the tests described in Retter, Leibowitz & Kovo-Kariti (1998) to confirm the independence of the two periodicities. Similar results were also obtained from the PSs of our 1999 V and B data sets.
At the right-hand side of Fig. 3(C), the first overtone of P 2 is detected at 15.928 d 21 , well above the noise level in its vicinity. Such a feature is expected, as a result of the asymmetric shape of the signal (see Section 3.6).
The lower end of the 1999 PS ( Fig. 3C) is dominated by a structure of interdependent peaks, the highest of which, designated P 3 , is at 0.1976 d 21 (5.06 d) with a full amplitude of 0.096 mag. This periodicity corresponds to the beat period between P 1 and P 2 .
The signal was found to be independent of P 1 and P 2 . It was not detected in our V and B LCs. However, these data sets are of lower quality than the I data set, and span a shorter time. Owing to relatively high noise of the PS near P 3 and the fragmented nature of the LC on time-scales of a few days, the reliability of P 3 should be addressed with some caution, until it is confirmed by further observations.
In the 1999 I-band PS, the 1-d alias of P 2 , at 6.961 d 21 , is stronger than the peak associated with P 2 (Fig. 3C). The same result occurred in a few other tests we conducted, for various subsets of the data, as well as in the different bands and data sets, and using various detrending methods. Similarly, in a small number of tests the signal at 6.76 d 21 , or the one at 8.76 d 21 , dominated the alias structure of P 1 , rather than the one at 7.76 d 21 .
These results introduce some uncertainty into our selection of 7.76 d 21 and 7.96 d 21 for P 1 and P 2 , respectively. However, we consider this selection firm, because of the dominance of these periods in the bulk of our tests. Further support to this selection comes from the presence in the PS of the first overtone of 7.96 d 21 , and the absence of any noticeable signal at the frequency of the expected first overtone of 6.96 d 21 (Fig. 3C, inset). The presence in the PS of P 3 -the beat of 7.76 d 21 and 7.96 d 21 -is yet another strong argument for selecting these two periods.
The 1998 May -June light curve
The PS of the I-band data obtained during six nights in 1998 June (Fig. 3A) resembles that of 1999. Two peaks at 7.782 and 7.999 d 21 , each of which is the centre of a 1-d, 1 2 -d, 1 3 -d (etc.) alias pattern, dominate the PS. The group of peaks at the lower end of the PS are of questionable reliability owing to the short time-span of the data set, and since they are sensitive to the method used to detrend the strongly declining LC. The values of P 1 and P 2 , derived by simultaneously fitting two periods and a linear term to the LC, are 0:12893^0:00015 d and 0:12523^0:00033 d. A peak at 15.52 d 21 (Fig. 3A) probably corresponds to the first overtone of P 1 , which is expected at this frequency.
Adding the fragmentary time series obtained in 1998 May to the June data, the power of the two peaks corresponding to P 1 and P 2 increased in the combined PS (not shown), relative to the PS of June only. Indeed, examination of the short LC of May revealed a hump that is in fair agreement with the modulations of the June LC, extrapolated to the times of observation in May. Thus, it is likely that the light of the star was modulated by at least one of the two periodicities as early as 1998 May.
The 1998 July-August light curve
The PS of the V-band data gathered during 13 nights in 1998 July-August (Fig. 3B) is different in its structure and details from the former two PSs. A broad excess of power in the vicinity of 5 d 21 dominates the PS, but no obviously significant peak stands out above the wide hump. Looking for the known periodicities, a peak at 7.756 d 21 is found (marked with an arrow in Fig. 3B). However, this peak is well within the noise level and there is a high a priori probability for its presence in the PS as a result of random coincidence. The July-August data are therefore consistent with an LC that is not significantly modulated by either of the periods P 1 or P 2 .
To test further the difference between the July-August data and that of June, we constructed an artificial LC by extrapolating the signals of the June LC on to the actual times of observation of the July-August LC. Comparing the observed LC and the artificial one, there was only little resemblance between the two in the phases and shapes of the modulations. Also, in contrast to the PS of the actual data ( Fig. 3B), the periods of 1998 June were clearly detectable in the PS of the artificial LC.
The 2000 light curve
The PS of the I-band data of 2000 ( Fig. 3D) is dominated by the signal of P 1 at 7.795 d 21 . The signal of P 2 , at 7.964 d 21 , is obscured by the alias structure of P 1 , but becomes the dominant feature in the residual PS once P 1 is removed from the data. A weak signal at 15.929 d 21 is probably the first overtone of P 2 . A simultaneous fit of two periods and a linear term to the data yields the best-fitting values P 1 ¼ 0:128292^0:000007 d and P 2 ¼ 0:125570^0:000010 d.
Finally, we looked for periodic variations in the data accumulated in the three 2000 nights of fast photometry (Section 2.1). We found no sign in the data for any periodicity in the range of a few tens of seconds to a few tens of minutes.
Stability of the signals
The value of P 2 measured in 2000 is just 0.015 per cent smaller than in 1999. The difference amounts to only 1.2s of the uncertainty in the derived value of the periods themselves. The two values are therefore consistent with the notion that P 2 is the same in both years. This is not the case for P 1 . The value measured in 2000 is 0.3 per cent smaller than in 1999, and the difference is highly significant, more than 30s.
To examine further the stability of the periodicities, we measured P 1 and P 2 in six different data sets during the 1998-2000 time interval, in the manner described in Section 3.1. The measured values of P 2 are scattered around the average value of 0:12559 d, although a linear fit yields a formal rate of period change _ P 2 ¼ ð21:7^0:7Þ Â 10 27 (Fig. 4, bottom panel). We consider this result as consistent with a constant period. The slope for P 1 is highly significant: _ P 1 ¼ ð21:26^0:05Þ Â 10 26 (Fig. 4, top panel).
Waveforms and amplitudes
The waveforms of P 1 and P 2 in 1998 June, 1999 and 2000 are shown in Fig. 5. In each case, we 'pre-whitened' the LC before folding by removing the signal of the other periodicity, as well as a polynomial term representing the decline in the brightness of the nova. In the 1999 data set P 3 was subtracted as well. The waveform of P 1 was symmetric during the three observational seasons. In 1998 June, a clear dip of 0.012 mag was imposed on the primary maximum. In 1999 the waveform transformed into a nearly sinusoidal shape, which was maintained also in 2000 (Fig. 5, left panels). The peak-to-peak amplitude of P 1 was 0.030 mag in 1998 June, 0.105 mag in 1999 and 0.25 mag in 2000. P 2 maintained an asymmetric shape during the three observational seasons, with a slow rise and a fast decline (Fig. 5, right panels). The peak-to-peak amplitude of P 2 was 0.019 mag in 1998 June, 0.100 mag in 1999 and 0.19 mag in 2000.
The waveforms of P 1 and P 2 in 1999 in V and B, as well as those obtained from the much limited V-band data in 1998 June and in 2000, were similar to the ones in I.
The limited data in V and B do not allow an accurate tracking of the amplitudes of the two signals. However, some information on the change in amplitude may be gained by inspecting the secular change in nightly variation, e.g. by following the secular change in the standard deviation (STD) of nightly LCs. The variation in I steadily increased by about 0.05 mag yr 21 during 1998-2000, consistent with the increasing amplitudes of the two periodicities described above. The STD of the V magnitudes has not changed significantly in 1998-1999, maintaining a value of , 0.027 mag, and increased in 2000 to , 0.073 mag. In B it decreased, from , 0.040 mag in 1998 to , 0.023 mag in 1999. One should bear in mind that these trends reflect not only changes due to the brightness variations of sources within the binary system of V4633 Sgr but also some varying contribution of the nebula to the total light of the source. Thus, during 1998-1999 the contribution of the nebula in the V band increased from 40 per cent to about 70 per cent (Section 3.7), implying that the amplitude of the variations in the stellar V continuum was in fact larger than indicated by the STD values.
Spectroscopy
Four spectra obtained at WO in 1998-1999 are plotted in Fig. 6. Fluxes of a few of the emission lines are shown in Table 1 (Williams et al. 1991;Williams, Phillips & Hamuy 1994). The 1998 July 5 spectrum is probably classified A n , and the other three spectra are probably in the A o phase.
In each of the spectra we calculated the integrated V magnitude of the star by convolving the observed spectral energy distribution with the transmission curve of the V filter. The results agreed with the values obtained from photometry. The spectra also allowed us to subtract from the integrated V brightness the contribution of the emission lines that originate mostly in the nebula. As expected, when considered alone, the V continuum faded faster than the integrated V magnitude, with V continuum 2 V total ¼ 0:55, 0.92, 1.28 and 1.26 mag on 1998 July 5, 1998 August 30, 1999 May 5 and 1999 July 6, respectively. June I data set. The peak marked in the inset frame probably corresponds to the first overtone of P 1 . (B) The 1998 July-August V data set. A night of monotonic trend was excluded. The data were pre-whitened by subtracting the mean magnitude from each night. An arrow marks a peak at 7.756 d 21 , which is the seventh highest peak in the PS. (C) The 1999 I data set. The low end of the PS is dominated by P 3 , at 0.1976 d 21 . The first overtone of P 2 , at 15.928 d 21 , is marked in the inset. The first overtone of the 6.963 d 21 1-d alias of P 2 is expected at 13.926 d 21 (marked by a dashed arrow in the inset). However, no noticeable signal is detected in the vicinity of this frequency. (D) The 2000 I data set. The first overtone of P 2 is marked in the inset. The photometric data of the three-year observations of V4633 Sgr confirm the presence of two independent periodicities in the LC of V4633 Sgr: P 1 ¼ 3:08 h ¼ 0:1285 d and P 2 ¼ 3:014 h ¼ 0:125576^0:000009 d. We suggest that P 2 is the orbital period of the underlying binary system, as its behaviour during the three years of photometry is consistent with a stable period. In addition, during the photometric monitoring, the waveform of the signal has maintained its shape. The asymmetric shape of the waveform is rather unique for orbital modulations; none the less, we note its close similarity to the shape of the orbital modulation of V1974 Cyg in 1996 (Skillman et al. 1997). The 3.01-h period is well situated within the range of orbital periods of cataclysmic variables. To confirm this suggestion, radial velocity measurements should be carried out. In the following we shall consider P 2 to be the orbital period, P orb , of the binary system.
The second periodicity
It is more difficult to interpret the longer period, P 1 . This signal is characterized by the following traits: (1) it is , 2.5 per cent longer than the binary period; and (2) it is at least an order of magnitude less stable than P orb , decreasing by , 0.3 per cent during 1999-2000, with _ P , 210 26 (Section 3.5). Two possible interpretations come to mind. One is that the origin of the P 1 variation is the rotation of the white dwarf (WD). The modulation may arise, for instance, from aspect variation of a hotspot on or near the surface of the WD. The small deviation of P 1 from the orbital period would then suggest that V4633 Sgr belongs to the asynchronous polars group (BY Cam stars, hereafter APs). An alternative interpretation is that the origin of P 1 is in an accretion disc in the system, namely, that it is the period of the well-known phenomenon of superhumps (SHs).
In the following two sections we discuss the two interpretations and some of their implications. The data to hand seem insufficient to make a reliable choice between them.
Asynchronous polar interpretation
APs are a subclass of magnetic cataclysmic variables, sharing many of the properties of polars (AM Her stars), but having a WD that rotates with a period that differs by , 1 per cent from the orbital period. There are four known APs. They are listed in Table 2 along with the major characteristics of their periodicities. In one AP, V1500 Cyg, the asynchronous rotation is clearly associated with its nova eruption in 1975. Two other APs are suggested to have also undergone a recent nova event: V1432 Aql (Schmidt & Stockman 2001) and BY Cam (Bonnet-Bidaud & Mouchet 1987).
The AP interpretation of V4633 Sgr is supported by the monotonic decrease in P 1 , the proposed rotation period of the WD (P rot ), towards synchronization with P orb . A synchronization trend in P rot is expected in APs as a result of the magnetic torque exerted on the WD by the secondary star. Indeed, such a trend was detected in three of the four APs (Table 2). Also, the orbital period of V4633 Sgr, P orb ¼ 3:01 h, is similar to that of three APs ( Table 2). The beat period, P 3 ¼ 5:06 d, detected in 1999 (Section 3.1), may be naturally explained in the AP framework. If a dipole geometry is assumed, pole switching is expected to occur at the beat cycle, modulating the LC at P beat .
However, a simple AP interpretation seems to be inapplicable in V4633 Sgr for the following reasons: (i) The synchronization rate of the proposed P rot is j _ P 1 j , 10 26 -much larger than in APs, where j _ P rot j , 3 Â 10 29 to 4 Â 10 28 (Table 2).
(ii) In V4633 Sgr, P 1 is longer than P orb , while in three of the four APs P rot is shorter. In V1432 Aql, the only AP in which P rot . P orb , the difference is marginal. Even so, the longer P rot poses some theoretical difficulties [we note, however, that Schmidt & Stockman (2001) argue that P rot , P orb is a possible outcome of a nova eruption, in slow novae with strong magnetic fields]. Indeed, an alternative model for this object was proposed by Mukai (1998), in which V1432 Aql is an intermediate polar with a spin period of 1.12 h.
(iii) The difference between the two periods in V4633 Sgr is (Table 2). (iv) The distinctly asymmetric waveform of P orb in V4633 Sgr is hardly that of an eclipsing system (Section 3.6). If there is no disc in the system, as the AP model suggests, the light modulation on the orbital period must be ascribed to the 'reflection' effect. Any simple model of this effect produces symmetric binary LCs.
(v) If the modulation at P beat is caused by pole switching, the latter is expected also to affect P 1 , invoking a phase shift of 1808 twice every beat cycle. This effect should reveal itself both in the PS, reducing the power of the peak associated with P 1 , and in the folded LC of P 1 . However, these effects are not detected.
Few of the distinctive characteristics of V4633 Sgr may be explained in the framework of the AP model if they are attributed to short-term changes taking place in the system in the first few years after the nova outburst. Such an irregular behaviour was observed in V1500 Cyg during the first three years after its outburst. These have been described in detail (e.g. Patterson 1979;Lance, McCall & Uomoto 1988) and interpreted by Stockman, Schmidt & Lamb (1988).
Applying the model of Stockman et al. (1988) to V4633 Sgr, we should assume that the spin of the WD was synchronized with the orbital revolution prior to the nova event. The rapid expansion of the WD's envelope during the first stages of the outburst increased the star's moment of inertia, resulting in a spin-down of the WD by *2.5 per cent. The decrease in P rot in 1998-2000 should be attributed to the contraction of the still expanded envelope of the WD, with the associated reduction in its moment of inertia. Thus, P rot is expected to continue decreasing until the WD finally regains its original radius. Following this, a slower synchronization trend is expected to occur on the magnetic synchronization time-scale of the system. In analogy with V1500 Cyg, if the contraction of the envelope decreases the moment of inertia of the WD by a magnitude comparable to that gained during the nova outburst (Patterson 1979;Stockman et al. 1988), and if the spin acceleration rate maintains its value of 1999-2000 (Section 3.5), the WD would regain its pre-nova dimension around the year 2006.
For an order-of-magnitude calculation, we attribute the change in P 1 in 1998 June -2000 entirely to the contraction of the WD's envelope. We further assume that the WD is a rigid sphere of mass M 1 and radius R 1 , rigidly coupled to a thin shell of mass M ph and radius R ph . Let DR ph and Dv be the changes in the radius and the angular velocity of the WD during a time interval Dt. Conservation of angular momentum requires that Since in 2000 August, R ph $ R 1 , the photosphere radius at time Dt prior to 2000 August is bounded by From the speed class of V4633 Sgr ðt 3 < 42 d, Section 4.5.1) we infer M 1 < 1:1 M ( for the mass of the WD (Kato & Hachisu 1994). As a rough estimate of the mass of the contracting envelope we take M ph , 10 26 M ( (Prialnik 1986;Prialnik & Kovetz 1995). Inserting these values into the above equation together with the observed values of P 1 , we obtain R ph * 71R 1 and 53R 1 in 1998 June and 1999 May, respectively.
The scenario depicted above is considerably different from the one in V1500 Cyg. In particular, in the 1975 nova outburst, the WD and its envelope gained angular momentum through coupling with the orbiting secondary during the common envelope phase, almost resynchronizing the WD's spin with the orbital cycle within a few tens of days after outburst. This is why, in that system, P rot became shorter than P orb as the WD's envelope contracted. In V4633 Sgr such a coupling either did not take place at all, or was much less effective in transferring orbital to spin angular momentum. It is therefore also likely that this system will remain with a spin period longer than the binary period even after the WD regains its pre-outburst dimension.
Some other different aspects in the evolution of V4633 Sgr, such as the apparently larger increase in P rot during the outburst, and the longer time-scale of the envelope contraction, may be attributed to a less massive WD, which is expected to shed more mass during outburst, and regain its original size on a longer time-scale (Kato & Hachisu 1994;Prialnik & Kovetz 1995).
The AP interpretation should be tested against an observational search for evidence for the magnetic nature of V4633 Sgr. This should manifest itself, for example, by strong X-ray radiation and/ or circularly polarized light, modulated by the WD rotation period. So far no such observations (or results) have been reported. 3
Permanent superhump interpretation
Superhumps (SHs) are periodic brightness variations in the LCs of certain subgroups of disc-accreting cataclysmic variables (CVs), with a period a few per cent longer than the orbital period of the binary system (Warner 1995).
Initially, SHs were found in the SU UMa subclass of dwarf novae during superoutburst events. SHs of longer duration, of months and years, are termed 'permanent SHs'. They appear in LCs of CVs with short orbital periods (typically P orb & 4 h; Patterson 1999) and high mass transfer rates, such as nova remnants, nova-like and AM CVn systems (for reviews see Patterson 1999;Retter & Naylor 2000). Superhumps also occur in X-ray binaries (e.g. O'Donoghue & Charles 1996).
The properties of V4633 Sgr make it a good candidate for hosting the SH phenomenon. The 3.01-h orbital period puts V4633 Sgr near the centre of the period interval that contains most of the known SH systems (Patterson 1998).
The observed stable decline in the brightness of the nova is consistent with the presence in the system of an accretion disc, which in the years 1999-2000 is the main source of the optical luminosity, and which is thermally stable. If mass accretion is indeed the main luminosity source, we can estimate its rate using equation (3) of Retter & Naylor (2000). In terms of absolute magnitude it is given by where Ṁ 17 is the mass transfer rate in 10 17 g s 21 , M V is the absolute V magnitude of the disc, M 1 is the mass of the WD, and DM i ¼ 22:5 log½ð1 1 1:5 cos iÞ cos i is a correction to the magnitude due to the inclination angle (i ) of the disc. From the V-band LC (Fig. 1), and the estimated distance and reddening towards V4633 Sgr (Section 4.5.4), we derive M V;2000 , 0:5-1:5. The non-eclipse shape of the LC (Section 3.6) implies that the inclination angle is i # 658. For a M 1 < 1:1 M ( WD, we obtain _ M , ð30-300Þ Â 10 17 g s 21 . The critical mass transfer rate, below which the disc is thermally unstable, is given by Osaki (1996, equation 4 therein), which for P orb ¼ 3:01 h takes the value _ M crit < 1:7  10 17 g s 21 . Thus the observed mass transfer rate in V4633 Sgr is some two orders of magnitude above the critical value, and the disc is indeed thermally stable.
Superhumps are known to be poor clocks. In permanent SH systems, the instability in superhump period is: _ P SH ¼ 10 28 to 5 Â 10 26 (Patterson & Skillman 1994). The value of Ṗ 1 that we found in V4633 Sgr in 1999-2000 is within this range.
The similarity in the shape of the orbital and the superhump waveforms of V1974 Cyg to those of P 2 and P 1 (Skillman et al. 1997) serves as a further support to the SH interpretation.
On the weak side of the SH interpretation stands the value of the period excess e ; ðP SH 2 P orb Þ/P orb . Superhump systems are known to follow a nearly linear relation between e and P orb (Stolz & Schoembs 1984). In V4633 Sgr the measured value, e ¼ 0:024^0:003, is about a third of the value expected for P orb ¼ 3:01 h (Fig. 7). Inspection of Fig. 7 reveals, however, that, while the point of V4633 Sgr deviates the most from the empirical linear 3 Non-detection of linear polarization in 1998 March (Ikeda et al. 2000) is of small relevance, since, at that early epoch in the history of the outburst, any polarization would be masked by the luminous extended photosphere and ejecta. Naturally, as the nova continues to fade, detection of circular polarization becomes increasingly feasible. Ramsay et al. (1999); 7 this work. Figure 7. The period excess-P orb relation of superhump systems. Data were taken from Patterson (1998Patterson ( , 1999 and Retter et al. (2001). The solid line is a linear fit to the data. We should also note that the apparent large deviation of the black dot representing the SU UMa system CN Ori should be treated with caution, until its period excess is confirmed by further observations (Patterson, private communication).
Since the disc precession is caused by the perturbation of the secondary star, the precession rate should be proportional to the secondary's mass, M 2 . Such a relation was found by Osaki (1985), who examined the motion of a free particle in a binary potential. In particular, for a disc with radius < 0.46 times the binary separation (this is approximately the disc radius at the 3:1 resonance where SH are most likely to occur), Osaki derived the relation where q ; M 2 /M 1 . For V4633 Sgr, this relation yields the value q < 0:10-0:11. Since the mass of the WD should be smaller than the Chandrasekhar mass (1.44 M ( ), the mass of the secondary is bounded by M 2 & 0:16 M ( . On the other hand, if the secondary is a Roche lobe filling, mainsequence star, its mass can be derived analytically (e.g. Warner 1995), if P orb is known. An empirical P orb -M 2 relation yields a result similar to the analytical ones (Smith & Dhillon 1998). For P orb ¼ 3:01 h, the mass of a main-sequence secondary is M 2 < 0:27 M ( , much larger than the limit obtained above. This inconsistency may infer that the cause of the exceptionally small e may be an undermassed secondary star, which is off the main sequence. In this case, V4633 Sgr may be an extremely evolved CV system (e.g. Howell, Rappaport & Politano 1997;Patterson 1998).
The 5.06-d signal observed in 1999 (P 3 , Section 3.1) presents another difficulty for the permanent SH scenario. This period corresponds to the beat period between P orb and P SH . It is therefore natural to interpret this signal as arising from the precession of the accretion disc. However, theoretically, apsidal precession of an eccentric disc is not expected to modulate the light of the nova (Skillman & Patterson 1993;Patterson 1998). We note however that such modulations were actually observed in the permanent SH system AH Men (H0551-819) in 1993-94, when the object showed positive SH (Patterson 1995).
The permanent SH interpretation may be tested photometrically during the next few years. Superhump periods are found to wander about a mean value, and therefore Ṗ SH is expected occasionally to change its sign.
The visual light curve
We derive some of the properties of the visual LC of V4633 Sgr using magnitudes of the nova published in the IAU Circulars and in the VSNET website, and the LC presented by Liller & Jones (1999).
The data suggest that the nova was discovered before reaching maximum brightness, as was already pointed out by Liller & Jones (1999). However, the scatter in magnitude estimates during the first few days after discovery does not allow us to determine the exact timing and magnitude of maximum brightness. We can only conclude that the nova reached maximum light sometime between JD 245 0895.5 and 245 0898.5. We adopt the value of m v;0 ¼ 7:7^0:1 for its visual magnitude at maximum.
From the VSNET data we estimate decline rates of t 2;v ¼ 19^3 d and t 3;v ¼ 42^5 d, somewhat longer than the estimation of Liller & Jones (1999) t 3;v < 35 d.
By the classification scheme of Duerbeck (1981), V4633 Sgr should be classified as a Ba-type nova -moderately fast with minor irregular fluctuations during decline.
Photometric changes in 1998 June -August
Around 1998 June, there was an apparent bend in the visual and V LCs (Section 3, Fig. 1). Leibowitz (1993) noted that such a feature is found in the visual LCs of many classical novae, and suggested attributing it to the decay of the WD's light level below the brightness emitted by the accreted material. This interpretation was cast into quantitative form in models suggested recently by and for the LCs of the two recurrent novae V394 CrA and U Sco.
Shortly after the change in the slope of the LC, in 1998 July-August, the I LC deviated from its smooth decline, forming an apparent bump. A similar bump was seen in the B and V LCs (Section 3, Fig. 1, inset). Two of our spectra, taken at the same time, on 1998 July 5 and August 30, show the emergence of strong [O III] ll4959, 5007 emission lines (Section 3.7). The simultaneous occurrence of the two effects was observed in a few other novae, and was connected with the beginning of the nebular stage (Chochol et al. 1993).
During July-August another photometric peculiarity occurredthe LC was modulated in a different form than previously. In particular, the periodicities of 1998 June were not detected during these months (Section 3.3). We offer no explanation for this phenomenon, or to its possible connection to the aforementioned phenomena.
Interstellar reddening
We can estimate the interstellar reddening towards V4633 Sgr in three ways. First, we consider the observed Balmer decrement in the spectra of the nova. {In the following we neglect the contribution of the [N II] ll6548, 6584 lines to the measured Ha line intensity. From the [O III] ð5007 1 4959Þ=4363 line ratio and the [N II] 5755 line intensity (Osterbrock 1989), we estimate it to be less than 5 per cent of the measured flux.} Slightly more than three months after maximum, the line intensity ratio Ha/ Hb was as high as 6.8, probably due to self-absorption (Williams 1994). Our spectra show the progressive decrease of this line ratio during the following year. Between our last two spectroscopic observations the trend of decrease has flattened considerably. About 15 months after outburst, in our last spectrum measurement, this line ratio reached the value 3:52^0:10 (Table 1). Attributing the difference between this value and the theoretical case B value of 2.8 (Osterbrock 1989) entirely to dust extinction, and using the numerical form of the Whitford (1958) reddening curve given by Miller & Mathews (1972), we obtain a reddening of EðB 2 VÞ ¼ 0:21^0:03.
A second way to estimate the reddening is from the He triplet ratio, 5876=4471 ¼ 2:9, which seems to be insensitive to radiation transfer effects (Ferland 1977). The observed ratio on 1998 August 30 was 3:7^0:4, leading to EðB 2 VÞ , 0:23, in agreement with the value derived from the H lines. We did not measure this line ratio in the spectrum of 1998 July 5, because the uncertainty in the measurement of the He l4471 line was much larger at that epoch. We note that, as pointed out by Ferland (1977), this method is inaccurate as a result of the small baseline and also the weakness of the He l4471 line.
We also estimated the reddening using colour photometry of the nova shortly after outburst. Novae have intrinsic colours ðB 2 VÞ 0 ¼ 10:25^0:05 at maximum (Downes & Duerbeck 2000) and ðB 2 VÞ 0 ¼ 20:02^0:04 at 2 mag below maximum light (van den Bergh & Younger 1987). Observations in the B and V bands by S. Kiyota, reported in VSNET, yield a colour index ðB 2 VÞ ¼ 10:50 on 1998 March 25, slightly below maximum light, and ðB 2 VÞ ¼ 10:24 on 1998 April 19, slightly below maximum plus 2 mag. At these two dates, the intrinsic colour of the nova was somewhat redder than the corresponding two 'standard' values quoted above. The difference between the two pairs of values constrains the interstellar reddening towards the nova to EðB 2 VÞ & 0:25, in agreement with the value derived from spectroscopy.
Maximum magnitude and distance
We estimate the absolute magnitude of V4633 Sgr at maximum brightness, M V,0 , by two methods. First, we use the empirical maximum magnitude -rate of decline (MMRD) relation obeyed by novae. We use the linear MMRD relations for t 2 and t 3 derived by Downes & Duerbeck (2000) from an ensemble of 28 measured novae. Their relations yield for V4633 Sgr values of M V;0 ¼ 28:1^0:6 and 27:9^0:8 mag, respectively. Downes & Duerbeck (1981) also derived MMRD relations for t 2 and t 3 from 17 nova classified as B, C and D in the LC classification scheme of Duerbeck (1981). These relations yield for V4633 Sgr values of M V;0 ¼ 27:4^1:1 and 27:2^1:7 mag, respectively.
We can also estimate M V,0 using the absolute magnitude 15 d after maximum, which appears to be independent of speed class (Warner 1995). Downes & Duerbeck (2000) derived from 28 objects a value of M V;15 ¼ 26:05^0:44 mag. This value, together with our estimate of the visual magnitude of V4633 Sgr at maximum, m v;0 ¼ 7:7^0:1 mag (Section 4.5.1), and with the value of m V;15 ¼ 9:45^0:06 mag that was measured for V4633 Sgr at WO on JD 245 0912.51, yields M V;0 ¼ 27:8^0:5 mag for the absolute magnitude of V4633 Sgr at maximum brightness.
We adopt the average of the above results, M V;0 < 27:7 mag, for the intrinsic magnitude at maximum.
Incorporating our estimations of the reddening, and the intrinsic and apparent maximum brightness of the nova into the distance modulus equation (Allen 1976), we derive a distance of 8:92 :5 kpc to V4633 Sgr, compatible with the estimation of Ikeda et al. (2000). We note that the derived distance to V4633 Sgr implies that it probably belongs to the population of 'bulge' novae. Indeed, the spectroscopic classification of V4633 Sgr as a Fe II nova, as well as its rate of decline, are characteristic of 'bulge' novae (Della Valle & Livio 1998).
S U M M A RY
Three-year observations of V4633 Sgr revealed two photometric periodicities in the light curve of the nova. We interpret the shorter one, P 2 ¼ 3:014 h, as the orbital period of the underlying binary system. The longer period, P 1 ¼ 3:08 h, varied during 1998-2000 with _ P 1 ¼ ð21:26^0:05Þ Â 10 26 . The beat of the two periods, P 3 ¼ 5:06 d, was probably present in the LC in 1999.
The period P 1 may be interpreted as a permanent superhump, or, alternatively, as the spin period of the white dwarf in a nearly synchronous magnetic system. V4633 Sgr would be a unique SH system, since its relative period excess is exceptionally small -, 2.5 per cent. This may imply an extremely low mass ratio. The characteristics of V4633 Sgr are also unique for the near-synchronous polar model.
Further photometric monitoring of V4633 Sgr in the next few years will probably allow us to determine the classification of the system, since the non-orbital period is expected to evolve differently in the two models. Radial velocity measurements should be done to confirm the orbital period. Time-resolved polarimetry and X-ray observations should be conducted to test the near-synchronous polar interpretation. 422 This paper has been typeset from a T E X/L A T E X file prepared by the author. | 10,746.4 | 2001-09-10T00:00:00.000 | [
"Physics"
] |
Ab initio electronic structure and prospects for the formation of ultracold calcium--alkali-metal-atom molecular ions
Experiments with cold ion-atom mixtures have recently opened the way for the production and application of ultracold molecular ions. Here, in a comparative study, we theoretically investigate ground and several excited electronic states and prospects for the formation of molecular ions composed of a calcium ion and an alkali-metal atom: CaAlk$^{+}$ (Alk=Li, Na, K, Rb, Cs). We use a quantum chemistry approach based on non-empirical pseudopotential, operatorial core-valence correlation, large Gaussian basis sets, and full configuration interaction method for valence electrons. Adiabatic potential energy curves, spectroscopic constants, and transition and permanent electric dipole moments are determined and analyzed for the ground and excited electronic states. We examine the prospects for ion-neutral reactive processes and the production of molecular ions via spontaneous radiative association and laser-induced photoassociation. After that, spontaneous and stimulated blackbody radiation transition rates are calculated and used to obtain radiative lifetimes of vibrational states of the ground and first-excited electronic states. The present results pave the way for the formation and spectroscopy of calcium--alkali-metal-atom molecular ions in modern experiments with cold ion-atom mixtures.
A mixture of Ca + ions and Na atoms was considered in the pioneering proposal, which suggested combining ions trapped in a Paul trap with ultracold atoms [10]. Such mixture was later experimentally realized [26]. Coulomb crystals of Ca + ions were immersed into ultracold Rb atoms to study radiative charge exchange and molecular ions formation [21,25,48]. Ca + ions were also experimentally studied in mixtures with ultracold Li atoms [15,24,49,50]. In fact, Ca + /Li ion-atom combination is one of the most promising systems for reaching the quantum regime of ion-atom collisions [8] due to the favorable mass ration reducing the impact of micromotion-induced heating in hybrid traps [51]. Recently, a new apparatus with a mixture of laser-cooled Ca + ions in a linear Paul trap overlapped with ultracold K atoms in a magneto-optical trap was presented [28]. This setup incorporates a high-resolution time-of-flight mass spectrometer designed for radial extraction and detection of reaction products opening the way for detailed studies of the state-selected formation of CaK + molecular ions. While, the electronic structure of the ground and excited electronic states was already studied arXiv:2003.02813v1 [physics.atom-ph] 5 Mar 2020 for the CaLi + [15,[52][53][54][55][56][57], CaNa + [10,52,58,59], and CaRb + [35,52,60,61] molecular ions, to the best of our knowledge, the structure of excited electronic states of the CaK + and CaCs + molecular ions has not been presented, yet.
Here, to fill this gap, in a comparative study, we investigate the electronic structure of the group of five diatomic molecular ions composed of a Ca + ion interacting with an alkali-metal atom: CaAlk + (Alk=Li, Na, K, Rb, Cs). We calculate ground and several low-lying excited electronic states using a theoretical quantum chemistry approach based on non-empirical pseudopotential, operatorial core-valence correlation, large Gaussian basis sets, and full configuration interaction method for valence electrons. Next, we employ electronic structure data to access prospects for field-free and light-assisted ion-neutral reactive processes and the formation of the considered molecular ions via spontaneous radiative association and laser-induced photoassociation. We discuss similarities and differences between considered systems. Finally, we calculate spontaneous and stimulated blackbody radiation transition rates together with radiative lifetimes of vibrational states of the ground and first excited electronic states. This paper has the following structure. Section II describes the used computational methods. Section III presents and discusses obtained results, including electronic structure data and spontaneous charge transfer and radiative association rates. The radiative lifetimes of the ground and excited states are also presented there. The experimental implications of the presented calculations are analyzed in detail. To conclude, section IV summarizes our work.
II. COMPUTATIONAL DETAILS
In this work, we calculate non-relativistic potential energy curves within the Born-Oppenheimer approximation for the ground and excited electronic states of calciumalkali-metal-atom molecular ions: CaAlk + (Alk=Li, Na, K, Rb, Cs). To this end, we employ the ab initio approach, which was developed and presented previously in several works on alkali hydrides [62][63][64][65], alkali-metal dimers [66][67][68][69], alkaline-earth-metal hydrides [70][71][72], and alkali-metal-alkaline-earth-metal molecular ions [73,74]. The investigated CaAlk + molecular ions, thus, are treated effectively as two-electron systems with efficient non-empirical pseudopotentials in their semi-local form [75] used to replace core electrons. Additionally to the pseudopotential treatment, the self-consistent field (SCF) computations are followed by a full valence configuration interaction (FCI) calculations using the CIPCI algorithm (Configuration Interaction by Perturbation of a multiconfiguration wave function Selected Iteratively) of the standard succession of programs developed by the "Laboratoire de Chimie et Physique de Toulouse". The core-valence electronic correlations between the polariz-able Ca 2+ core with the valence electrons and polarizable Alk + cores are included by using core polarization potentials (CPP) [76].
In the present work, the interaction of the considered alkali-metal and alkaline-earth-metal atoms and ions in the ground and different excited electronic states results in different molecular electronic states of the singlet or triplet Σ + , Π, and ∆ symmetries. The lowest seven atomic thresholds for each of the considered CaAlk + molecular ions, together with their valence energies and associated molecular electronic states, are collected in Table I. The calculated energies of Ca + ( 2 S)+Alk( 2 S) limits, which describe essential ground-state collisions of alkaline-earth-metal ions with alkali-metal atoms, agree very well (within 5 cm −1 ) with experimental values. Description of the 1 D and 3 D excited electronic states of the Ca atom is the most challenging with discrepancies of 597 cm −1 and 566 cm −1 for related atomic limits, respectively. Nevertheless, the overall agreement is good, suggesting good accuracy of molecular calculations.
The spectroscopic constants are extracted from the ab initio points interpolated using the cubic spline method. The permanent and transition electric dipole moments are calculated as expectation values of the dipole operator with the calculated electronic wavefunctions. The z axis is chosen along the internuclear axis and is oriented from a Ca atom to an alkali-metal atom. The origin is set in the center of mass. Masses of the most abundant isotopes are assumed within the paper.
The time-independent Schrödinger equation for the nuclear motion is solved using the renormalized Numerov algorithm [78] for both bound [62] and continuum states [34]. Rate constants for elastic scattering and inelastic charge-exchange reactive collisions are calculated as implemented and described in Refs. [34,79]. The wave functions are propagated to large interatomic distances, and the K and S matrices are extracted by imposing the long-range scattering boundary conditions in terms of the Bessel functions. The elastic rate constants and scattering lengths are obtained from the S matrix for the entrance channel, while inelastic rate constants are computed using the Fermi golden rule type expressions based on the Einstein coefficients between bound and continuum nuclear wave functions of relevant electronic states. The radiative lifetimes τ v of vibrational levels v, τ v = 1/Γ v , are calculated from the radiative rates Γ v = v <v A vv + v B vv , which are given by the sums of the Einstein coefficients for the spontaneous emission A vv and coefficients for the absorption and stimulated emission B vv . The coefficients for the spontaneous emission A vv ∼ ω 3 vv d 2 vv are proportional to the third power of the transition frequencies ω vv and second power of the transition dipole moments d vv between the initial v and the final v vibrational states. The coefficients for the absorption and stimulated emission are proportional to the coefficients for the spontaneous emission and the spectral energy density of the present black body radiation [81,82]. The bound-continuum transitions are included either using the Franck-Condon ap- proximation [81] or the sum rule approximation [83], and both methods give the same results.
III. RESULTS AND DISCUSSION
A. Potential energy curves Potential energy curves (PECs) for the ground and several excited electronic states of the CaLi + , CaNa + , CaK + , CaRb + , and CaCs + molecular ions are presented in Figs. 1-6. All electronic states correlated with the seven lowest atomic thresholds of each system are investigated (see Table I). Thus, several singlet and triplet electronic states of the Σ + , Π, and ∆ spatial symmetries are studied. Spectroscopic characteristics of calculated PECs, i.e. equilibrium interatomic distances R e , well depths D e , transition energies T e , harmonic constants ω e , anharmonicity constants x e , and rotational constants B e , are collected in Tables II-VI. Results for excited states of the CaK + and CaCs + molecular ions are reported for the first time, while spectroscopic constants for other systems are compared with previous available results.
Interactions between the ground-state Ca + ion and ground-state alkali-metal atom are described by the 1 Σ + and 3 Σ + electronic states and govern ground-state colli- sions in respective hybrid ion-atom experiments [1]. For all considered mixtures, these states are electronically excited, and the radiative charge-transfer and association processes are energetically allowed and may lead to collisional losses [79] leading to the 1 Σ + ground electronic state. Therefore in Fig. 1, we present and compare electronic states correlated with Ca + ( 2 S)+Alk( 2 S) and Ca( 1 S)+Alk + ( 1 S) atomic thresholds, and transition electric dipole moments between the two lowest 1 Σ + electronic states, which drive radiative losses. Corresponding rate constants for reactive collisions are presented in Sec. III D.
The ground-state proprieties of the CaAlk + molecular ions are similar to those predicted for the SrAlk + sys- Potential energy curves of the CaNa + molecular ion. Line styles are used as described in Fig. 2.
S)
Ca FIG. 5. Potential energy curves of the CaRb + molecular ion. Line styles are used as described in Fig. 2.
FIG. 6. Potential energy curves of the CaCs + molecular ion. Line styles are used as described in Fig. 2.
tems [74]. The 1 1 Σ + ground electronic state dissociate into Ca( 1 S)+Alk + ( 1 S). Therefore, its long-range behavior is very similar for all considered molecular ions and is determined by the induction interaction of the charge of the alkali-metal ion with the polarizability of the Ca atom (see Fig. 1(b)). The short-range behavior depends more on the involved alkali-metal ion and have covalent bonding nature. The well depth decreases with the mass of the alkali-metal ion from 9986 cm −1 for CaLi + to 3174 cm −1 for CaCs + , while the equilibrium distance increases with the mass of the alkali-metal ion from 6.11 bohr for CaLi + to 8.34 bohr for CaCs + . The presented ground-state PECs can be compared with recent results calculated with the small-core pseudopotentials and coupled cluster method [52]. The well depths obtained with the two methods agree with the mean absolute difference of 68 cm −1 (1.7%), while the equilibrium distances agree with the mean difference of 0.084 bohr (1.1%). Calculations with large-core pseudopotentials give slightly smaller equilibrium distances and deeper well depths, but the overall good agreement cross validates both approaches and suggests that similar accuracy may be expected for excited electronic states. The agreement with older results collected in Tables II-VI is also satisfactory.
The energy difference between the lowest Ca( 1 S)+Alk + ( 1 S) and Ca + ( 2 S)+Alk( 2 S) dissociation thresholds increases with the mass of alkali-metal atom from 5819 cm −1 for CaLi + to 17880 cm −1 for CaCs + . The well depth of the 2 1 Σ + state dissociating into Ca + ( 2 S)+Alk( 2 S) increases with the mass of alkalimetal atom from 411 cm −1 for CaLi + to 1574 cm −1 for CaCs + , while the equilibrium distance decreases slightly from 13.74 bohr for CaLi + to 12.92 bohr for CaCs + . The 2 1 Σ + electronic state is relatively shallow because of its non-bonding nature around the equilibrium distance and avoided crossing with the ground 1 1 Σ + electronic state. The 3 Σ + electronic state associated with the Ca + ( 2 S)+Alk( 2 S) atomic threshold is the lowest triplet state for the CaLi + , CaNa + , and CaK + molecular ions, while it is the first excited triplet state for the CaRb + and CaCs + molecular ions. The change of the order of the Ca + ( 2 S)+Alk( 2 S) and Ca( 3 P )+Alk + ( 1 S) atomic thresholds in CaRb + and CaCs + visibly affects their 3 Σ + electronic states dissociating into Ca + ( 2 S)+Alk( 2 S), which are much shallower because of avoided crossing with lower lying 3 Σ + states (see Fig. 1(a)). Thus, no clear trend is observed for the lowest 3 Σ + electronic states. For example, the well depth and equilibrium distance of the first 3 Σ + state in CaK + is 7275 cm −1 and 8.58 bohr, respectively.
The density of electronic states increases with the excitation energy, and for all investigated molecular ions, several avoided crossings between excited states of the same electronic symmetry can be found. Strong radial nonadiabatic couplings between involved electronic states can be expected. Some of the excited atomic thresholds are close together that furthermore facilitates inter- actions between associated electronic states. As a result, several excited states have double-well structures. For example, all 2 3 Σ + states are significantly repulsive at the short-range distances, partially due to broad avoided crossing with 1 3 Σ + states, and thus they intersect with attractive 3 3 Σ + states, forming narrow avoided crossings at the short range. Avoided crossings between 2 1 Σ + , 3 1 Σ + , and 4 1 Σ + at short-and intermediate-range distances are also pronounced. Additionally, electronic states of different spin and spatial symmetries intersect with each other. These crossings may become avoided crossings if relativistic spin-orbit couplings would be included, which is out of the scope of this paper. Avoided and real crossings may provide mechanism for efficient non-radiative and non-adiabatic charge transfer between ions and atoms in excited electronic states [23,60,84,85].
Spectroscopic constants of calculated excited electronic states for the investigated molecular ions are collected in Tables II-VI. Results for excited electronic states of the CaK + and CaCs + molecular ions are reported for the first time, while spectroscopic constants for other systems can be compared with previous available theoretical results [10,15,35,[52][53][54][55][56][57][58][59][60][61]. Similarly as for the ground electronic state, the results for the excited states obtained with different computational methods agree reasonably well. Both well depths and equilibrium distances mostly FIG. 8. Permanent electric dipole moments of (a) 1 Σ + and 3 Σ + , and (b) 1 Π, 3 Π, 1 ∆, and 3 ∆ electronic states of the CaK + molecular ion.
agree within several percent. In the case of the CaLi + molecular ion, the present results agree very well with the results of Refs. [15,54], while well depths seem to be underestimated in calculations presented in Ref. [53], which employed a single-reference method. For the CaNa + molecular ion, the present results agree very well with the results of Refs. [58,59], while the agreement is worse with calculations presented in Ref. [10], which employed smaller basis sets. In the case of the CaRb + molecular ion, the present results also agree well with the results of Refs. [35,60,61]. The overall good agreement between the present and previous calculations for the CaLi + , CaNa + , and CaRb + molecular ions suggests that similar accuracy may be expected for our results for the CaK + and CaCs + molecular ions, which have not yet been studied. Electronic structure data for the CaLi + , CaNa + , and CaRb + molecular ions were successfully employed to guide and interpret experimental measurements [15,21,[24][25][26][48][49][50]. Presented potential energy curves for the CaK + and CaCs + molecular ions may correspondingly find similar applications, e.g., in the context of experimental studies of a mixture of laser-cooled Ca + ions in a linear Paul trap overlapped with ultracold K atoms in a magneto-optical trap as presented recently in Ref. [28].
B. Permanent and transition electric dipole moments
The permanent and transition electric dipole moments (PEDMs and TEDMs) determine the interaction of atomic and molecular systems with static and dynamic electric fields, including the laser field. Thus, their knowl- edge is essential for predicting molecular spectra, lifetimes, and formation schemes. Here, we calculate permanent electric dipole moments for all investigated electronic states of the CaLi + , CaNa + , CaK + , CaRb + , and CaCs + molecular ions, as well as all transition electric dipole moments between electronic states of the same spin and spatial symmetries. Transition electric dipole moments between the two lowest 1 Σ + states of the CaAlk + molecular ions are presented in Fig. 1(c). Their functions have quite similar shapes and values. They govern the radiative chargetransfer and association processes in ground-state collisions between Ca + ions and alkali-metal atoms which are studied in Sec. III D.
Permanent electric dipole moments of the two lowest 1 Σ + electronic states, i.e., 1 Σ + states dissociating into the Ca( 1 S)+Alk + ( 1 S) and Ca + ( 2 S)+Alk( 2 S) atomic thresholds, and of the 3 Σ + electronic state associated with the Ca + ( 2 S)+Alk( 2 S) atomic threshold are presented in Fig. 7 for all investigated molecular ions. Values of the permanent electric dipole moments for charged molecules depend on the choice of the coordinate-system origin. Here, they are calculated with respect to the center of mass, which is a natural choice for investigating the rovibrational dynamics. Their absolute values increase with increasing internuclear distance and asymptotically approach the limiting cases where the charge is completely localized at one of the atoms. This behavior is typical for heteronuclear molecular ions and implies that even molecular ions in very weakly bound states have effectively a significant permanent electric dipole moment in contrast to neutral molecules [79]. The difference between the calculated values and the limiting cases is the interaction-induced variation of the permanent electric dipole moment or, in other words, the degree of charge delocalization. Curves for the 1 Σ + electronic states are smooth and their asymptotic behaviors reflect the change of the center-of-mass position for different molecular ions. The degree of charge delocalization increases with the mass of the alkali-metal atom according to the increasing difference of the electronegativity of the Ca and alkalimetal atoms. Different asymptotic behaviors for the 1 1 Σ + and 2 1 Σ + states reflect the different charge localization for the Ca( 1 S)+Alk + ( 1 S) and Ca + ( 2 S)+Alk( 2 S) atomic thresholds. Curves for the 3 Σ + electronic state of the CaRb + and CaCs + molecular ions show irregularities due to avoided crossings with nearby-lying states.
Permanent electric dipole moments of all investigated electronic states of the CaK + molecular ion are presented in Fig. 8, while transition electric dipole moments between electronic states of this molecular ion are plotted in Fig. 9. PEDMs and TEDMs for other studied molecular ions are collected in Supplemental Material. For PEDMs, two families of curves associated with two possible arrangements of the charge at the Ca + +Alk and Ca+Alk + atomic thresholds can be identified. The short-range deviations from the asymptotic behavior give information about charge exchange and delocalization due to interatomic interactions. The shapes of calculated PEDM and TEDM curves and their irregularities at short-range distances can be directly associated with avoided crossings between corresponding potential energy curves, that confirms strong interactions between involved electronic states. The knowledge of changing physical character of electronic states may be useful to predict and explain channels of non-radiative charge-transfer processes. TEDMs at large distances drop to zero when two associated atomic thresholds have different charge arrangements or related atomic excitations are dipole-forbidden. They asymptotically tend to the atomic values only in case of atomic thresholds connected by dipole-allowed transitions.
C. Vibrational levels
We use the present PECs to calculate corresponding vibrational states. In Figure 10, we present the energy spacings between the adjacent vibrational levels (E v − E v−1 ) of the ground and excited 1 Σ + electronic states of the investigated CaAlk + molecular ions. For the ground electronic state, there are 86, 103, 114, 125, and 126 vibrational levels for the CaLi + , CaNa + , CaK + , CaRb + , and CaCs + molecular ions, respectively. The number of vibrational levels increases with the mass of the involved alkali-metal atom, despite the decreasing potential well depth (see Fig. 1(b)), because the effect of the increasing mass dominates. The spacing between vibrational levels diminishes gradually with a vibrational energy that reflects the strong anharmonicity of the PECs. The overall pattern of energy spacings of different electronic states for different molecular ions is similar. For some states, however, irregularities related to the avoided crossings are visible, e.g., for the 3 1 Σ + state of CaK + , CaRb + , and CaCs + . Interactions and collisions of laser-cooled trapped Ca + ions with ultracold alkali-metal atoms are of the highest importance for experimental realizations of ultracold ionatom mixtures [10, 15, 21, 24-26, 28, 48-50]. Even if both the Ca + ion and alkali-metal atom are in their electronic ground states, the collision-and interaction-induced radiative charge rearrangement is possible in the form of the radiative charge transfer (RCT) where the electron is spontaneously transferred from the alkali-metal atom to the Ca + ion emitting a photon of energy ω and the radiative association (RA) where the CaAlk + molecular ion in the (v, j) rovibrational level of the electronic ground state is sponta-neously formed. The interaction between the ground-state Ca + ion and alkali-metal atom both in the 1 Σ + and in the 3 Σ + states at large distances is dominated by the induction term where the leading long-range induction coefficient C 4 = 1 2 e 2 α Alk is given by the static electric dipole polaizability of the alkali-metal atom α Alk . This long-range interaction determines the characteristic length scale R 4 = 2µC 4 / 2 and the related characteristic energy scale E 4 = 2 /2µR 2 4 [1]. These quantities are relevant for ultracold ion-atom collisions because the length scale R 4 establishes the order of magnitude of typical ion-atom scattering lengths while the energy scale E 4 determines the quantum regime of s-wave collisions [8]. Table VII collects the long-range coefficients, characteristic lengths, and characteristic energies of the ground-state ion-atom interaction for the investigated mixtures. The character- istic lengths are from 1337 bohr for Ca + +Li to 4706 bohr for Ca + +Cs, and they are an order of magnitude larger for ion-atom systems as compared with neutral counterparts. The characteristic energies are from 0.13 µK for Ca + +Cs to 8.12 µK for Ca + +Li, and they are two orders of magnitude smaller as compared with neutral counterparts. This is one of the reasons, together with inelastic losses and micromotion-induced hearing in the Paul trap [51], why the realization of ion-atom collisions in the quantum regime is very challenging [8].
To produce and study the considered ion-atom mixtures in the quantum regime, the ion, after initial laser cooling, should be subsequently cooled sympathetically via elastic collisions with surrounding ultracold neutral gas [8,29,50]. Such a scheme is feasible only if rates for elastic scattering are significantly larger than rates for inelastic collisions. Therefore, in Fig. 11, we present rate constants for elastic and radiative inelastic collisions between the Ca + ion and alkali-metal atoms in the 2 1 Σ + electronic state as a function of the collision energy. We assume typical scattering length of a s = R 4 in the entrance channel, while the results do not depend on the scattering length in the exit channel. Rate constants for small collision energies and pattern of shape resonances depend strongly on the scattering length, but the overall magnitude of rate constants does not depend on it.
In the range of investigated collision energies, the rate constants for the elastic scattering K el for all systems have similar values of around 10 −8 cm 3 /s. Because the same scattering length (in unites of the characteristic length) is assumed for all Ca + +Alk mixtures, the pattern of shape resonances is very similar for all systems, however positions of shape resonances are scaled according to the characteristic energies. Similarly to other ionatom systems [33-35, 84, 86], shape resonances are more pronounced for inelastic rate constants, however if the thermal distribution of collision energies is assumed, the thermal averaging removes energy dependence for temperatures larger than 1 mK in agreement with predictions of the classical Langevin capture theory [87]. The magnitude of the rate constants for the radiative association K RA and charge transfer K RCT depends on the system. In Table VIII, we collect thermally averaged rate constants for the radiative association and radiative chargetransfer collisions in the investigated mixtures compared with the Langevin rate constants K L = 2π 2C 4 /µ. Similarly as for other alkaline-earth-metal-alkali-metal ionatom systems [15,34,35], the rate constants for the radiative losses are at least 10 4 times smaller than Langevin and elastic rate constants. The radiative rates constants increase with the mass of the alkali-metal atom according to the increasing energy of an emitted photon. At the same time, the radiative association is 53, 38, 3.1, 2.4, and 1.6 times more probable than the radiative charge transfer for Ca + +Li, Ca + +Na, Ca + +K, Ca + +Rb, and Ca + +Cs collisions, respectively.
Radiative association rate constants as a function of the energy of the final ro-vibrational level in the 1 1 Σ + electronic ground state for the Ca + ion colliding with the alkali-metal atoms in the 2 1 Σ + state are presented in Fig. 12. For all systems, the formation of molecular ions in vibrational levels from the middle of the spectrum is the most probable. For example, the formation of molecular ions in vibrational levels with the vibrational quantum number around v = 46, v = 28, and v = 13 and the binding energy around 1595 cm −1 , 2033 cm −1 , and 2361 cm −1 are the most probable for CaLi + , CaK + , and CaCs + , respectively. The molecular formation probability decreases gradually for decreasing binding energies and is strongly suppressed for binding energies larger than 2500 cm −1 because of the interplay between Franck-Condon factors between vibrational levels of the 2 1 Σ + and 2 1 Σ + electronic states and transition electric dipole moment between them (see Fig. 1). Interestingly, more deeply bound molecular ions can be formed for heavier alkali-metal atom despite they have smaller potential well depths.
In a field-free case, where all spin orientations are present, the described above reactive collisions governed by the 2 1 Σ + electronic state constitute 25% of scattering. Remaining collisions are governed by the 3 Σ + state, which for Ca + +Li, Ca + +Na, and Ca + +K is free from radiative losses. For Ca + +Rb and Ca + +Cs the radiative losses are also possible from the 3 Σ + state but they should be much less probable than radiative losses from the 1 Σ + state because of much smaller energies of realized photons. The radiative association and charge transfer are expected to be a dominant loss mech- anism for the ground-state Ca + +Li and Ca + +Na collisions, because the entrance atomic threshold is well separated from lower and higher lying electronic states in these systems. In the case of Ca + +K collisions, the radiative processes should also be most important however the coupling with the 1 3 Π state may affect them.
In the case of Ca + ( 2 S)+Rb( 2 S) collisions, nonradiative charge-transfer losses were observed [21,60] as a dominant mechanism because of strong nonadiabatic couplings with below nearby-lying electronic states associated with the Ca( 3 P )+Rb + ( 1 S) atomic threshold. In the case of Ca + +Cs collisions, more balanced interplay between radiative and nonradiative processes can be expected. Detailed studies of the nonradiative collisional dynamics are out of the scope of this paper. If the Ca + ions or alkali-metal atoms are excited by a laser field, the light-induced charge-transfer and association processes are possible Ca + + Alk + ω → Ca + Alk + , where a laser field can be employed to directly stimulate the transition to the ground electronic state [34,37] or to excite the ion-atom system to higher excited states [36].
In the latter case, both radiative and nonradiative deexcition processes can happen depending on the structure of excited electronic states. Nonradiative deexcition can be driven by nonadiabatic couplings between electronic states of the same symmetry or spin-orbit couplings between electronic states of different symmetry. If the ionatom system is excited to higher-lying atomic thresholds, then the sequence of radiative and nonradiative deexcitations trough intermediate excited electronic states can also be envisioned [36]. The calculated potential energy curves and transition electric dipole moments can be employed to predict and interpret experimental measurements. Various rate constants for charge-transfer collisions between excited-state Ca + ions and Li [15], Na [26], and Rb [21] atoms were measured and rationalized based on the structure of real and avoided crossings between involved molecular electronic states at short distances. For all investigated systems, exciting both Ca + ion to the 2 D or 2 P states and alkali-metal atom to the 2 P state significantly enhances the charge-transfer rate constants from negligible to significant fraction of the Langevin rate constant. For example, in Fig. 2 and Fig. 3, the Ca + ( 2 D)+Li( 2 S) and Ca + ( 2 D)+Na( 2 S) atomic thresholds and associated molecular electronic states are closely surrounded by several electronic states. Similar light-assisted enhancements of charge transfer can also be expected for Ca + +K and Ca + +Cs collisions.
The interplay between photoassociation into excited weakly bound molecular ions and subsequent deexcition to the ground-state molecular ions or competitive dissociative charge transfer can be expected for collisions in the laser field [34,36,37]. The Ca( 2 S)+Alk( 2 P ) atomic thresholds are relatively well separated from other thresholds in the considered systems. This opens the way for photoassociation spectroscopy and molecular ion formation similar as in alkali-metal gases [88]. The existence of several charge-transferred atomic thresholds may also allow for short-range photoassociation schemes. For example, the Ca( 3 P )+Li + ( 1 S) and Ca( 3 P )+Na + ( 1 S) atomic thresholds are well separated from other asymptotes, and the relevant 1 3 Σ + and 2 3 Σ + electronic states have a similar shape and are connected by the large transition dipole moment at the short range. Finally, magnetic Feshbach resonances in the ground electronic state can be employed to enhance molecular ion formation rates. Detailed studies of the photoassociation and magnetoassociation schemes, however, are out of the scope of this paper.
E. Radiative lifetimes
We use the present PECs, PEDMs, and TEDMs to calculate the lifetimes of vibrational states of the ground and first excited 1 Σ + electronic states of the considered CaAlk + molecular ions. These lifetimes may be useful to assess prospect for the formation and spectroscopy of calcium-alkali-metal-atom molecular ions in modern experiments with cold ion-atom mixtures. The lifetimes of vibrational levels of the first excited 1 Σ + electronic state are presented in Fig. 13(a) and are governed by the transition electric dipole moment to the ground electronic state associated with emitting an optical photon. This transition moment is significant at short internuclear distances and decreases exponentially with the increasing internuclear distance (see Fig. 1(c)). Therefore, the lowest vibrational levels have lifetimes in the range of tens to hundreds of nanosecond (between 22 ns for CaCs + and 335 ns for CaLi + for the lowest vibrational level), while the most weakly bound levels have lifetimes exceeding microseconds. The lifetimes increase with the vibrational number and decrease with the increasing mass of the alkali-metal atom.
The lifetimes of vibrational levels of the ground 1 Σ + electronic state are presented in Fig. 13(b) and are governed by its permanent electric dipole moment responsible for weak transitions between different vibrational levels associated with emitting microwave photons. In the case of these transitions, both relatively weak spontaneous emission and stimulated by the black body radiation absorption and emission have to be included.
We assume the black body radiation spectrum with the temperature of 300 K. The lifetimes of the lowest and the most-weakly-bound vibrational levels exceed ten seconds (between 9 s for CaLi + and 77 s CaCs + for the lowest vibrational level), while other levels have lifetimes of the order of one second. The interplay between the spontaneous and stimulated transitions can be seen in Fig. 13(c), where we compare the spontaneous and stimulated transition rates for the vibrational levels of the ground 1 Σ + electronic state of the CaAlk + molecular ions. The present lifetimes have similar characteristics as comparable results for other neutral and ionic dimers [89,90].
IV. CONCLUSION
Motivated by recent experimental studies on ultracold mixtures of Ca + ions immerse in alkali-metal atoms, in a comparative study, we have investigated the electronic structure and the prospects for the formation of the molecular ions composed of a calcium ion and an alkali-metal atom: CaAlk + (Alk=Li, Na, K, Rb, Cs). We have used the theoretical quantum chemistry approach based on non-empirical pseudopotential, operatorial core-valence correlation, large Gaussian basis sets, and full configuration interaction method for valence electrons. We have calculated adiabatic potential energy curves, spectroscopic constants, and transition as well as permanent electric dipole moments for the ground and several excited singlet and triplet electronic states of the Σ + , Π, and ∆ spatial symmetries. Next, the electronic structure data have been employed to examine the prospects for the ion-neutral reactive processes and production of molecular ions via spontaneous radiative association and laser-induced photoassociation. Finally, we have calculated the radiative lifetimes of vibrational states of the ground and first excited electronic states.
Our results are in good agreement with the previous theoretical studies of the electronic structure of the ground and excited electronic states of the CaLi + [15,[52][53][54][55][56][57], CaNa + [10,52,58,59], and CaRb + [35,52,60,61] molecular ions, which confirms the accuracy of the employed computational approach. The structure of the excited electronic states of the CaK + and CaCs + molecular ions is reported here for the first time. The rate constants for the radiative charge transfer and association in the ground-state collisions of the Ca + ion and alkali-metal atom are predicted to be much smaller than the rate constants for elastic scattering for all the considered systems. They are also predicted to increase with the mass of the alkali-metal atom. For the groundstate Ca + +K collisions, radiative losses should be the main source of losses, negligible for buffer gas cooling or other applications. For the ground-state Ca + +Cs collisions, the interplay between radiative and nonradiative charger-transfer processes is expected. The radiative association leads to the formation of ground-state molec-ular ions with binding vibrational energies in the range of 1500-2500 cm −1 and is predicted to be more probable than radiative charge transfer. For all the systems, the excited-state inelastic collisions are expected to be much faster than the ground-state ones. Based on the electronic structure, photoassociation schemes based on both short-range and long-range excitations can be envisioned. The radiative lifetimes of vibrational states of the ground and first excited electronic states are found in the range of 0.1-100 s and 10 ns-10 µs, respectively. The present results may be useful and pave the way for the formation and spectroscopy of calcium-alkali-metal-atom molecular ions in modern experiments with cold ion-atom mixtures. In the future, the presented computational scheme will be employed to study excited electronic states in triatomic molecular ions.
The full potential energy curves, permanent and transition electric dipole moments as a function of interatomic distance in the numerical form are available for all investigated systems from the authors upon request. | 8,534.4 | 2020-03-05T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Faulty Synchronization of Salient Pole Synchronous Hydro Generator
: This article presents the simulation results of hydro generator faulty synchronization during connection to the grid for various voltage phase shift changes in a full range ( − 180 ◦ ; 180 ◦ ). A field-circuit model of salient pole synchronous hydro generator was used to perform the calculation results. It was verified using the measured no-load and three-phase short-circuit characteristics. This model allowed observing the physical phenomena existing in the investigated machine, especially in the rotor which was hardly accessible for measurement. The presented analysis shows the influence of faulty synchronization on the power system stability and the construction components which are the most vulnerable to damage. From a mechanical point of view, the most dangerous case was for the voltage phase shift equal to − 120 ◦ , and this case was analyzed in detail. Great emphasis was placed on the following physical quantities: electromagnetic torque, stator current, stator voltage, rotor current, current in rotor bars, and active and reactive power. The physical quantities existing during faulty synchronization were compared with a three-phase sudden short-circuit state. From this comparison, we selected the values of physical quantities that should be taken into account during design of new hydro generators to withstand the greatest possible threats during long-term work.
Introduction
Synchronization of a generator with a power system must be carried out carefully. It is a dynamic process that requires the coordinated operation of many components such as mechanical, electrical, and human. The voltage and frequency of the disconnected generator must be closely matched to the voltage and frequency existing in the network bus. The instantaneous value of the voltage induced in the armature winding must be close to the instantaneous value of the network bus voltage. The interconnection of large numbers of synchronous generators operating in parallel constitutes the power system. These generators are hydro generators (possessing a salient-pole rotor) and turbogenerators (with a cylindrical rotor). These machines are connected by transmission lines supplying the network loads. A disconnected generator can be paralleled with the network by driving it at synchronous speed and adjusting its excitation current so that its terminal voltage is equal to the network bus voltage.
Failure of the synchronizing procedure results in out-of-phase synchronization, mainly caused by the following [1][2][3][4]: • Failure in wiring during commissioning Wiring errors lead to particular out-of-phase angles. Polarity errors at a voltage transformer can cause synchronizing at 180 • . • Delay during breaker closure This can occur if the breaker physically closes slower than anticipated and the systems go beyond the designed safe conditions before the breaker closes. The closing process cannot be stopped when the breaker coil is energized, and out-of-phase synchronization can happen. During this abnormal condition, transformers, generators, and associated equipment can be damaged. • Flash-over in breaker's contacts A breaker is designed to sustain the voltage that occurs before synchronizing and in the case of inequality of the generator and the network voltage phase. Several factors can reduce the electrical strength of the breaker's insulation. This results in arcing between contacts before galvanic closing. The following phenomena favor the flash-over in a breaker: pollution, low pressure, humidity, and decomposition of insulation. • Wrong setting of synchronous system This emerges from a human mistake. • Problem in manual synchronization In addition to automatic synchronization, the vast majority of generators have the possibility of manual synchronization. An operator may not predict how fast the phase angle difference converges and energizes the breaker close coil in advance or with delay. Sometimes, the operator does not take into account the closing mechanism delay of the generator breaker and, therefore, the main contacts do not make a close to 0 • angle difference.
A hydro generator is a synchronous generator. Synchronous speed comes from interaction between two poles coming from a stator and rotor that create a rotating magnetic field. Interaction between these two poles occurs when direct current flows in the excitation (rotor) winding and three-phase voltages are applied to the armature (stator) winding. Rotor speed is determined by the number poles and the grid frequency. The rotor is coupled to a water turbine by a mechanical shaft which supplies the mechanical energy for transformation to electrical energy.
During synchronizing, before closing the breaker between the power system and generator terminals, the frequency of the voltage induced in the armature is adjusted by the angular velocity of the rotating magnetic field (the rotor speed), whereas, after synchronizing, when the breaker is closed, the frequency of the power system regulates the speed of the rotating magnetic field. The position and speed of the rotor must be closely matched at the instant the hydro generator is connected to the power system in order to eliminate the transient torque required to bring the rotor into synchronism. If the frequency of voltage induced on the stator (coming from angular velocity) is significantly different from the frequency in the power system, then large transient torque will appear. Stability can be achieved by accelerating or decelerating the rotating masses (rotor and turbine) until the rotor speed matches the power system frequency. If the voltage phase angle difference is significant (rotor position is off), then transient torque required to set the rotor position into phase with the power system can be even higher.
Transient torques appearing during faulty synchronization can cause instantaneous and cumulative fatigue damage to the hydro generator and water turbine over their lifetime. Instantaneous stator current associated with this abnormal state can exceed the three-phase short circuit. Huge current windings cause the appearance of large forces in end-winding.
The consequences of faulty synchronization are the following: • Damage to the hydro generator rotor and water turbine because of mechanical stresses caused by rapid acceleration or deceleration of rotating masses. IEEE Standards C50.12 and C50.13 [5,6] define the following limits which guarantee that a generator is for service without inspection or repair after synchronizing: • Phase shift angle difference between generator-side voltage and the power system: ±10 • . • Generator-side voltage relative to the power system: 1.0-1.05 U N .
•
Frequency difference: ±0.067 Hz. The synchronizing can be done by an operator using manual means or automated control systems. The synchronizing system is dedicated to the following:
•
Closing the breaker as close to 0 • angle difference as possible. The operator must predict how fast the phase angle difference is coming and energize the breaker close coil in advance to account for the closing mechanism delay. • Controlling the governor to match speed.
•
Controlling the excitation current value to match voltage at the stator terminals.
Faulty synchronization is an abnormal operating condition and it can be detected by protection devices which give a signal to a device able to disconnect the generator from the power system. Protection devices such as reverse power and loss-of-field protections have time delays to avoid unwanted trips during transient operation of the generator. In this case, these times are around several seconds, whereas the generator breakers have a certain operating time which does not exceed 100 ms [7].
The faulty synchronization of synchronous generators has been the subject of many research works in the latest years. In particular, these studies referred to turbo generators. All of them were based on models with d-q axes considering two damper winding on both axes [8,9]. The parameters taken into account to create these models were extracted using Canay's approach. Simplified d-q models do not reflect the saturation effect of magnetizing steel. The effect of saturation on the rotor shaft torques during faulty synchronization is significant. Reflecting the saturation effect shows that the torsional moment on the shaft is higher [10]. Neglecting this effect affects erroneous conclusions. Currently, the most accurate calculation method for electrical machines is the finite element method. This method reflects the real distribution of generator construction elements and allows observing the physical phenomena existing inside the generator [11]. Additionally, it can be helpful in the choice of the best material for conductive parts and to optimize the magnetic path for the flux. The above statement applies only to large cylindrical-rotor generators. The studies on faulty synchronization for hydro generators with power of 1-10 MVA were omitted.
The literature lacks information on the effect of the damping cage on faulty synchronization for hydro generators, whereas, in the case of turbogenerators, there is a significant influence of the location of rotor wedges on the damping of excitation current oscillation during out-of-phase synchronization [11]. So far, the distribution of the current induced in the rotor bars of hydro generators has not been discussed, and it has not been compared to the most dangerous state, which is sudden short-circuit fault. Most often, articles [8,10,12] focused on showing the maximum values of stator current and electromagnetic torque, i.e., in such physical quantities that can lead to irreversible damage. The influence of induced current in the rotor bars on the possibility of damaging damping bars was ignored.
In this work, the impact of voltage phase shift (synchronizing angles) on hydro generator stability was investigated. A two-dimensional field-circuit model was used in calculation. This model was previously verified by comparisons of the calculated and measured no-load and three-phase short-circuit characteristics during running tests. The simulations were carried out for different synchronizing angles to evaluate the instantaneous and peak shaft torques. In addition, the following quantities were also determined: stator current, stator voltage, field current, induced current on rotor bars, and active and reactive power. In this article, the waveforms of physical quantities of the most dangerous case are shown from the mechanical point of view. Finally, a typical comparative study between faulty synchronization and three-phase short circuit faults of a hydro generator is presented.
Description of Field-Circuit Model and Main Rated Data of Hydro Generator
A field-circuit model of a hydro generator was utilized in computation. The circuit equations (based on Kirchhoff's laws) for rotor and stator windings were coupled with field equations used to describe the temporal-spatial distribution of the electromagnetic field [13]. The vector magnetic potential A was used to describe the temporal-spatial distribution of the time-varying electromagnetic field. In this way, the partial differential equations were solved. The mathematical description is expressed in Equation (1) for the low-frequency range. The time-varying electric and magnetic fields are calculated using Equation (2) for the fully coupled dynamic physics solution implemented in Ansys Maxwell software. The current density vector is expressed by yjr vector magnetic potential A and scalar electrical potential V.
where J is the current density vector, µ is the magnetic permeability, σ is the electric conductivity, and ν is the velocity vector of the environment moving relative to the electromagnetic field. In a two-dimensional model, the vector magnetic potential has only one component. When knowing the mean values of the potential in the cross-section of the winding conductors (s i ) and the effective machine length l e , the mean value of the flux associated with k-th winding can be expressed by Equation (3). The electromagnetic state of the k-th winding is described by the Kirchoff equation (Equation (4)).
where u k , i k , Ψ k are the instantaneous values of voltage, current, and flux coupled to the k-th winding.
The two-dimensional model of the hydro generator was created in Ansys Maxwell software dedicated to finite element analysis of electromagnetic field distribution. The investigated hydro generator (type: GCV-1610M, produced in December 2019) was a vertical salient pole machine with static excitation (the slip rings were on the shaft). The ratings of the generator are presented in Table 1. The stator core and rotor poles were laminated. The rotor winding was wound around the poles. The rotor possessed a damper cage located in the pole shoes. These bars were short-circuited by ring-shaped segments in all poles. The bars and ring-shaped segments were made of copper. The field-circuit model of the hydro generator reflected nonlinear magnetizing curves of rotor and stator cores, possibilities of inducing current in damper bars, rotor movement, and external electrical circuit with voltage sources. However, the skin effects in stator coils and eddy currents in the stator and rotor laminations were neglected. The two-dimensional model was reduced to half of the cross-section because of geometrical symmetry and electromagnetic periodicity. The investigated region, together Energies 2020, 13, 5491 5 of 21 with the finite element mesh, is shown in Figure 1. The analyzed area contained 15,712 finite elements and 31,372 nodes. This model had two boundary conditions to solve the electromagnetic field equations. The first Dirichlet boundary condition was located at the outer stator diameter (edge Γ 2 ), where the vector magnetic potential was equal to 0. The periodic condition of the magnetic potential was set at edge Γ 1 . The numbering of rotor poles and damper bars refer to calculation results, where the waveform of current in each bar is shown. The main dimensions of the geometry are presented in Table 2. Figure 2 presents the scheme of the electrical circuit which contained stator and rotor windings, as well as the distribution of rotor damper bars. The armature winding circuit was extended by resistance (R S ) and inductance (L Sew ), representing the end-winding part. Additionally, there were added resistances and inductances representing the transformer (R TR , L TR ) and the power system (R PR , L PR ). The stator winding was supplied with three-phase voltage sources shifted with respect to each other by 120 • (U 1 , U 2 , U 3 ). The field part of the excitation winding circuit was extended by resistance (R F ) and inductance (L Few ), representing the end-winding part. The excitation winding was Energies 2020, 13, 5491 6 of 21 supplied by direct current (DC) voltage source U F . The excitation voltage was modeled as a constant voltage source; therefore, no actuation of the automatic voltage regulator was taken into account. R Bew and L Bew contained fragments of segments of the short-circuiting rotor cage between bars in one pole, whereas R B2ew and L B2ew were part of the ring-shaped segments between rotor poles. Switching the circuit breaker (S 1 , S 2 , and S 3 ) on or off enabled the hydro generator to be analyzed in various scenarios of faulty synchronization. During the simulation, it was assumed that the generator was connected to the power system, which possessed a short-circuit power equal to 15,000 MVA (i.e., a strong system) [14,15]. Computed inductances and resistances used in the simulations are shown in Table 3. The names of parameters refer to the circuit model shown in Figure 2. Table 3. Computed resistances and reactances of the power system and unit transformer.
Symbol
Unit Value The created field-circuit model of the hydro generator allowed calculating waveforms of electromagnetic quantities in steady and transient states. This model was verified experimentally on the basis of no-load and three-phase short-circuit curves calculated and compared with the measurements obtained during running tests. These curves are presented in Figure 3. The computed reactances and time constants of the generator are presented in Table 4.
Symbol
Unit Value On the basis of the no-load and three-phase short-circuit characteristics, it could be concluded that the measurements did not differ significantly from the calculation results obtained from the field-circuit model of the hydro generator. Therefore, this model was utilized to calculate the faulty synchronization for different values of voltage phase shift.
Analysis of Faulty Synchronization
The calculations of faulty synchronization were prepared for different voltage phase shift angles in the range of [−180 • ; 180 • ] with a 5 • step. This was a case when the generator and power system voltages were in sequence, amplitude, and frequency, but not in phase. When the voltage phase shift was in the range of [−180 • ; 0 • ], the power system voltage lagged with respect to generator voltage ( Figure 4a); however, in the range of [0 • ; 180 • ], the power system voltage led with respect to generator voltage (Figure 4b). The study was extended to include the effect of increased generator voltage up to 1.05 U N according to IEEE Standard C50.12 [5].
The investigated hydro generator was modeled in a single-machine system, as shown in Figure 5. In the simulation, the generator was connected to the power system at instant t = 0.02 s. During the simulation, the following physical quantities were obtained: electromagnetic torque, stator current, stator voltage, field current, currents in rotor bars, and active and reactive power in the power Energies 2020, 13, 5491 8 of 21 system bus. The frequency during the simulations was 50 Hz. The rotor speed was equal to 600 rpm before synchronization. The simulations were carried out for different voltage phase shift angles to determine the instantaneous and peak shaft torques. Figure 6a shows the maximum amplitude of electromagnetic torque. The maximum value was for −120 • of voltage phase shift, when the power system lagged with respect to generator voltage. If we took into account the absolute sum of two maximum electromagnetic torques with opposite directions (Figure 6b), then the maximum value (peak-to-peak) was obtained for 180 • of voltage phase shift. The significant differences between cases of U = 1.00U N and U = 1.05U N were visible when the voltage phase shift was higher than 60 • and below −60 • . The statement of computed extreme values is shown in Table 2. Figure 7a,b present the maximum amplitudes of stator current and the minimum amplitude of stator voltage, respectively. When the voltage phase shift was equal to 180 • , the maximum value of stator current and the minimum value of stator voltage were observed. For the stator current, significant differences between cases of U = 1.00U N and U = 1.05U N were visible when the angle was higher than 90 • and below −90 • , whereas, for stator voltage, there were no visible differences in the whole range.
Energies 2020, 13, x FOR PEER REVIEW 9 of 22 Figure 7a,b present the maximum amplitudes of stator current and the minimum amplitude of stator voltage, respectively. When the voltage phase shift was equal to 180°, the maximum value of stator current and the minimum value of stator voltage were observed. For the stator current, significant differences between cases of U = 1.00UN and U = 1.05UN were visible when the angle was higher than 90° and below −90°, whereas, for stator voltage, there were no visible differences in the whole range. The maximum amplitude of induced current in the excitation winding is presented in Figure 8a. The maximum value was for the angle of 180°. Differences between cases of U = 1.00UN and U = 1.05UN were visible in the whole range. Figure 8b shows the maximum amplitude of the induced current in rotor bars in the active part and end-winding part (shorted ring segment between bars). The induced current in the end-winding part was almost twofold higher than the induced current in the active part. The maximum value of induced current in the active part was observed for 130°, whereas that in the end-winding part was observed for −130°. According to basic knowledge of hydro generator design, the selected diameter of rotor bars was equal to 16 mm (cross-section = 201 mm 2 ), whereas the cross-section of the segments of the short-circuiting rotor cage between rotor bars was equal to 500 mm 2 . It follows from this that the end-winding part of rotor bars was able to withstand currents 2.5-fold higher than those existing in the rotor bar active part. Figures 9 and 10 show the distribution of maximum values of induced current in each rotor bar and each end-winding part, respectively. These distributions were determined for a voltage phase shift of 180°. The bar numeration was as follows: the first number denotes the number of rotor poles (according to numeration in Figure 1), followed by the number of bars in the pole (direction: counterclockwise), and the double number (after the number of poles) denotes the segment between two rotor bars (e.g., Segment 1-12). In the case of a segment between two poles, the numeration was as follows: the first number denotes the number of poles, followed by the number of bars, the second number of poles, and the number of bars in the second pole (e.g., Segment 1-8-2-1).
The maximum amplitudes existing in the active part of rotor bars were close to the extremes of the rotor poles, whereas the lower values were in bars located in the center of poles. The opposite situation existed in the case of induced current in the end-winding part of rotor bars. Figure 11a presents the generated and absorbed active power by the investigated hydro generator. Due to the changing nature of the electric circuit, along with the change in voltage phase shift, there were local extremes in the graph. The maximum value of absorbed active power was equal The maximum amplitude of induced current in the excitation winding is presented in Figure 8a. The maximum value was for the angle of 180 • . Differences between cases of U = 1.00U N and U = 1.05U N were visible in the whole range. Figure 8b shows the maximum amplitude of the induced current in rotor bars in the active part and end-winding part (shorted ring segment between bars). The induced current in the end-winding part was almost twofold higher than the induced current in the active part. The maximum value of induced current in the active part was observed for 130 • , whereas that in the end-winding part was observed for −130 • . According to basic knowledge of hydro generator design, the selected diameter of rotor bars was equal to 16 mm (cross-section = 201 mm 2 ), whereas the cross-section of the segments of the short-circuiting rotor cage between rotor bars was equal to 500 mm 2 . It follows from this that the end-winding part of rotor bars was able to withstand currents 2.5-fold higher than those existing in the rotor bar active part. Figures 9 and 10 show the distribution of maximum values of induced current in each rotor bar and each end-winding part, respectively. These distributions were determined for a voltage phase shift of 180 • . The bar numeration was as follows: the first number denotes the number of rotor poles (according to numeration in Figure 1), followed by the number of bars in the pole (direction: counterclockwise), and the double number (after the number of poles) denotes the segment between two rotor bars (e.g., Segment 1-12). In the case of a segment between two poles, the numeration was as follows: the first number denotes the number of poles, followed by the number of bars, the second number of poles, and the number of bars in the second pole (e.g., Segment 1-8-2-1).
The maximum amplitudes existing in the active part of rotor bars were close to the extremes of the rotor poles, whereas the lower values were in bars located in the center of poles. The opposite situation existed in the case of induced current in the end-winding part of rotor bars. Figure 11b presents the generated and absorbed reactive power by the investigated hydro generator. Due to the changing nature of the electric circuit, along with the change in voltage phase shift, there were local extremes in the graph, as in the case with active power. The maximum value of absorbed reactive power was equal to 5.4-6.4 QN. A higher reactive power was absorbed when the voltage phase shift during faulty synchronization was in the range of [−120°; 120°], whereas the maximum generated reactive power did not exceed 9. Figure 11a presents the generated and absorbed active power by the investigated hydro generator. Due to the changing nature of the electric circuit, along with the change in voltage phase shift, there were local extremes in the graph. The maximum value of absorbed active power was equal to 5.8-6.6 P N , where a higher value refers to a higher value of stator voltage at the terminals during synchronization. A higher active power was absorbed when the voltage phase shift during faulty synchronization was in the range of [−180 • ; 0 • ], i.e., when the power system voltage lagged with respect to generator voltage, whereas the maximum generated active power occurred in the opposite situation, i.e., when the power system voltage led with respect to generator voltage. The maximum value of generated active power did not exceed 7.2 P N . Figure 11b presents the generated and absorbed reactive power by the investigated hydro generator. Due to the changing nature of the electric circuit, along with the change in voltage phase shift, there were local extremes in the graph, as in the case with active power. The maximum value of absorbed reactive power was equal to 5.4-6.4 Q N . A higher reactive power was absorbed when the voltage phase shift during faulty synchronization was in the range of [−120 • ; 120 • ], whereas the maximum generated reactive power did not exceed 9.
Analysis of Out-of-Phase Synchronization of −120°
Simulations were carried out for out-of-phase synchronization of −120°. A negative sign denotes that the power system voltage lagged with respect to generator voltage (as shown in Figure 4a). This was the worst case of examined faulty synchronization, because the highest electromagnetic torque appeared. The value of voltage at the terminal before synchronization was equal to 1.05 UN (maximum acceptable value according to [5,6]). The connection of the hydro generator to the power system took place at t = 0.02 s. The simulation step was equal to 0.2 ms. Figure 12 presents the waveform of speed. At the beginning, there were fluctuations which disappeared after 3.5 s, and the investigated hydro generator returned to a steady state. The speed fluctuations depended on electromagnetic torque and the moment of inertia for all rotating
Analysis of Out-of-Phase Synchronization of −120 •
Simulations were carried out for out-of-phase synchronization of −120 • . A negative sign denotes that the power system voltage lagged with respect to generator voltage (as shown in Figure 4a). This was the worst case of examined faulty synchronization, because the highest electromagnetic torque appeared. The value of voltage at the terminal before synchronization was equal to 1.05 U N (maximum acceptable value according to [5,6]). The connection of the hydro generator to the power system took place at t = 0.02 s. The simulation step was equal to 0.2 ms. Figure 12 presents the waveform of speed. At the beginning, there were fluctuations which disappeared after 3.5 s, and the investigated hydro generator returned to a steady state. The speed fluctuations depended on electromagnetic torque and the moment of inertia for all rotating components. In this study, it was assumed that the moment of inertia of the water turbine as twofold higher than the moment of inertia of the hydro generator. The calculated maximum value of electromagnetic torque was 9.94 T N (Figure 13), and the largest values were turned in one direction. Significantly lower values were observed in the opposite direction. Figure 14 presents the stator current. The calculated maximum value was equal to 8.90 I SN . Stator currents reduced to zero after seven periods and then increased and oscillated with a period of ca. 0.3 s. A significant increase in stator current caused a drop in terminal voltage to a value of 0.66 U SN (seen in Figure 15). After one period, the stator voltage was rebuilt and returned to its value before synchronization within 3.5 s. During the simulations, these was a lack of excitation regulation in order to show how the field current changed during faulty synchronization. The maximum computed value of field current was equal to 2.81 I FN (Figure 16). When the maximum value was reached, the field current returned to the value before synchronization, which allowed inducing a value of 1.05 U SN on the stator terminals during the no-load state. During the simulations, these was a lack of excitation regulation in order to show how the field current changed during faulty synchronization. The maximum computed value of field current was equal to 2.81 IFN (Figure 16). When the maximum value was reached, the field current returned to the value before synchronization, which allowed inducing a value of 1.05 USN on the stator terminals during the no-load state. Figure 18. Figure 19 shows the distribution of maximum values of each current induced in the rotor's circuited segments. The maximum values of currents were in segments located in the center of the pole (poles No. 1 and 2) and close to the extremes of rotor poles (poles No. 3, 4, and 5). The maximum current appeared in a segment between rotor bars No. 3-8 and No. 4-1 (Segment 3-8_4-1). The waveform of this current is presented in Figure 20.
Waveforms of active and reactive power are presented in Figure 21a,b, respectively. An active power above zero denotes that power was absorbed from the power system. The same situation applied to reactive power, whereby a reactive power above zero denotes that power was absorbed from the power system, and a value below zero means that active or reactive power was generated to the grid. The computed active power was huge due to the fact that, at the first moment of analyzed faulty synchronization (voltage phase shift equal to −120 • ), the generator speed decreased and there was a need to absorb active power from the power system to keep the machine in synchronism. The huge stator currents forced a drop in stator voltage and there was a need to absorb reactive power in order to magnetize the rotor and stator cores. After a 1/4 period of faulty synchronization, the highest saturation of cores appeared, especially in the rotor pole and yoke, which significantly exceeded the acceptable value (1.5 T). The saturation in the stator yoke was exceeded as well. The maximum computed value was equal to 1.9 T, which is above the acceptable value for a salient pole synchronous machine (1.4 T). The current densities in rotor bars did not exceed 120 × 10 6 A/m 2 .
At a time of 0.03 s, the maximum value of current densities in the rotor appeared and was equal to 180 × 10 6 A/m 2 . Huge saturation existed only in the rotor pole and yoke. Saturation in the stator teeth and yoke was within the limits. The magnetic field inside the machine was significantly distorted.
At a time of 0.302 s, the most visible field distortion was observed. There was a lack of pole symmetries. Saturation in the stator core was the same as before the faulty synchronization, whereas that in the rotor pole slightly exceeded 1.5 T. The current densities in rotor bars were no higher than 50 × 10 6 A/m 2 .
Comparison of Faulty Synchronization with Sudden Short-Circuit State
Hydro generators are designed to withstand the thermal and stress effects in severe emergencies. One such state is sudden three-phase short circuit. Emerging electrodynamic forces on the end-winding part of the stator winding during sudden short circuit were taken into account in the calculation in order to predict the lifetime of this construction element. The computed value of electromagnetic torque was used to estimate the stresses on the shaft. The thermal resistance of the conductive rotor parts was assessed on the basis of verified analytical equations. The temperature increase as a function of the inducted current in the rotor bar and circuited segments did not cause overheating of these construction parts before the expiry of the protection operation.
Simulations were carried out for the three-phase short-circuit fault which occurred suddenly. Before the analyzed state, the investigated hydro generator worked with no load. The three-phase stator winding was short-circuited at 0.1 s. The terminal voltage before the shorted circuit was equal to U N .
The calculated maximum value of electromagnetic torque (Figure 25a) was equal to 8.39 T N and was less than the maximum value obtained from the faulty synchronization (9.35 T N ), whereas the absolute sum of the two maximum electromagnetic torques with opposite directions was equal to 14.16 T N and was significantly higher than the value obtained from faulty synchronization (10.6 T N ). Figure 25b,c present the waveforms of stator current and field current, respectively. The amplitudes of stator and field current were less than those obtained during faulty synchronization.
The same situation was observed in the case of induced current in rotor bars and circuited segments, which produced values lower than those for faulty synchronization. The distributions of maximum values of these currents are shown in Figures 26 and 27. The maximum values of induced current in the rotor bar were close to the extremes of rotor poles, while smaller values were noted in the bars located in the center of the rotor pole. A different situation was observed in the case of current in the rotor's circuited segment. The maximum values were located in the center of rotor pole, whereas smaller values were found in the extremes of rotor poles. The waveforms of the maximum current in the rotor bar (bar No. [5][6][7][8] and in the circuited segment (segment No. 4-45) are presented in Figure 28a,b, respectively. A comparison of faulty synchronization (for U N ) with the three-phase sudden short circuit is shown in Table 5. Only the amplitude of electromagnetic torque (T MAX 0-PEAK ) obtained from the analyzed faulty synchronization was higher (by ca. 11%) than the value coming from the sudden short circuit state. A different situation occurred in the case of the absolute sum of the two maximum Energies 2020, 13, 5491 20 of 21 electromagnetic torques with opposite directions, where a higher value was obtained for the three-phase sudden short circuit.
Conclusions
In this article, we analyzed the impact of voltage phase shift during faulty synchronization on physical quantities such as electromagnetic torque, stator current, terminal voltage, field current, induced current in the rotor conductive part, and active and reactive power.
The utilization of the finite element method allowed observing electromagnetic phenomena inside the investigated generator, which until now were unknown. Most published studies used simplified computational models which commutate only the maximum instantaneous values of the stator current and electromagnetic torque. The influence of local core saturations and the uneven distribution of induced current in the rotor bars were ignored.
The greatest thermal hazards existed for a voltage phase shift of 180 • . For this angle, the stator current, field current, and induced current in the rotor bars and circuited segments were the highest. The greatest electrodynamics forces on the stator end-winding part existed for this angle as well.
The greatest mechanical hazards existed for the voltage phase shift of −120 • (when the power system voltage lagged with respect to generator voltage). In this state, the greatest electromagnetic torque appeared and caused a severe torsional moment on the shaft. The amplitude of electromagnetic torque was higher than that obtained from the three-phase sudden short circuit, whereas, in the case of the absolute sum of the two maximum electromagnetic torques with opposite directions, the maximum value was noted for a phase shift angle equal to 180 • during the sudden short circuit.
Larger values of induced current in the rotor conductive parts were observed for the three-phase sudden short circuit but they disappeared relatively quickly (ca. 0.3 s, Figure 26). The induced current decay in the rotor bars was significantly slower and lasted ca. 2 s (Figures 18 and 20). This potentially caused strong heating of the rotor during faulty synchronization.
The stator current and electromagnetic torque did not exceed the rated values during faulty synchronization when the voltage phase shift was less than 10 • . | 8,388.8 | 2020-10-20T00:00:00.000 | [
"Engineering",
"Physics",
"Environmental Science"
] |
Genetic and structural validation of Aspergillus fumigatus N-acetylphosphoglucosamine mutase as an antifungal target
Aspergillus fumigatus is the causative agent of IA (invasive aspergillosis) in immunocompromised patients. It possesses a cell wall composed of chitin, glucan and galactomannan, polymeric carbohydrates synthesized by processive glycosyltransferases from intracellular sugar nucleotide donors. Here we demonstrate that A. fumigatus possesses an active AfAGM1 (A. fumigatus N-acetylphosphoglucosamine mutase), a key enzyme in the biosynthesis of UDP (uridine diphosphate)–GlcNAc (N-acetylglucosamine), the nucleotide sugar donor for chitin synthesis. A conditional agm1 mutant revealed the gene to be essential. Reduced expression of agm1 resulted in retarded cell growth and altered cell wall ultrastructure and composition. The crystal structure of AfAGM1 revealed an amino acid change in the active site compared with the human enzyme, which could be exploitable in the design of selective inhibitors. AfAGM1 inhibitors were discovered by high-throughput screening, inhibiting the enzyme with IC50s in the low μM range. Together, these data provide a platform for the future development of AfAGM1 inhibitors with antifungal activity.
INTRODUCTION
Aspergillus fumigatus is a human fungal pathogen capable of causing infections ranging from allergic to invasive disease [1], and the major cause of IA (invasive aspergillosis) in immunocompromised patients [2]. In these patients, the crude mortality is 30-95 % and remains about 50 % even when treatment is given [3,4]. Antifungal drugs such as azoles, polyenes and candins are chains of α-glucan, galactomannan and polygalactosamine [8]. Chitin, accounting for approximately 10-20 % of the cell wall [9], is synthesized by chitin synthases that use UDP (uridine diphosphate)-GlcNAc as the sugar donor. In addition, UDP-GlcNAc is also utilized in the biosynthesis of cell wall mannoproteins and GPI (glycosylphosphatidylinositol)-anchored proteins [10,11].
In eukaryotes, UDP-GlcNAc (N-acetylglucosamine) is synthesized from Fru-6P (fructose 6-phosphate) by four successive reactions: (i) the conversion of Fru-6P into GlcN-6P (glucosamine 6-phosphate) by GFA1 (glutamine: Fru-6P amidotransferase); [12] the acetylation of GlcN-6P into GlcNAc-6P by GNA1 (GlcN-6P acetyltransferase); (iii) the interconversion of GlcNAc-6P into GlcNAc-1P (N-acetylglucosamine-1phosphate) by AGM1 (N-acetylphosphoglucosamine mutase); and (iv) the uridylation of GlcNAc-1P into UDP-GlcNAc by UAP1 (UDP-GlcNAc pyrophosphorylase) [13]. The third enzyme, AGM1, is a member of the α-D-phosphohexomutase superfamily, that catalyses intramolecular phosphoryl transfer on a range of phosphosugar substrates [14]. AGM1 has been isolated and characterized from Saccharomyces cerevisiae, Candida albicans and Homo sapiens [15][16][17][18]. It has been reported that the AGM1 enzyme requires a divalent metal ion such as Mg 2 + as a co-factor, but the reaction is inhibited by Zn 2 + ions [19,20]. The sequence motif Ser/Thr-X-Ser-His-Asn-Pro is highly conserved and priming phosphorylation of the serine at the third position is required for full activity [15,[21][22][23]. To date, only the crystal structure of CaAGM1 (Candida albicans AGM1) has been reported, revealing four domains arranged in a 'heart-shape' [14]. The overall structure is similar to those of phosphohexomutases such as phosphoglucomutase/phosphomannomutase from Pseudomonas aeruginosa [24]. The agm1 gene is essential for cell viability in S. cerevisiae [17]. Mice lacking the agm1 homologue (pgm3) die prior to implantation, whereas heterozygotes have intrinsic haematopoietic and reproductive defects [25]. Although AGM1 has been proposed as a potential drug target, the issue of selectivity has not been explored and to date no drug-like inhibitor has been described for this class of enzyme.
Here, we show that A. fumigatus possesses a functional AGM1 enzyme that is essential for cell viability and cell wall synthesis. A crystal structure of the enzyme revealed the possible exploitable differences in the active site compared with the human enzyme. Using a high-throughput screening approach, we identified the first low micromolar inhibitors for this enzyme.
A. fumigatus strain KU80 pyrG − derived from KU80 pyrG + [27], a kind gift from Jean-Paul Latgé, Institut Pasteur, France, was propagated at 37 • C on YGA (0.5 % yeast extract, 2 % glucose, 1.5 % Bacto-agar) with addition of 5 mM uridine and uracil. The Aspergillus nidulans alcA promoter (P alcA ) was induced by growing on the MM (minimal medium) [28] with 0.1 M glycerol, 0.1 M threonine or 0.1 M ethanol as carbon sources. YEPD (2 % (w/v) yeast extract, 2 % (w/v) glucose and 0.1 % (w/v) peptone) medium and CM (complete medium) [29] were utilized to repress the P alcA completely and partially, respectively. Strains were grown in liquid medium at 37 • C, with shaking at 200 rev./min. At the specified culture time point, mycelia were harvested, washed with distilled water, frozen in liquid N 2 and then ground using a mortar and pestle. The powder was stored at − 70 • C for DNA, RNA and protein extraction.
Conidia were prepared by growing A. fumigatus strains on solid medium with or without uridine and uracil for 48 h at 37 • C. The spores were collected, washed twice then resuspended in 0.1 % (v/v) Tween 20 in saline solution, and the concentration of spores was confirmed by haemocytometer counting and viable counting.
Cloning of agm1
The coding sequence of Af AGM1 (A. fumigatus Nacetylphosphoglucosamine mutase) (accession: XP_750370) was amplified by PCR from an A. fumigatus cDNA library (kindly provided by Jean-Paul Latgé, Institut Pasteur, France) using the forward primer P1 (5 -GCGAATTCATGGCGTCTCCAGCCG-TTCGC-3 ) and the reverse primer P2 (5 -CTGCGGCCGC-TTAAGAAGCCTGCAAGATTTCTTTGACGGTG-3 ), exploiting the EcoRI and NotI restriction sites, for cloning into pGEX-6P1 (GE Healthcare) following a modification that removed the BamHI site from the original pGEX-6P1 vector such that the EcoRI site immediately followed the PreScission Protease coding sequence. The cloned protein sequence had a deletion of the amino acid residues VSSYGTFDGGMKGEFAD, corresponding to residues 85-101 of the reference XP_750370 sequence, but the protein sequence alignment with Aspergillus clavatus (XP_001269528) and Neosartorya fischeri (XP_001265046) Nacetylglucosamine-phosphate mutase sequences suggested that this deletion is most likely because of alternative splicing. All plasmids were verified by sequencing using the University of Dundee sequencing service.
Construction of the conditional inactivation mutant
Plasmid pAL3 containing the P alcA and the Neurospora crassa pyr-4 gene as a fungal selectable marker [30] was employed to construct a suitable vector allowing the replacement of the native promoter of the A. fumigatus agm1 gene with the P alcA . To this end, an 898 bp fragment from − 32 to + 866 of agm1 was amplified with primers P3 (5 -GGGGTACCACACGACT-TTCGCCAGGTC-3 , containing a KpnI site) and P4 (5 -G-CTCTAGATCCTTGCTCAGTAGGCTCAC-3 , containing an XbaI site). The PCR-amplified fragment was cloned into the expression vector pAL3 to yield pALAGM1N and confirmed by sequencing. The pALAGM1N was used to transform strain KU80 pyrG − by PEG-mediated fusion of protoplasts [31] and positive transformants were selected by uridine/uracil autotrophy.
Genotyping of the transformants was performed by PCR and Southern blot analysis. For PCR analysis, three pairs of primers were employed. Primers P5 (5 -ATGGCGTCTCCAGCCGTT-3 ) and P6 (5 -TTAAGAAGCCTGCAAGATTTC-3 ) were used to amplify the agm1 gene (2 kb). Primers P7 (5 -AAACGCAA-ATCACAACAGCCAAC-3 ) and P8 (5 -CTATGCCAGACG-CTCCCGG-3 ) were used to amplify the pyr-4 gene (1.2 kb). Primers P9 (5 -TCGGGATAGTTCCGACCTAGGA-3 ) and P10 (5 -TGATGCCAATACCCATCCGAG-3 ) were used to amplify the fragment from the P alcA to the downstream flanking region of the agm1 gene (2.8 kb). For Southern blotting, genomic DNA was digested with PstI, separated by electrophoresis, and transferred to a nylon membrane (Zeta-probe + , Bio-Rad). The 898-bp fragment of agm1 and a 1.2 kb HindIII fragment of the N. crassa pyr-4 gene from pAL3 were used as probes. Labelling and visualization were performed using the DIG DNA labelling and detection kit (Roche Applied Science) according to the manufacturer's instructions.
Quantitative PCR
Total RNA from the spores cultured in liquid MM was extracted using Trizol reagent (Invitrogen). cDNA synthesis was performed with 5 μg RNA using the SuperScript-First-Strand Synthesis System (Fermentas). Primers P11 (5 -TGTTGGAAGCTGAATGG-GAAGC -3 ) and P12 (5 -CGATCTCCTTAAC CAATTCGTCG -3 ) were used to amplify a 96-bp fragment of agm1, and primers P13 (5 -CCACCTTGCAAAACATTGTT-3 ) and P14 (5 -TACTCTGCATTTCGCGCATG-3 ) were used for an 80-bp fragment of tbp gene (encoding TATA-box-binding protein). To exclude contamination of cDNA preparations with genomic DNA, primers were designed to amplify regions containing one intron in the gene [32,33]. Each PCR reaction mixture (20 μl) contained 8 μl sample cDNA, 0.4 μl ROX Reference Dye and 10 μl SYBR Premix Ex Taq TM from the SYBR Premix Ex Taq TM Kit (TAKARA), 0.8 μl ddH 2 O and 0.2 μM of each pair of primers. Thermal cycling conditions were 50 • C for 2 min and 95 • C for 1 min, followed by 40 cycles of 95 • C for 5 s, 60 • C for 60 s. Real-time PCR data were acquired using Sequence Detection software. The standard curve method [34] was used to analyse the real-time PCR data. Samples isolated from different strains and at different times were tested in triplicate.
Electron microscopy and chemical analysis of the cell wall
To monitor the development of the cell wall structures, the conidia and mycelia grown in solid and liquid MM were fixed and examined with an H-600 electron microscope as described by Li et al. [35]. For the chemical analysis of the cell wall, conidia were inoculated into 100 ml MM or MMG liquid medium at a concentration of 10 6 conidia ml − 1 and incubated at 37 • C with centrifugation (200 rev/min) for 36 h. The mycelium was harvested, washed with deionized water and frozen at − 80 • C. The cell wall components were isolated and assayed as described previously [36]. Three independent samples of lyophilized mycelial pad were used for cell wall analysis, and the experiment was repeated twice.
AfAGM1 production and purification
LB medium (1 l) containing 0.1 mg ml − 1 ampicillin was inoculated with 10 ml of an overnight culture of BL21 (DE3) pLysS cells harbouring the plasmid, and grown at 37 • C to A 600 = 0.8; at this absorbance , the temperature was reduced to 20 • C and protein expression was induced by the addition of 250 μM IPTG (isopropyl β-D-thiogalactoside) and the incubation time was prolonged for a further 18 h. The cells were harvested by centrifugation at 3500 g at 4 • C for 30 min, resuspended in Tris buffer (25 mM Tris, 150 mM NaCl, pH 7.5) containing lysozyme, DNAse (Sigma) and a tablet of protease inhibitor cocktail (Roche). Cells were lysed using a French press at 1000 psi. The insoluble fraction was removed by centrifugation at 40 000 g for 30 min and the supernatant was incubated with Glutathione Sepharose 4B beads (GE Healthcare) previously equilibrated with the same buffer for 2 h. The beads were collected by centrifugation at 1000 g for 3 min and washed using the same buffer. The beads were then incubated with PreScission protease in the same buffer at 4 • C on a rotating platform overnight. The cleaved protein was filtered from the beads, concentrated and confirmed by SDS-PAGE. In the last stage of purification, the protein was passed through a Su-perdex75 gel filtration column (2.6 × 60 cm) (Amersham Biosciences), previously equilibrated with 25 mM Tris buffer containing 150 mM NaCl, pH 7.5. Concentrated protein (5 ml) was loaded onto the column and eluted using the same buffer at 1.0 ml min − 1 flow rate. Approximately 5 ml fractions were collected and fractions containing the protein were pooled and concentrated using a 10-kDa cut-off Vivaspin concentrator (GE Healthcare).
Liquid chromatography-Tandem MS
Reduction and alkylation was performed on pure Af AGM1 protein prior to digestion by trypsin. The resulting peptides were dried down and reconstituted in 0.1 % (v/v) formic acid. Peptides separated on a nano-C18 reverse phase column using a Dionex 3000 ultimate n-HPLC coupled to an LTQ-Orbitrap Velos mass spectrometer (Thermo Fisher). The mass spectrometer operated on a data-dependent CID mode, allowing automatic switching after taxonomy: 207946; taxonomy specified: Aspergillus, with a Precursor Mass tolerance of 10 ppm and fragment mass tolerance of 0.6 kDa. Dioxidation (M), oxidation (M), phospho (STY) were allowed as variable modifications and carbamidomethyl (C) modification was fixed. The analysis layout included a phosphoRS node to calculate the probability of phosphorylation site mapping. The site mapping spectrum was manually inspected and validated.
Protein crystallography 20 mg ml − 1 of pure Af AGM1 protein in 25 mM Tris buffer, 150 mM NaCl with pH 7.5 was used to screen for crystals at 20 • C using the sitting-drop vapour diffusion method. Each drop contained 0.6 μl of the protein solution with an equal volume of the mother liquor. To obtain the Af AGM1-Mg 2 + complex, the protein was incubated at 4 • C with 5 mM MgCl 2 for 4 h before setting up crystal trays. The complex crystallized after 2-3 days in the space group P2 1 2 1 2 1 (Table 3) from a mother liquor containing 0.1 M ammonium sulfate, 0.1 M sodium acetate trihydrate, pH 4.6. X-ray data from the Af AGM1 crystal were collected at the BM14 beam line of the European Synchrotron Radiation Facility (ESRF, Grenoble, France). Crystals were cryo-protected with 15 % (v/v) glycerol in mother liquor and frozen in a nitrogen gas stream at 100 K. Data were processed with HKL2000 [37]. The structure of Af AGM1 was solved by molecular replacement using MOLREP [38] with the CaAGM1 structure (PDBID 2DKA) [14] as the search model. Refinement was performed with REFMAC5 [39] and model building with COOT [40]. Pictures were generated using Pymol [41].
Enzyme kinetics
Four methods were used to measure Af AGM1 activity. The first was a coupled assay with G6PDH as described by Liu et al. [42]. Briefly, the assay was carried out in a 100 μl reaction volume containing 50 mM MOPS pH 7.4, 1.5 mM MgSO 4 , 1 mM DTT (dithiothreitol), a range of concentrations of Glc-1P, 1 mM NAD + and 0.01 units of G6PDH. The reaction was started by the addition of 10 nM Af AGM1 and incubated for 60 min at 20 • C. The amount of NADH produced was measured using the micro plate fluorescence reader (FL×800).
A second assay involved UAP1 and pyrophosphatase as coupling enzymes, as described by Mok and Edwards [43]. The reaction mixture (100 μl) contained 50 mM MOPS pH 7.4, 1.5 mM MgSO 4 , 250 μM UTP, varying concentrations of GlcNAc-6P (2.5-300 μM), 100 nM Af AGM1, 0.5 μM Af UAP1 and 0.04 units pyrophosphatase to convert the Af UAP1 reaction product PPi to inorganic phosphate. The reaction was incubated at 20 • C for 30 min and terminated by the addition of 100 μl Biomol green (0.03 % (w/v) malachite green, 0.2 % (w/v) ammonium molybdate and 0.5 % (v/v) Triton X-100 in 0.7 N HCl) and left for a further 20 min at 20 • C for colour development. Absorbance at 620 nm was read using a spectrophotometer.
A third assay involved coupling with UDP-glucose pyrophosphorylase using Glc-6P as the substrate, using the Biomol green assay as described [26].
A fourth assay was used to monitor product formation directly. The reaction mixture (100 μl) contained 50 mM MOPS buffer pH 7.4, 1.5 mM MgSO 4 , 1 mM Glc-1P and 20 nM Af AGM1. The reaction was incubated at 20 • C for 30 min and terminated by adding 100 μl of 0.2 M NaOH. The samples were analysed by High Performance Anion Exchange Chromatography coupled to a Pulse Amperometric Detector (HPAEC-PAD, Dionex) using a CarboPac PA1 column and conditions adapted from Zhou et al. [44]. Briefly, a linear gradient from 150 mM sodium acetate (Merck), 0.1 M NaOH to 400 mM sodium acetate, 0.1 M NaOH was applied over 25 min, before lowering the concentration of sodium acetate back to the initial conditions over 5 min. The column was then re-equilibrated at 150 mM sodium acetate, 0.1 M NaOH for 5 min. The flow rate was kept constant at 0.25 ml min − 1 .
Inhibitor screening
The Prestwick library (Prestwick chemical, France, 1120 compounds) and the LOPAC library (Sigma, 1280 compounds) were screened at 100 μM using the G6PDH coupled Af AGM1 assay. Compounds with percentage inhibition of 40 % were investigated as possible hits. The compounds were purchased and false positives eliminated by testing inhibition of the coupling enzyme. IC 50 s of the most potent compounds against Af AGM1 were estimated using the direct assay method described above.
In vivo AfAGM1 activity assay and MIC (minimum inhibitory concentration) assay
For in vivo protein extraction, the ground frozen powder was dissolved in 50 mM Tris-HCl pH 7.5 and placed on ice for 30 min. Intracellular proteins were collected by centrifugation. In order to eliminate PPi derived from the intracellular extract, a 10 kDa cut-off concentrator was used with each sample. Protein concentration was determined using the Folin-phenol method [45]. The Af AGM1 activity was determined as described by Mok and Edwards [43].
Three compounds were tested against A. fumigatus according to the Clinical and Laboratory Standards Institute (formerly NC-CLS) M38-A microdilution methodology. Briefly, conidial suspensions of 10 5 ml − 1 were dispensed (100 μl) into a microtiter plate containing serial two-fold dilutions of compounds. After incubating at 37 • C for 48 h, growth was visually inspected. The MIC endpoint was defined as the lowest concentration producing complete inhibition of growth.
A. fumigatus possesses a functional AGM1
A BLASTp search of the A. fumigatus genome revealed the existence of a putative agm1 gene. The coding sequence of the gene was amplified by PCR from an A. fumigatus cDNA library, cloned into pGEX-6P-1 and overexpressed as a GST-fusion protein in Escherichia coli. Purification using glutathione beads followed by GST cleavage and size exclusion chromatography yielded 4 mg of pure Af AGM1 per litre of bacterial culture. A coupled assay with A. fumigatus UAP1 as the coupling enzyme was used to investigate the activity of Af AGM1 towards the predicted physiological substrate, GlcNAc-6P, yielding a K m of 25 + − 8 μM ( Figure 1A, Table 1). This is comparable with the K m of 46 μM obtained for HsAGM1 for the same substrate [15]. Enzymes of the α-D-phosphohexomutase superfamily have been reported to be promiscuous in terms of their phosphohexose substrate specificity [22]. Indeed, Af AGM1 is active on Glc-1P as demonstrated with a different coupled assay, with G6PDH as a coupling enzyme, revealing a K m of 1200 + − 100 μM (Figure 1B, Table 1). This is different from the 12 + − 1 μM K m obtained for PaPMM/PGM [46], suggesting Af AGM1 is more selective for GlcNAc phosphosugars. The presence of glucose-1,6-bisphosphate, an activator normally needed for this superfamily, did not enhance the activity implying that the enzyme may have been purified as the active, phosphorylated form [23]. Af AGM1 is also capable of converting Glc-6P into Glc-1P with a K m of 300 μM, investigated using T. brucei UDP-glucose pyrophosphorylase as the coupling enzyme (Table 1). However, the k cat /K m values for GlcNAc-6P, Glc-1P and Glc-6P were 0.0084, 0.0313 and 0.0006 μM − 1 s − 1 , respectively, demonstrating that Af AGM1 has higher catalytic efficiency for Glc-1P than for GlcNAc-6P and Glc-6P (Table 1).
AfAGM1 is essential for A. fumigatus survival
In fungi, AGM1 catalyzes an important step in the synthesis of UDP-GlcNAc, an important precursor for the synthesis of chitin, a key component of the fungal cell wall. Deletion of AGM1 in S. cerevisiae has been shown to be lethal [13,17]. We first attempted to construct a deletion mutant in A. fumigatus by replacing the agm1 gene with a pyrG gene. A total of 108 transformants were screened, none of them was positive. Therefore a conditional inactivation mutant was constructed by replacing the native promoter of the agm1 gene with P alcA , a tightly regulated promoter that can be induced by ethanol, glycerol or threonine, and repressed completely on the YEPD medium [30,47]. To this end, a plasmid (pALAGM1N) that contains the pyr-4 gene and P alcA fused to a 3 -end truncated version of the agm1 gene was employed in transformation of A. fumigatus KU80 pyrG − to generate a strain carrying the P alcA -agm1 fusion gene by homologous recombination. One mutant named AGM1 was confirmed to be correct by PCR and Southern blot analysis. PCR analysis revealed that a 1217-bp fragment of pyr-4 and a 2764bp fragment of P alcA -agm1 could be amplified from the mutant, while no such fragment was amplified from the wild-type strain (Figure 2A). When an 898-bp fragment of the agm1 gene was used as probe, the expected 4.2 kb fragment was found in the wild-type, whereas the expected 3.3 and 7 kb fragments were detected in the AGM1 strain ( Figure 2B). When the 1.2 kb fragment of the pyr-4 gene was used as a probe, no fragment was found in the wild-type, whereas the expected 7 kb fragment was detected in the AGM1 strain ( Figure 2C). These results demonstrated that the promoter of the agm1 gene was replaced with the P alcA in the AGM1 strain. The AGM1 strain grew normally on the solid MM containing 0.1 M glycerol (MMG), 0.1 M ethanol or 0.1 M threonine at 37 • C for 36 h, whereas its growth was significantly inhibited on MM medium containing 1-3 % glucose and completely retarded on YEPD or CM ( Figure 2D), demonstrating that Af AGM1 is essential for A. fumigatus viability. Total suppression of agm1 expression led to cell death. This suggests that no other members of the phosphohexomutase superfamily can substitute AGM1 in A. fumigatus, although the enzyme itself possesses both phosphoglucomutase and phosphoacetylglucosamine mutase activity ( Figure 1 and Table 1).
In order to investigate the function of Af AGM1, MM containing 1 % glucose (MM) was chosen for subsequent experiments.
Under this condition, total RNAs were prepared from mycelia and the transcription levels of agm1 in mutant and wild-type were examined by real-time quantitative PCR. Using relative standard curve quantitation, the transcription level of the agm1 gene in the AGM1 strain was reduced to 32 % of the wild-type transcript level. Intracellular proteins were extracted from mycelium cells and investigated for Af AGM1 activity using the Af UAP1 coupled assay revealing a 50 % reduction in AGM1 activity.
AfAGM1 is important for cell wall synthesis and ultrastructure
Examination of the ultrastructure of the spore and hyphal cell wall revealed the spore of the AGM1 strain is similar to that of wild-type upon induction of agm1 expression (using MM supplemented with 0.1 M glycerol, MMG) ( Figure 3A). Using gene repression (with MM), the spore and hyphae of strain AGM1 had a thinner cell wall, unable to retain surface melanin ( Figure 3B). Furthermore, the cell wall contents were analysed. With agm1 induction, the cell wall components of the AGM1 strain were similar to those in wild-type. With agm1 suppression, the content of α-glucan and β-glucan was increased by 25 and 33 %, respectively; the amounts of glycoprotein and chitin in strain AGM1 were decreased by 16 and 19 %, respectively; GlcNAc released from cell wall proteins was decreased by 34 % and mannose was increased by 14 % (Table 2). These results suggest that the suppressed expression of agm1 induces a decreased content of chitin and GlcNAc in the A. fumigatus cell wall, presumably as a direct consequence of the decreased pools of the UDP-GlcNAc precursor. Although α/β-glucan contents were found to be increased (perhaps by the activation of the cell wall integrity signalling pathway [48,49]), this did not effectively compensate for the reduction of chitin, indicating that Af AGM1 is essential for cell wall synthesis in A. fumigatus.
AfAGM1 possesses structurally exploitable differences compared with the human enzyme
The agm1 gene disruption provides genetic validation of Af AGM1 as a potential antifungal target in A. fumigatus, jus- Table 3 Details of diffraction data collection and structure refinement Values between brackets are for the highest resolution shell. All measured data were included in structure refinement.
Measurement
AfAGM1 tifying efforts towards the discovery of inhibitors of this enzyme.
Although the structure of human AGM1 has not been reported, this enzyme is 49.3 % identical to Af AGM1 at the amino acid sequence level. Given that mice lacking the agm1 orthologue die prior to implantation [25] it is essential to discover inhibitors that selectively inhibit the fungal enzyme over the human orthologue.
To investigate possible differences in the active site compared with the human enzyme, the crystal structure of Af AGM1 in complex with magnesium was determined to 2.35Å resolution (Table 3), with two protein molecules in the asymmetric unit. The molecules interact via a 553Å 2 contact surface area, suggesting weak crystallographic (rather than physiologically relevant) contacts, in agreement with the gel filtration trace that showed a Figure 4A) forming a heart shape. Domain 1 (residues 1-187) bears the predicted active serine loop, domain 2 (residues 188-305) bears the metal-binding loop, domain 3 (residues 306-442) bears the sugar-binding loop and domain 4 (residues 443-542) bears the phosphate-binding loop. Although the Af AGM1 structure was determined in absence of any substrates or products, all amino acids important for substrate binding and catalysis, as gleaned from the CaAGM1 structure, are conserved, in agreement with the observed catalytic activity of Af AGM1 ( Figure 4B). Strikingly, the electron density revealed a phosphorylated active site S69 ( Figure 4A), which was confirmed separately by mass-spectrometric phosphosite mapping ( Figure 4C), explaining why the enzyme is active in absence of glucose-1,6-bisphosphate (Table 1), an activator normally required to load the active site of this class of enzyme with a phosphate [23,46,50]. It is probably that Af AGM1 became phosphorylated during expression in the E. coli host. It is known that enzymes of the wider family of phosphohexomutases are magnesium-dependent. In Af AGM1, the magnesium ion is pentagonally coordinated in a square-pyramidal arrangement by pSer69, Asp 284 , Asp 286 and Asp 288 ( Figure 4B). This type of coordination has also been described for the structure of CaAGM1 complexed with either GlcNAc-1P or GlcNAc-6P and Zn acting as an inhibitor [14].
Although the catalytic machinery is fully conserved, careful analysis of sequence conservation in the active site area revealed that the human and fungal enzymes possess differences near the substrate-binding site ( Figure 4B). For example, within the loop carrying the catalytic serine, Af AGM1 Ala 73 is equivalent to Glu 68 in the human enzyme. The sugar phosphate-binding loop harbours Ala 506 and Ala 512 in Af AGM1, equivalent to Pro 497 and Val 503 in the human enzyme, respectively. Within the active site itself, located close to the phosphoGlcNAc binding site, Val 425 occupies a position in Af AGM1 that is equivalent to the smaller Ala 416 in the human enzyme ( Figure 4B). Such differences can be exploited in the design of inhibitors to selectively target the fungal enzyme.
Screening-based discovery of micromolar AfAGM1 inhibitors
To identify potential inhibitors of Af AGM1, high-throughput screening of the Prestwick (1120 compounds) and Sigma LOPAC (1280 compounds) libraries was carried out at 100 μM compound concentration. Screening was performed using the G6PDH coupled assay. Compounds with percentage inhibition of 40 % were considered to be hits, corresponding to 84 compounds (3.5 %) of the total screened. These compounds were tested against the coupling enzyme, resulting in a pool of 16 true positive hits. Where possible, structural representatives of different scaffolds were purchased and retested on Af AGM1. A group of anthraquinone-based compounds were found to be the most potent Af AGM1 inhibitors identified from the Prestwick screen with 1,5-diamino-4,8-dihydroxyanthraquinone ( Figure 5A, compound 1) having a K i of 300 + − 13 μM by a non-competitive mixed inhibition mechanism ( Figure 5B). Interestingly, a similar compound, the organic dye Disperse Blue 56 (2-chloro-1,5diamino-4,8-dihydroxyanthraquinone) ( Figure 5A, compound 2) was identified by virtual screening method as an inhibitor of PaPMM/PGM with an IC 50 of 5 μM [42]. The authors observed that inhibition of this enzyme was due to aggregation of the compound. However, no inhibition of PaPMM/PGM was observed when the compound 1 was tested [42]. The selectivity of this compound may be the result of the dissimilarity between Af AGM1 and PaPMM/PGM (sequence identity of only 19.6 %).
The other compounds, compound 3 (6-hydroxyl-DL-DOPA) and compound 4 (tri-sodium 4-[(2z)-2-[4-formyl-6-metyl-5-oxo-3-(phosphonatooxymetyl)-pyridin-2-ylidene] hydrazinyl] benzoate) ( Figure 5A) from the LOPAC screen were tested against Af AGM1 by a direct assay method using HPAEC-PAD (Figure 5C) to avoid the observed slight inhibition of the coupling enzyme and were found to inhibit Af AGM1 activity with IC 50 s of 58 + − 4 μM and 7.1 + − 0.2 μM for the compounds 3 and 4, respectively ( Figure 5D). When compounds 1, 3 and 4 were tested on A. fumigatus cultures, compounds 1 and 3 showed severe precipitation during dilution for the MIC assay, whereas compound 4 did not inhibit growth of either the wild-type or AGM1 mutant strains at concentrations up to 1.4 mM. Although these compounds are not active against A. fumigatus, either because of the solubility of the compounds, limited cell penetration or low efficacy, they are the first low micromolar inhibitors identified for the phosphohexomutase superfamily. Structural complexes of these inhibitors with Af AGM1 and synthesis of derivatives to address solubility, penetration and potency may lead to insights into mode of action and generation of molecules reproducing the genetic phenotype of the agm1 gene knockout.
In conclusion, by combination of genetic and structural approaches we have validated Af AGM1 as a potential antifungal drug target. Together with the novel compounds identified here, these results provide a platform for the development of AGM1 inhibitors that target fungal cell wall synthesis. | 6,904.8 | 2013-07-11T00:00:00.000 | [
"Biology",
"Medicine"
] |
Structure and morphology of ultrathinCo/Ru(0001) films
We follow the layer by layer growth of cobalt on ruthenium by means of low-energy electron microscopy (LEEM) and diffraction (LEED). Around 500 K each layer forms through the nucleation and growth of triangular islands. Fully dynamical calculations of the diffraction intensities establish that the first monolayer (ML) grows pseudomorphically, i.e. following the Ru hexagonal close packed (hcp) stacking sequence. In films thicker than a ML, the in-plane lattice spacing is relaxed and the resulting superstructures produce satellite spots in the diffraction patterns. Our LEED analysis indicates that in two ML films the first Co layer is stacked in two ways on Ru, but there are no stacking faults between the Co layers themselves. In three ML thick regions, additional stacking faults are located at the topmost Co layer. These stacking faults are associated with distinct island shapes and unique selected-area diffraction spectra. Our study supports the simple picture that island orientation reflects the stacking type.
Introduction
An important challenge in the science of thin-film growth is to understand how the atomic structure of film/substrate interfaces comes about during the growth of the films. Bulk crystal structures of substrate-and film-material are in general not identical and therefore, to accommodate mismatch between the materials, defects such as dislocations and stacking faults develop at the film-substrate interfaces.
Understanding how interface structure evolves during the dynamic conditions of film growth can benefit greatly from measuring atomic structure in situ, during film deposition. In this field, electron diffraction techniques, including the analysis of intensity versus voltage low-energy electron diffraction (LEED) spectra IV [1,2], have supplied much helpful experimental data. However, the fact that only spatially averaged information is accessible to conventional LEED IV measurements is a severe limitation. Interfaces are usually not homogeneous and spatial averaging of heterogeneities induces strong artifacts even in some of the simplest imaginable interfaces. For example, if one considers the Ru(0001) surface as a simple interface between a bulk hcp crystal and the vacuum, an important artifact of spatial averaging is evident. Observing the LEED pattern of Ru(0001), one usually finds a pattern with six-fold symmetry, even though an atomically perfect surface of an hcp crystal has three-fold symmetry. The observed six-fold pattern is a result of spatial averaging over many atomically flat terraces: the apparent change of symmetry happens because the three-fold symmetry of terraces separated by atomic-height steps is rotated by 180 • . Using a low-energy electron microscope (LEEM) [3]- [5] for micro-LEED IV measurements, we recently showed that the LEED patterns of individual atomic terraces on Ru(0001) indeed have the correct three-fold symmetry [6]. From analysis of these micro-LEED IV measurements, we were able to determine structural parameters of the relaxed Ru(0001) surface that are in better agreement with theoretical predictions [7,8] than results from the previous LEED IV analysis, which was based on spatially averaged measurements [9]. The benefit of using micro-LEED IV to resolve heterogeneities was also demonstrated in a recent study of surface alloying during the deposition of Pd onto Cu(100) [10] and in earlier work on CO/NO reaction diffusion fronts on Pt(100) [11].
DEUTSCHE PHYSIKALISCHE GESELLSCHAFT
The central aim of the present paper is to understand how atomic structure evolves near the film/substrate interface during the deposition of the first few atomic monolayers (MLs) of Co on Ru(0001). The system is interesting because many studies have shown unusual magnetic properties, which have already spawned important applications. For example, Ru spacer layers are used to induce an antiferromagnetic coupling between thin films of Co alloys in spin-valves and magnetic-recording media [12]. Basic studies have shown that deposition of Co on Ru can result in the formation of three-dimensional (3D) islands in a Stranski-Krastanov growth mode [13]- [15], and that these 3D islands have interesting magnetic properties. In our recent work on flat, ultrathin Co/Ru(0001) films, we showed that the film magnetization easy-axis changes from an in-plane orientation for ML thick films, to out-of-plane for bilayer islands or films, and back to in-plane again for thicker islands or films [16].
One issue that has not been fully understood up to now is how the interfacial atomic structure in Co/Ru(0001) films evolves to accommodate lattice mismatch. Hexagonally close-packed bulk Ru has a 7.3% larger lattice constant than bulk cobalt. Yet, using LEEM we have seen that under appropriate conditions Co can be grown on Ru(0001) in a layer-by-layer mode up to 10 ML [16]. In this paper we present our detailed micro-LEED IV analysis of the morphology and the structure of Co films grown on Ru(0001) up to 3 ML thick. We employ LEEM to follow in real time the growth of the films. The films grow layer by layer through the nucleation and growth of triangular-shaped islands in large, atomically flat terraces. An earlier conjecture suggested that the orientation of the islands reflects the stacking sequence of the Co layers that form each island [17,18]. We confirm this hypothesis using LEED IV spectra acquired from individual, microscopic regions on the surface that are essentially free of atomic-scale defects.
Experimental
The experiments were performed in LEEM systems with base pressures in the 10 −11 Torr range. The lateral spatial resolution is close to 10 nm. The instruments have facilities for in situ heating (up to 1300 K) and cooling (down to 100 K) of the samples while recording images at up to video rate.
The Co films were grown on Ru(0001) crystals. The substrates were cleaned in situ by repeated cycles of exposure to oxygen followed by heating to at least 1800 K. The crystals contains areas, in some cases over 100 µm wide, with a low density of atomic steps. In these regions, terraces more than 5 µm wide can be found routinely. Co was sublimated from highpurity Co rods heated by electron-beam bombardment. Typical evaporation rates were between 1 ML/2 min and 1 ML/35 min. The chamber pressure always remained below 4 × 10 −10 Torr.
By placing an aperture in the electron beam path to reduce the illuminating beam diameter on the sample to a few microns, we used the LEEM instrument to measure local-area intensity versus energy (IV) curves [1,2]. This procedure is described in more detail in [6].
IV analysis
Fully dynamical IV calculations were performed with a modified version of the Van Hove-Tong package [2,19,20]. The surface was modelled by stacking the required number of Co atomic planes (1, 2 or 3) on top of two Ru atomic planes. These surface layers were then stacked on top of 4 DEUTSCHE PHYSIKALISCHE GESELLSCHAFT five atomic planes of bulk Ru(0001) using the renormalized forward scattering (RFS) approach. The resulting 2D slabs thus contained between 8 and 10 atomic layers, depending on the Co film thickness. Relativistic phase-shifts [21] were calculated and subsequently spin-averaged. Wellconverged values for both the number of beams and the number of phase shifts (l max = 8) were employed in all cases. We explored the parameter space comprised of the stacking sequence of the Co layers, the topmost three interlayer spacings, d 1−3 , and the interlayer spacing of the bulk Ru atomic planes, d bulk , by calculating the IV curves over fine 3D-grids. The interlayer spacings were swept over wide ranges for all possible stacking sequences of the Co layers. We found that including the parameters d 3 and d bulk in the IV analysis had little effect on the agreement between experiment and theory, but increased significantly the error bars for the other parameters. Therefore, we fixed both to the literature Ru spacing, d 3 = d bulk = 2.14 Å. We also included the in-plane lattice parameter, a , in the structural search. a was made common to all surface and bulk layers.
All simulations were performed for a temperature T = 300 K using an energy increment of 2 eV. The experiment-theory agreement was quantified via Pendry's R-factor (R P ) [22], while the error bars for each parameter were obtained from its variance: where E i gives the optical (inner) potential and E corresponds to the total energy range analysed. Correlations between the structural parameters were taken into account for the error-limits estimation. We note that all structural parameters derived in this work represent well-defined minima in their respective R-factor plots. Non-structural parameters such as the muffin-tin radius, the optical potential or the Debye temperatures at the surface planes were varied. We found that these parameters had no impact on the final structural conclusions; therefore, we omitted their systematic optimization.
Results and discussion
The growth of the Co films was followed in real time by LEEM imaging. The growth on flat areas of the sample proceeds layer by layer up to at least 10 ML. In figure 1 we show frames from a representative movie of a film grown on terraces larger than 5 µm. Islands are nucleated on the terraces in addition to some material growing from the steps of the substrate. The shape of the islands is triangular. On a given substrate terrace, the islands point in one direction for 1 ML islands on Ru, and in the opposite direction for 2 ML islands on a 1 ML film. For 3 ML islands on a 2 ML film, two orientations can be detected on a single substrate terrace.
To understand the structure of the growing film, we perform selected-area diffraction on areas of uniform thickness. Representative LEED patterns are shown in the right panel of figure 1. There is a clear difference between the LEED patterns of different thickness films. One ML islands and continuous films only produce 1 × 1 reflections corresponding to the same reciprocal lattice vectors as the original Ru substrate beams. This clearly indicates that the first Co ML is coherently strained to match the in-plane lattice spacing of the Ru substrate. By contrast, the LEED patterns of 2 ML and thicker islands or films are more complex, featuring satellite beams around the substrate reflections. Both for two-and for 3 ML-thick regions, the satellite beams around the specular beam have hexagonal symmetry with the same orientation as the Ru integer beams (i.e. these superstructure spots are non-rotated relative to the substrate beams). In different regions of 3 ML thick films, two noticeably different LEED patterns can be identified. The two patterns can be correlated with the orientation of the original 3 ML islands that expanded into the film regions. (a)-(c) Sequence of LEEM images acquired during continuous growth at a rate of 1 ML/120 s. The substrate temperature is 470 K. We observe triangular Co islands 1 ML thick on Ru (a), 2 ML thick on 1 ML film (b) and 3 ML thick on a 2 ML film (c), respectively. The LEEM images are 4.4 µm wide, and were acquired with an electron energy of 5 eV (a) and (b) and 20 eV (c). Only in the case of 3 ML islands (c) we observe two different triangular orientations on the same substrate terrace. (d)-(f) LEED patterns acquired from constant thickness areas in Co films with total coverage close to 1 (d), 2 (e), and 3 ML (f). The LEED patterns were recorded at 60 eV (1 and 3 ML) and 37 eV (2 ML) beam energy. The two LEED patterns for 3 ML areas were acquired on the same substrate terrace, on two different regions that originated from islands with different orientation.
The first ML
For the nucleation and growth of the first ML, good contrast conditions with a high intensity of backscattered electrons can be observed using an electron energy of 5 eV. Under these conditions the ML islands appear dark on a light grey background corresponding to the Ru substrate.
A sequence of images illustrating the growth of the first ML is shown in figure 2. There is no growth of second layer islands until more than 90% of the Ru surface is covered with a single-ML film.
The shape of the growing islands is triangular, as already reported in a previous scanning tunneling microscopy (STM) study [23]- [25]. Within each substrate terrace, the triangular islands are all oriented in the same direction, with their edges following compact directions of the substrate surface. When crossing from one terrace to the next, the islands change their orientation by 180 • . This change is explained by the substrate hcp(0001) structure, where a rotation of the terrace structure by 180 • occurs when one atomic step is crossed [6,26], as shown schematically in figure 3 In principle, one might have guessed that the observed island shapes are energetically favoured equilibrium shapes. By considering the atomic structure of a hexagonal island on an hexagonal substrate, as sketched in figure 3(a), one can appreciate the origin of three-fold symmetric island shapes. The sketched hexagonal island has two types of symmetry-inequivalent step edges, each step-type exposing a different type of microfacet [27]. In another system, it was shown that the resulting energetic inequivalence can be sufficient to lead to three-fold symmetric shapes of ML islands [28,29]. However, from our measurements it is evident that the triangular shapes of the islands are not equilibrium shapes, but are a consequence of kinetic limitations. For example, when islands coalesce during growth, the merged multisided shapes do not readily evolve into compact, equilibrium shapes. This implies that edge diffusion is too slow to equilibrate the island shape. In our system, the fact that edge diffusion rates are different for the two types of steps leads to the observed island shapes. The same conclusion was drawn from the previous STM observations [23]- [25].
The fact that all observed islands point in the same direction within each substrate terrace is a strong hint that the stacking sequence is the same for all the islands [17,18]. For example, in order to expose equivalent microfacets at its step edges, a stacking-faulted island would be rotated by 180 • compared to a pseudomorphic island. Edge-atom diffusivity along the two different types of steps is different, leading to different growth rates of the two step-types. As a result, islands grow in triangular shapes. (b) Schematic of triangular hcp-islands bounded by {100}-type steps. In consecutive terraces (left and right sides of the schematic) triangular islands pointing in opposite directions expose the same step-edge type.
In order to describe the stacking sequence of the films, we will use two notation styles. The first is the classic labelling A, B or C for each possible close-packed layer, with ABC . . . or BCA . . . indicating face centred cubic (fcc) structure, and ABAB . . . indicating hcp structure. In this paper, we use a slash to indicate the interface between the Ru substrate and the Co film (or the vacuum interface when the substrate surface is bare). For example, the notation AB/A stands for 1 ML of Co continuing the bulk hcp sequence in a given substrate terrace. Additionally, for summarizing and understanding our results concerning the orientation of the triangular-shaped islands, it is helpful to employ Frank's notation [30], indicating the stacking of each layer relative to the one below. Transitions of one layer to the next following the sequence A→B→C→A are denoted by , while the opposite transitions, namely C→B→A→C are denoted by . An fcc structure is written either or . An hcp structure corresponds to . To determine the stacking sequences of the films, we use LEED IV analysis in the following way. Selected-area LEED patterns from cobalt-ML covered regions show only integer (1×1) spots (figure 4(a)). The Co films were grown at 464 K and LEED IV curves were measured three hours later when the sample was at room temperature. Electron energy was swept in the range For each sequence and a value, we plot the best R P value among the rest of the structural parameters. The horizontal grey line corresponds to the R-factor variance for the best-fit stacking sequence, hcp, from which errors (vertical grey lines) on this parameter are estimated.
50-350 eV and the intensities of the specular (00) beam, the three (10) beams, the three (01) beams, as well as four of the six (11) beams were measured (the remaining two (11) beams were omitted due to systematic distortions associated with the instrumentation). Symmetry-equivalent spectra were averaged, leading to the four experimental IV-curves shown in figure 4(b) as solid AB/C P = 0.59. In Frank's notation, the best-fit structure corresponds to / . The associated interlayer spacings are reported in table 1. Given the misfit of 7.3% between the in-plane lattice spacings of bulk-Co and bulk-Ru, the Co film is severely strained. This strain is reflected in the first Co-Ru interlayer spacing, d 1 = 2.05 Å, which is 4% smaller than the bulk Ru-Ru out of plane distance, and leads to an interatomic distance d Co−Ru = 2.58 Å, which is in nice agreement with the sum of their covalent radii, r Co + r Ru = 1.25 + 1.34 = 2.59 Å. Finally, figure 4(c) presents the R-factor behaviour versus a for all stacking sequences explored. The purpose of this fit is to check the sensitivity of the LEED IV curves to the Ru in-plane lattice constant, a Ru = 2.70 Å. The error estimation reveals a reasonably good lateral resolution of ±0.03 Å. The error level is, nevertheless, large enough that we can disregard the potential errors from assuming a constant inner potential as described in [31].
After resolving the structure of the 1 ML film, we proceed to determine the type of step edge of the islands. To such end we need to orient the diffraction pattern of a 1 ML Co film on a single Ru terrace relative to the triangular orientation of the growing islands, as observed on the same terrace. The magnetic lenses of the LEEM system used for these measurements rotate the image when switching between imaging-and diffraction-mode. Therefore, we experimentally measured the image rotation between both lens settings. With this calibration, comparing the island shape and the LEED pattern on the same substrate terrace shows that the exposed step edges are of type {100}.
Our main conclusion for the 1 ML case is that Co grows pseudomorphic with the Ru substrate, keeping both the in-plane lattice parameter and the hcp stacking sequence. It is important to note that our R P shows a strong sensitivity to the bulk Ru orientation, which allowed us to establish unambiguously the BABA/ (or /) termination of the Ru terrace used in the experiment reported here. This fact will become relevant for the analysis of higher Co coverages, discussed in sections 4.3 and 4.4, since for those cases it is otherwise difficult to know the Ru orientation.
Transition from first to second ML
When the first Co-ML nearly covers the substrate but before second layer islands nucleate, a change in contrast is observed within the film (figure 5). We interpret this contrast as the conversion of the pseudomorphic first layer film into another phase. The new phase grows quickly until it covers nearly the entire 1 ML film. Second ML islands nucleate shortly after the appearance of the new phase in the ML areas. As the 2 ML islands grow, the new 1 ML phase starts to disappear around them. If deposition is interrupted, the new phase disappears after half a minute at 523 K.
The transient nature of this phase prevents us from carrying out a meaningful LEED IV analysis. Nevertheless, a reasonable guess about its nature can be made based on what has been observed in other, similar systems. Observations of a similar effect have been reported for Cu/Ru(0001) [32] and Pt/Pt(111) [33] films, where a new phase near 1 ML total coverage corresponds to a metastable network of misfit dislocations in the ML areas, making the film about 5% denser than the pseudomorphic phase seen at lower coverage. In those systems, the network of dislocations is only stable under a high concentration of adatoms on top of the film. Consistent with Frenkel-Kontorova modelling [34], this effect is explained by the difference in the energy required to incorporate an atom into the film at a dislocation versus at a step edge. We propose that the new phase in the Co-ML areas corresponds to a similar network of misfit dislocations. In support of our interpretation, we note that misfit dislocations have been observed by STM in Co-ML islands [35]. Note that this type of dislocation network can be described as a reconstruction composed of a network of stacking transitions; i.e. in our interpretation the observed transient phase contains a mixture of small regions stacked in the BA/B hcp sequence and small regions stacked in the BA/C, fcc-like, sequence.
The reason for the ephemeral nature of this reconstructed phase is, firstly, that the required high concentration of adatoms on the 1 ML islands is only achieved when the islands cover most of the Ru surface (which otherwise acts as an adatom sink). Secondly, once 2 ML islands are nucleated, the adatom concentration falls because the edges of 2 ML islands act as adatom sinks. This explains the appearance and subsequent disappearance of the reconstructed phase during the completion of 1 ML Co and the onset of second layer growth, respectively.
The second ML
The growth of the second layer is shown in figure 6. There are many similarities with the 1 ML case. Nucleation of the second layer is followed by the growth of triangular islands. The orientation of the triangles is constant within each atomic terrace and rotates by 180 • from terrace to terrace. The orientation of 2 ML islands relative to 1 ML islands that grew in the same terrace is also changed by 180 • (compare figure 6(a) with figure 2(a)). This observation might be interpreted as an indication that the second layer of Co also grows following the hcp stacking sequence of the underlying Ru substrate. The detailed atomic structure is, however, more intricate. The LEED pattern shows satellite spots around the specular and the Ru integer-beams ( figure 7(a)), aligned in the same directions as the Ru beams. This indicates that the in-plane lattice spacing of the 2 ML Co islands differs from the lattice spacing of the Ru substrate. Measuring the spacing of the satellite spots indicates a contraction of 5.4 ± 2%, where the error bar is due in part to distortions of the LEEM imaging optics. The LEED pattern confirms previous STM experiments [25] where a periodic superstructure was observed and attributed to different lattice parameters of the Co film and the underlying Ru substrate. The reported size of the unit cell was close to a 13 × 13 Ru unit cell [25,35]. This kind of LEED pattern is typical for lattice-mismatched, heteroepitaxial systems (for example, Co/Pt(111) [36] or 4 ML Cu/Ru(0001) [37]). The simplest model for such structures is a moiré pattern formed by the coincidence lattice between substrate and film. In principle, in a moiré pattern all relative positions between film and substrate atoms are present. However, filmsubstrate interactions normally favour three-fold hollow sites (either fcc or hcp adsorption sites). As a result, films tend to distort such that atoms are displaced towards fcc-or hcp-sites, with only few atoms remaining close to bridge and on-top positions. This effect is easily observed in Frenkel-Kontorova models. For example, in the Frenkel-Kontorova calculations of figure 10 of [34], the commensurate supercell is split into two sections: in one section, atoms are close to hcp positions and in the other section atoms are close to fcc positions. The two sections are separated by a smaller number of atoms in or close to bridge and on-top positions. Thus, most film atoms are very close to either fcc-or hcp-sites relative to the substrate.
Dynamical LEED computation accounting for such large unit cells exceeds the scope of this work. Since we are interested primarily in identifying the stacking sequences of the films, we can simulate the super-structures by starting from a simplified model. We first consider combinations of perfect stacking sequences of atoms (A, B, or C stacking positions, not bridge nor on-top positions), and we model the Co layers plus the bulk Ru by assuming a common p(1×1) cell. We then simulate the structure of the supercell by performing weighted mixtures of the IV-curves corresponding to the different stacking sequences of the two main sections of the unit cell.
We fit only the integer beams (0,0), (1,0) and (0,1) for this exploration, choosing integration boxes around each beam that were large enough to include the satellite spots. We vary the lattice in-plane parameter between the Ru bulk constant (a Ru = 2.70 Å) and that of Co-hcp (a Co = 2.50 Å). For the case of 2 ML films on an hcp substrate, there are eight possible stacking sequences (BA/BA, BA/BC, BA/CA, BA/CB, and AB/AB, AB/AC, AB/CA, AB/CB). Here we do not consider the possibility of stacking faults forming between the top several atomic layers of the Ru crystal. The reason for disregarding this possibility is that introducing stacking faults in the Ru substrate is unlikely. To relax the tensile strain in the Co film, the topmost Ru layers would need to become denser. The needed atoms would have to come from the Co film or etching the substrate. From measurements described elsewhere [38] we know that Co/Ru interdiffusion can indeed take place, but only at temperatures well above those used in the preparation of samples described in this paper. Also, we do not observe motion of Ru steps during growth. On the other hand, when the first Co layer is being overgrown by the second, the cobalt layer can readily increase its density by incorporating Co atoms from the growth flux. Detailed analysis also shows that Ru substrates do not reconstruct in the closely related systems of Ag/Ru, Au/Ru [39] or Cu/Ru [37].
Our knowledge of the previous 1 ML analysis, then allows us to eliminate half of the eight possible stacking sequences: having measured the IV-curves for 1 ML coverage on the same terrace where the data for the 2 ML islands was subsequently acquired, we can determine the substrate stacking termination by comparing against the 1 ML curves shown in figure 4. The clear difference between the (1,0) and (0,1) spectra permits unambiguous identification of the substrate stacking termination and shows that the Ru surface ends with a BA/ stacking sequence (see the data set summarized in figure 7). Therefore, we explored all four possible registries for the two Co layers: BA/BA, BA/BC, BA/CA and BA/CB. The first two interlayer spacings, d 1 and d 2 , were optimized for each sequence. We point out that the intensity of the experimental (0,1) beam was found to be strongly attenuated for energies above 100 eV. One problem we detected is that the large integration boxes used in order to include the intensity of the satellite beams decreases the signal to noise ratio in the IV-curves. Also we note that the beam alignment was slightly worse in the 2 ML dataset that in either the 1 ML or the 3 ML films. We therefore chose to suppress this range from the analysis in the 2 ML case.
The analysis shows that the stacking sequences BA/BA and BA/CB match the observed IV-data more reliably than the sequences BA/CA and BA/CB. Therefore, we conjecture that the supercell of 2 ML Co/Ru(0001) films contains two regions, one where atoms are close to the stacking sequence BA/BA, and another where atoms are close to BA/CB. To model this structure, we assume that the two phases cover the supercell with given area fractions, and we set the parameters d 1 and d 2 common to both phases. Then we optimize these three parameters. The best R-factor drops to R P = 0.27 when we assume an area fraction of 70% for the BA/BA stacking sequence (hcp) and 30% for BA/CB (fcc), consistent with the STM observations reported in [25].
The two stacking sequences included in our best fit correspond, in Frank's notation, to /( + ) . As Frank's notation indicates the stacking sequence relative to the layer below, the unique Frank's symbol for the top layer of the film means that all the atoms of the upper Co layer are in equivalent three-fold hollow sites formed by the lower Co layer, i.e. no stacking faults exist between the two Co layers. The reconstruction is composed of a network of stacking faults that are confined to the interface between the Ru substrate and the Co bilayer film. In other words, the atoms of the first Co layer occupy both types of Ru hollow sites. But the second-layer Co atoms occupy only one type of hollow site. Figure 7(c) also shows the sensitivity of R P to the value of a . Our analysis suggests the value a = 2.56 ± 0.08 Å, which is quite close to the bulk Co in-plane constant (a Co = 2.50 Å) and clearly different from the bulk Ru in-plane spacing (2.70 Å). The unit cell calculated from this value is close to 14 × 14 Co atoms (35 Å in length), in good agreement with the STM findings For each sequence and a value, we plot the best R P value among the rest of the structural parameters. The horizontal grey line corresponds to the R-factor variance for the best-fit stacking sequence, BA/BA or hcp, from which errors on each parameter are estimated.
reported in [25]. We feel the agreement between experimental data and our computed IV spectra is reasonable, given the simplified model employed. The structural parameters corresponding to the best fit are given in table 1. Summarizing, for 2 ML Co films, we find that they present a moiré pattern composed of two stacking sequences. The stacking transitions are located at the Co/Ru interface, not between the The substrate temperature is 523 K and the growth rate is 1 ML/223s. Three ML Co is dark and medium grey and 2 ML Co is light grey (some white regions are 1 ML Co). Three frames (a, b, c) are shown in chronological order. Field of view is 10 µm, and electron energy is 20.6 eV.
two Co atomic layers. This structure of the 2 ML islands can be expressed in Frank's notation, which reveals more directly the fact that the termination of the Co film is unique:
The third ML
The third ML also grows by way of triangular islands (figure 8). Unlike in thinner films, we now find populations of triangle-shaped islands with two (opposite) orientations on each terrace (figure 9(a)). One might guess that the presence of both orientations on a given terrace can be explained by associating the two orientations with two different stacking sequences. That is, depending on the stacking sequence, favoured growth of type {100} island edges leads to different orientations of triangular islands (see figures 9(c) and (d). This effect has been used as a fingerprint of stacking faults in other systems such as Co on Cu(111) [18], or Ir/Ir(111) [40]. We use LEED IV analysis to show that this interpretation is correct and to establish exactly at which layer the stacking fault occurs. We find that at particular electron energies, for example close to 20 eV and to 48 eV, the specular reflectivities of islands with opposite orientations are substantially different. In the images taken at 20 eV all the islands with one orientation appear dark grey, while islands in the opposite orientation appear light grey ( figure 9(b)). A similar kind of contrast has been observed previously in the Cu/Ru(0001) system, where we attributed the effect to stacking-sequence domains at the second layer of Cu on Ru(0001) [41]. The difference in contrast between islands oriented in opposite directions allows us to keep track of the stacking type of local regions during growth from isolated islands to continuous films, in which island shapes and orientations became obscured due to coalescence. The contrast also reveals that both types of regions occur in areas where the third layer grows from steps (step-flow, lower part of figures 9(a) and (b)). The contrast difference is stable with respect to mild annealing. The two film types can be also imaged in dark field as well as in bright field, using non-specular integer reflections. At 40 eV the (01) beams are much more intense than the (10) beams in islands of one orientation, while the reverse is true for the islands with opposite orientation. This leads to strong contrast in dark-field images (similar to the contrast observed by transmission electron microscopy between twins in systems such as gold-on-mica [42]). The dark-field contrast reverses when going from one substrate terrace to DEUTSCHE The island has a stacking fault, i.e. the atoms are located at fcc adsorption sites. With the inverted shape relative to (c), step edges expose the same type of microfacet. (e) and (f) Same area observed in bright field (specular beam, eE = 22 eV) (e) and in dark-field using one of the first order Co beams (f, eE = 55 eV). In dark-field imaging (f) island-orientation reversal matches contrast reversal when crossing a substrate step. In bright field imaging (e) contrast remains constant when crossing substrate steps. the next, matching the orientation reversal of the island shape at consecutive terraces, as shown in figure 9(f). On the other hand, bright field contrast shows no reversal from terrace to terrace (compare figures 9(e) and (f)).
It is helpful to note that when we assign stacking sequences to films 3 ML thick, the sequences correspond to either fcc or hcp structure within the Co-slab, independent of their registry with the Ru substrate. Not taking into account the presence of a moiré-like structure at the Co/Ru interface, we might naively try to assign the two island populations to hcp and fcc stacking sequences. We might guess that hcp stacking corresponds to the islands whose orientation is inverted relative to 2 ML islands that grew previously on the same terrace, as suggested by Frank's notation of an hcp stacking sequence . The other 3 ML islands, aligned in the same orientation as 2 ML islands grown earlier in the same terrace, would be assigned to fcc stacking by this argument. Surprisingly enough, this guess of the stacking sequences within the Co film is confirmed by selected-area LEED IV analysis.
For this analysis, we measured LEED patterns and IV-curves from each type of region when the total coverage is close to three complete MLs. As in the case of 2 ML films, the diffraction patterns show satellite spots, indicating that the 3 ML films are relaxed (reconstructed) within the surface plane. Following the same procedure as in the case of 2 ML films, the integer beam IV-curves were obtained by integrating intensity within boxes that were sufficiently large to include also the satellite beams.
We simulated the IV spectra for a 3 ML thick Co film on top of Ru(0001), again assuming a common p(1 × 1) cell for all layers. Using our prior knowledge (as discussed in section 4. BA/BAB P = 0.30 for region II. Using Frank's notation, these minima correspond to /( + ) in region I and to /( + ) in region II. We note that the two minima in region I (figure 10(d)) correspond to fcc stacking within the Co film (/CBA and /BAC) and that both minima share very similar a values. The only difference between these two minima is the relative stacking of the film with respect to Ru. In contrast, both minima in region II (figure 10(f)) correspond to hcp stacking within the Co film. Again, both share the same in-plane lattice constant and differ only by their relative stacking with respect to the Ru substrate. This local-area LEED IV analysis confirms the conjecture we proposed at the beginning of this section, stating that the two different orientations of the triangular shapes of 3 ML islands are associated with fcc-versus hcp-stacking. The two types of 3 ML films only differ in how the third Co layer is stacked on the second layer. This stacking difference, a stacking fault, consistently explains our experimental observations. The existence of two minima per region suggests that the two types of 3 ML regions are both reconstructed and that the reconstructions are composed of two stacking sequences relative to the underlying Ru, as expected from the moiré-like LEED pattern. As was done in the case of 2 ML, we tested this model by comparing the experimental spectra with weighted mixtures of the two best-fit pure structures. This led to improvements for both regions: R P = 0.31 for region I (fcc) and R P = 0.28 for region II (hcp). For each sequence and a value, we plot the minimal R P value with respect to all other structural parameters. The horizontal grey line corresponds to the R-factor variance for the stacking sequences, BA/CBA and BA/CBC, from which errors on each parameter are estimated.
During mild annealing, or during deposition of additional Co, we do not observe changes in the stacking structure of completed 3 ML thick regions of the films. This stability of the two populations with different third layer stacking sequence is in contrast to other systems such as Cu/Ru(0001) [41], Ir/Ir(111) [43,44], or Ag/Ru(0001) [45] where stacking faults were observed to heal out either by thermal activation, or during further deposition of additional material (Ag/Ru).
It is interesting to examine the dependence of the area-fraction of fcc-versus hcp-regions on the substrate morphology. The ratio of both type of islands depends very clearly on whether the islands grew from underlying Ru step edges by step-flow, or whether they nucleated within atomically flat terraces. Figure 9 shows both types of areas. At 471 K, roughly half of the Co that grows from steps has fcc structure (lower part of figure 9(b)) and the other half has hcp structure. When deposited at slightly lower temperature, 435 K, all the step-flow Co grows hcp. A suggestion to explain this difference is that the fcc structure is energetically preferred, while the presence of the step replicated from the Ru-substrate in the growing film favours nucleation in an hcp stacking sequence that can match the step-edge without additional dislocations. More experiments will be required to confirm this idea.
The ratio of fcc to hcp islands nucleated away of the substrates steps, on atomically flat terraces, is very sensitive to the overall cleanliness of the experiment. When the Co-deposition and the LEEM measurements are done at total pressures below 5 × 11 −11 Torr, then the 3 ML islands grow mostly in the hcp stacking sequence. In this case the orientation of 3 ML islands is reversed compared to the orientation of 2 ML islands that appeared earlier on the same terrace. This can be seen by comparing figures 11(c) and (d). With continued deposition (not shown), we find that 4 ML triangular islands point mostly in the same direction as 3 ML islands, indicating fcc stacking. By contrast, when the residual gas pressure during deposition is higher, then the 3 ML islands mostly grow in fcc structure. In this case, the orientation of most 3 ML islands (as well as thicker islands during further deposition) is the same as for 2 ML islands (figures 11(a) and (b)). These findings suggest that a transition from hcp to fcc structure occurs at 3 ML, unless the film is extremely clean, in which case the transition is delayed until 4 ML. This observation highlights the strong effect that even minute amounts of adsorbates can have on the stacking fault density.
We find that the fcc structure is preferred in Co/Ru(0001) films in the thickness range above 3 or 4 ML. This fact might seem surprising, given that the most stable bulk-Co structure below 690 K is hcp. Nevertheless, we note that the same result, i.e. mostly fcc films, was reported for the growth of Co on Pt(111) [36,46].
Conclusions
In summary, we have studied the growth of the first few layers of Co on Ru(0001) by means of LEEM, LEED and dynamical IV calculations. The large terraces found in the Ru substrate allow Co to grow layer by layer despite the large difference in in-plane lattice parameters. We summarize in figure 12(a) the structures derived in the present work. The first layer grows pseudomorphically and continues the hcp stacking sequence of the Ru(0001) substrate. The shape of the islands is triangular, exposing {100}-type steps, and the orientation rotates 180 • in consecutive terraces, as expected for the hcp substrate. Thicker films reconstruct in order to recover the Co bulk in-plane lattice constant, yielding satellite spots in the diffraction patterns. For 2 and 3 ML films, best IV fits are always obtained for weighted mixtures of hcp and fcc stacking sequences between the lower-most Co layer and the top-most Ru layer. Coexistence of two stacking sequences is consistent with STM observations of Co films [25], which have shown the presence of reconstructions with large unit cells that contain regions with different stacking sequences. The second layer of Co also forms triangular islands. Their orientation is inverted with respect to the 1 ML triangular islands on the same terrace, and there are no stacking faults between the two Co layers. The third layer grows again in triangular islands, but this time, islands with two opposite orientations are observed on the same terrace. By selected area LEED IV-analysis, we confirm the correspondence between island orientation and stacking-sequence in the 3 ML islands: the two experimentally detected regions correspond to two possible stacking sequences of the third layer on top of the 2 ML film. The detailed knowledge of the stacking structure and its relation to island shapes revealed here, along with the interesting magnetic properties of these films [16], makes the Co/Ru system an excellent candidate for basic and applied material research studies. For coverage above 1 ML, the film is reconstructed and two relative registries between the deepest Co layer and the topmost Ru layer coexist. Note that the orientation of the triangles in Frank's notation follows that experimentally found for the triangular islands. In 3 ML films, regions I and II differ only in the stacking of the third Co layer. | 9,853.2 | 2007-03-01T00:00:00.000 | [
"Physics"
] |
Evaluation of the Severity of Mitral Valvular Regurgitation with Doppler Echocardiography Using Proximal Flow Convergence Method
Problem statement: Valvular regurgitation is recognized as the central cause of morbidity and mortality. Even though the clinician can detect the presence of regurgitation by mere physical examination; diagnostic methods become inevitable while estimating the severity of valvular regurgitation and in the transformation of cardiac chambers as in reaction to the volume overload condition. Lately, a promising new technology, the Echocardiography with Doppler is found to facilitate the non-invasive recognition and assessment of the severity and etiology of valvular regurgitation. Accurate measurements of regurgitant volume in patients is of utmost importance since it aids in the estimation of the progression of the disease which in turn is vital for determining the optimal time for surgical repair or replacement. Approach: Color space conversion and anisotropic diffusion segmentation techniques are utilized in this study for the pre-processing stage of the quantification of mitral regurgitation. Flow field measurements are carried out with the aid of proximal flow convergence method. Results: A calculated value of flow rate, regurgitant orifice area, regurgitant fraction and the regurgitant volume for a regurgitant orifice in the cardiovascular system are obtained from the potential Color Doppler visualization of the flow convergence region. Conclusion: The research proposed provides a significant assessment of the echocardiographic and Doppler techniques employed in the evaluation of mitral valvular regurgitation in the patients. Additionally it also proffers the estimation of mildness, severity and eccentricity of mitral valvular regurgitation on basis of the scientific literature and a consensus of a panel of experts.
INTRODUCTION
Medical imaging is referred to as the measures and procedures that are utilized to generate images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science. Broadly medical imaging is a division of biological imaging that encompasses radiology (in the wider sense), radiological sciences, endoscopy, medical thermography, medical photography and microscopy (e.g., for human pathological investigations) [1] . Measurement and recording techniques that do not generate image results principally, like the electroencephalography (EEG) and magneto encephalography (MEG) and others, but generate data that can be represented as maps can be classified as kinds of medical imaging [2] . Contemporary imaging technologies include Computed Tomography, Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI) [3] and Ultrasound (Echocardiogram).
The visualization of muscles and internal organs, their size, structures and possible pathologies or lesions of the heart assisting in the detection of a considerable number of heart problems can be achieved with the aid of echocardiogram. Echocardiography aids in the diagnosis of cardiovascular system besides making precise assessment of the velocity of blood and cardiac tissue at any given point using wither pulsed or continuous wave Doppler ultrasound. This permits the estimation of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation) [8] and determination of the cardiac output in addition to the ejection fraction. Flowrelated measurements can be enhanced by combining it with Doppler ultrasound. At present, Doppler echocardiography plays a vital role in the premature determination of the kind of surgery to be carried out for adjusting the mitral valve regurgitation [5,7] .
The severity of the regurgitation is the vital determinant of the development of ventricular dilatation and dysfunction in case of mitral regurgitation [a valvular heart disease also known as mitral insufficiency]. Thus the precise estimation of regurgitant volume in patients having mitral regurgitation is vital for determining the progression of the disease that in turn is beneficial in determining the optimal time for surgical repair or replacement [9] . Diverse analytic approaches and diagnostic technologies have been proposed to help in the clinical assessment of mitral regurgitation. Nevertheless, all techniques in existence have displayed limitations in one form or another.
Our contribution: The aim of this research is to present a new technique based on an Image processing method which can accurately quantify the percentage of backward flow of blood, regurgitant flow rate, regurgitant volume, effective regurgitant orifice area, regurgitant fraction and orifice area in mitral regurgitation by using the Doppler Echocardiography image that works on the color flow Doppler mapping methods like proximal flow convergence. In the preprocessing stage, the color Doppler echocardiography image with RGB color space has been converted into YC b C r.. Subsequently it has been segmented with the aid of non linear anisotropic diffusion method that is used to calculate the percentage of backward flow of blood. The proximal flow convergence method has been exploited to quantify valvular regurgitation by the analysis of the converging flow field proximal to assess the mildness, severity and eccentricity of a mitral regurgitant lesion. Furthermore this research offers a review of Qualitative and quantitative parameters useful in grading the mitral regurgitation severity and utility, advantages and limitations of Echocardiographic and Doppler parameters used in the evaluation of mitral regurgitation severity. Besides that a brief introduction about the concepts: Regurgitation, Mitral Regurgitation, Doppler Echocardiography, Color Doppler and anisotropic diffusion segmentation are detailed respectively.
Regurgitation: Regurgitation is the backward flowing of Blood (flow of blood in the cardiovascular system) [11] . Mathematically, blood flow is described by Darcy's law (which can be viewed as the fluid equivalent of Ohm's law) and approximately by Hagen-Poiseuille equation: Hagen-Poiseuille Equation R = (νL/r 4 ) (8/π) Where: F = Blood flow P = Pressure R = Resistance ν = Fluid viscosity L = Length of tube r = Radius of tube Regurgitation can be categorized into five. They are Aortic [12] , Mitral, Pulmonic [10] , Tricuspid, Valvular.
Mitral regurgitation: Mitral Regurgitation is a kind of Valvular heart disease widely known as mitral insufficiency as well. The heart valve (Mitral) does not function properly because of this disorder thus leading to an abnormal leakage of blood to regurgitate into the left atrium of the heart thereby resulting in ascended blood volume and pressure there. The increased blood pressure in the left atrium results in increased blood pressure in the veins stemming from the lungs to the heart (pulmonary veins) and causes the left atrium to expand to provide room for the extra blood leaking back from the ventricle [13] .
Doppler echocardiography: Echocardiography aids in the recognition of abnormalities in heart wall motion and in the measure of volume of blood being pumped from the heart with each beat. Besides, abnormalities in the heart's structure, such as defective heart valves, birth defects and enlargement of the heart's walls or chambers, as occurs in people with high blood pressure, heart failure, or impairment of the heart's muscular walls (cardiomyopathy) [13] can as well be diagnosed with the aid of Echocardiography.
A widely used technique for the detection and assessment of the severity of valvular regurgitation is Doppler Echocardiography. Numerous indexes for the assessment of severity of regurgitation using color doppler, Pulsed Wave (PW) and Continuous Wave (CW) Doppler have been put forth. The proposed work makes use of Echocardiography image for further investigation and analysis.
Color doppler: Color Doppler is a picture of a blood vessels generated with the aid of standard ultra sound techniques and additionally, a computer transforms the Doppler sounds into colors that are then superimposed on the image of the blood vessel and that characterize the speed and direction of blood flow through the vessel. Detection of regurgitant valve lesions is widely carried out with the aid of Color flow Doppler. The visualization of the origin of the regurgitation jet and its width (vena contracta), the spatial orientation of the regurgitant jet area in the receiving chamber and, in cases of significant regurgitation, flow convergence into the regurgitant orifice is offered by the color flow Doppler technique. The utility, advantages and limitations of Echocardiography and Doppler parameters used in the evaluation of mitral regurgitation severity is shown in the Table 1.
Anisotropic diffusion image segmentation:
The process of partitioning an image into several regions where each one is analogous to a homogeneous surface in a scene is known as Image segmentation. Image segmentation is a low-level image processing task, which intends to divide an image into identical regions [16] . Many applications employ the color image segmentation. Observation reveals that most existing segmentation algorithms treat color features independently; however it is possible that the result may contain false counters. Due to the reason that the inherent multi-features not only contain non linear relation individually but also comprise inter-feature dependency between R, G and B (or Y, C b , C r ), color image segmentation is more tedious in comparison with grey image segmentation.
In order to accomplish image smoothing and segmentation, numerous models of linear and nonlinear diffusion have been put forth in literature. Numerous researchers have projected nonlinear anisotropic diffusion in their works. A diffusive procedure that is suppressed or stopped at boundaries by selection of locally adaptive diffusion strengths is known as Smoothing. Higher levels of image processing utilize the anisotropic diffusion in the preprocessing stage. The anisotropic diffusion smoothes image interiors to emphasize boundaries for segmentation, eliminates spurious detail to improve the response of edge detection algorithms besides proving efficient at eradicating noise from images. Nevertheless, relaxation processes that implement anisotropic diffusion have a tendency to leave the low frequency objects that are complex to disperse without over-processing the image [17] .
MATERIALS AND METHODS
One of the chief goals in clinical cardiology is the estimation of severity of mitral regurgitation. Its severity greatly influences clinical decision making [4] . Numerous echocardiographic techniques have been put forth for enhanced quantification of valvular incompetence [7] . The screening for the existence of mitral regurgitation is commonly carried out with the aid of Color Doppler flow mapping. Prominently, small color flow jets are observed in approximately 40% of healthy normal volunteers and so are regarded as normal variants. Regurgitant jet area, vena contracta and flow convergence (PISA) are three methods of computing MR severity by color flow Doppler mapping. Even though jet area was the primary method utilized for estimating MR severity it has been found to be less accurate than its latter counterparts and thus the proximal flow convergence method using color Doppler has been recognized as a reliable and accurate quantitative approach.
Jet area: A speedy screening of the presence and direction of the regurgitant jet and a semi-quantitative estimation of its severity can be obtained with the aid of visualization of the regurgitant jet area in the receiving chamber. Various technical, physiologic and anatomic factors have an influence on the size of the regurgitant area and thus modify its accuracy as an index of regurgitation severity [15] . Instrument factors, precisely Pulse Repetition Frequency (PRF) and color gain affect the size of a jet. Nyquist limit (aliasing velocity) of 50-60 cm sec −1 and a color gain that just eliminates random color speckle from non-moving regions is made use of in the standard technique.
Commonly, the large jets that extend deep into the Left Atrium (LA) signify more MR than small thin jets that appear just beyond the mitral leaflets. Owing to numerous technical and hemodynamic limitations noted earlier [15] the correlation among jet area and MR severity is poor. Small eccentric color flow jet area might be detected in patients who have low blood pressure and LA pressure due to chronic severe MR whereas the jet area for hypertensive patients with mild MR is huge. The estimation of severity of MR by "eyeballing" or planimetry of the MR color flow jet area only, is not suggested due to the aforesaid considerations. However, small, non-eccentric jets covering an area <4.0 cm2 or <20% of LA area trace or mild MR (Table 2) commonly.
Vena contracta:
The narrowest part of a jet that appears at or just downstream from the orifice is referred to as the vena contracta. High velocity and laminar flow are characteristics of the vena contracta. It is smaller than the anatomic regurgitant orifice to some extent due to boundary effects. The cross-sectional area of the vena contracta denotes a measure of the Effective Regurgitant Orifice Area (EROA), the narrowest area of actual flow. The size of the vena contracta is not influenced by the flow rate and driving pressure for a fixed orifice. Proximal isovelocity surface area (Pisa) or flow convergence method: With the assumption that, in the region proximal to a regurgitant orifice, flow is laminar and accelerates smoothly, forming concentric shells of increasing velocity and decreasing surface area, the proximal convergence method is based on the conservation of mass. The PISA method is derived from the above mentioned principle. This region of higher velocity and smaller flow dimension is known as vena contracta [6] . Hypothetically, the flow convergence region proximal to a discrete regurgitant orifice in a flat planar surface is a hemispheric volume. The flow in this hemispherical volume accelerates toward the regurgitant orifice along radial streamlines. Concentric hemispheric shells of equal and accelerating velocities (velocity isopleths) constitutes this zone of proximal flow acceleration. Color flow mapping offers the ability to image one of these hemispheres that corresponds to the Nyquist limit of the instrument. If a Nyquist limit can be chosen at which the flow convergence has a hemispheric shape, flow rate (mL sec −1 ) through the Regurgitant Orifice (RO) is calculated as the product of the surface area of the hemisphere (2πr 2 ) and the aliasing velocity (V a ) as: Then the regurgitant orifice area ROA is given by: where, r represents the radial distance from the orifice to the first alias. The maximal EROA is derived with the assumption that the maximal PISA radius occurs at the time of peak regurgitant flow and peak regurgitant velocity: EROA = (6.28r 2 *V a )/P k V reg (5) where, P k V reg is the peak velocity of the regurgitant jet obtained by employing CW Doppler. The product of EROA and the velocity time integral of the regurgitant jet is determined to estimate the regurgitant volume. In this approach, EROA determined is the maximal EROA, which may be slightly larger than EROA calculated by other methods, due to the fact that the PISA calculation proffers an instantaneous peak flow rate. According to the continuity principle, blood flow passing through a given hemisphere needs to pass through the narrowed orifice ultimately [4] . Thus, the flow rate through any given hemisphere and the flow rate through the narrowed orifice must be equivalent: Where: A o = The area of the narrowed orifice (cm 2 ) V o = The peak velocity through the narrowed orifice (cm sec −1 ) Thus, the area through the narrowed orifice ( A o ) can be calculated by rearranging the Eq. 5: As mentioned earlier, the regurgitant volume through an incompetent valve and the flow at the regurgitant orifice are equal. Hence, the regurgitant volume can also be calculated from the ROA and the VTI of the regurgitant signal, with the assumption that the regurgitant orifice does not change throughout the period of regurgitant flow: Where: R vol = Regurgitant volume (cc) ROA = Effective regurgitant orifice (cm 2 ) VTI RJ = Velocity time integral of regurgitant jet signal (cm) The regurgitant volume is deliberated from the simplified method when the mitral regurgitation is eccentric. In this case, the ratio between maximum mitral regurgitant velocity and the VTI of the regurgitant signal is 3.25. Thus the regurgitant volume is estimated from the regurgitant flow rate and the constant: where, ForwardFlowThroughTheMitralValve = Mitral orifice area (2πr 2 )×Diastolic velocity integral and FlowThroughTheAorticValve = Aortic orifice area (πd 2 /4)×Systolic velocity integral and the velocity integral of the regurgitant signal is considered to be 3.25.
RESULTS
Let us consider a high-quality color Doppler flow images of the flow convergence region with at least one alias were obtained for the calculation of regurgitant flow rate, effective regurgitant flow rate, regurgitant volume and the percentage of backward flow of blood in Mitral regurgitation. In the preprocessing stage, the color Doppler images with RGB color space is converted into YC b C r color space. Followed by, in the second stage, the converted color space image is segmented using non linear anisotropic diffusion method which is used to segment the color spaces exactly. With this the percentage of backward blood flow in mitral regurgitation has been calculated. Figure 1 shows the output representation of anisotropic diffusion segmentation process. The research shows that the effective ROA calculated based on the analysis of the proximal flow convergence zone, displayed by color Doppler flow mapping image, is feasible and correlates very closely with the true effective regurgitant orifice for a range of different orifice. To apply proximal flow convergence method in the clinical setting, we have calculated effective ROA based on the analysis of the proximal flow convergence zone with mitral regurgitation. The effective measurements of above mentioned parameters for Mild, Severe centric and severe Eccentric mitral regurgitation are detailed in Table 3 and the color Doppler images used for the quantification of mitral regurgitation is shown in Fig. 3.
DISCUSSION
Basically, the hemodynamic estimation of valvular regurgitation is restricted to a semi quantitative grading of invasive or noninvasive parameters which have exposed some correlation with the regurgitant volume. Furthermore, the regurgitant stroke volume, regurgitant flow rate, or regurgitant fraction is determined with the aid of quantitative angiographic technique or the newly developed quantitative color Doppler flow methods.
According to our research, ROA has been established as the elementary parameter in the estimation of valvular incompetence. Nonetheless, lately, the proximal flow convergence method has been put forth as a potential procedure to quantitate regurgitant lesions. Besides supporting the calculation of regurgitant volume and regurgitant flow rate, this technique also aids in obtaining the ROA. With the aid of the principle of conservation of mass, it is possible to determine the instantaneous orifice flow rate on basis of the examination of the converging flow field adjacent to the orifice.
A boundary layer of low velocity flow might result due to viscous forces besides a decrease in the effective surface area of the implicit isovelocity hemisphere. Flow progresses at an angle to the direction of examination on axial images and may cause the underestimation of velocities. Here, we determine the radial distance r with the velocity acquired from the isovelocity hemisphere. This might be responsible for the estimation of flow rate when the first alias was nearer to the orifice. The results of proposed research put forward that the supposition of an isovelocity hemisphere is applicable in the flow convergence zone far away from the orifice where precise estimates of flow rate were achieved. With the aid of these precise measurements we determine the flow rate, effective regurgitant orifice area and the regurgitant volume by utilizing flow convergence method in the mitral regurgitation. An evaluation of the limitations of PISA has been carried out [14] . It has been found to be more precise for eccentric jets and for regurgitation having a circular orifice. The effortless identification of the aliasing line of the hemisphere can be done once the image resolution supports the flow convergence to be clearly visible and a Nyquist limit is chosen such that the flow convergence has a hemispheric shape.
To summarize, we have demonstrated that quantitative assessment of the valve regurgitation is achievable with the aid of Doppler flow mapping of the zone of flow convergence proximal to a regurgitant orifice. The utilization of flow convergence region might seem more attractive on theoretical basis rather than the utilization of the features of a turbulent downstream jet.
CONCLUSION
The precise determination of the severity of the disease is of vital significance with regard to the clinical decision making process for mitral regurgitation. The possibility of surgery necessitates confirmation of severity by a complementary procedure. A novel approach that could precisely and safely quantify mitral regurgitation in theory with the aid of proximal flow convergence method has been proposed in this study. A comparatively greater accuracy was obtained in the quantification mitral regurgitation Doppler image due to the anisotropic diffusion segmentation in the preprocessing stage. We would conclude from our researchers that determination of cardiac output non-invasively by Doppler echocardiography with the aid of the flow convergence method is beneficial. Experimental results have been found to correlate with the several other procedures in existence for cardiac output measurement.
The dynamic nature of lesion and the influence of various hemodynamic and physiologic conditions on it pose a huge challenge for most of the diagnostic techniques working on regurgitation. A comparison of sequential images can now be carried out readily alongside for a more accurate estimation of interval change in the aforesaid adaptive processes and for enhanced timing to carry out the surgery with the aid of the advancements in digital echocardiography. The developments in imaging technologies would facilitate the availability of spatial distribution of the valve regurgitation readily so as to improve measurements of flow convergence, vena contracta and the regurgitant eventually leading to enhancements in the quantization of valvular regurgitation. | 4,912.6 | 2009-02-28T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Multiscale in modelling and validation for solar photovoltaics
. Photovoltaics is amongst the most important technologies for renewable energy sources, and plays a key role in the development of a society with a smaller environmental footprint. Key parameters for solar cells are their energy conversion ef fi ciency, their operating lifetime, and the cost of the energy obtained from a photovoltaic systemcomparedtoothersources.Theoptimizationoftheseaspectsinvolvestheexploitationofnewmaterials and developmentofnovelsolarcellconceptsanddesigns.Boththeoreticalmodelingandcharacterizationofsuchdevicesrequireacomprehensiveviewincludingallscalesfromtheatomictothemacroscopicandindustrialscale.The different length scales of the electronic and optical degrees of freedoms speci fi cally lead to an intrinsic need for multiscale simulation, which is accentuated in many advanced photovoltaics concepts including nanostructured regions. Therefore, multiscale modeling has found particular interest in the photovoltaics community, as a tool to advance the fi eld beyond its current limits. In this article, we review the fi eld of multiscale techniques applied to photovoltaics, and we discuss opportunities and remaining challenges.
Introduction
The European Union (EU) and their public company sector are taking the leading role in the global challenge of increasing energy production from renewable sources. The EU is aiming to fulfill at least 20% of its total energy needs with renewables by 2020-to be achieved through the attainment of individual national targets [1,2]. All EU countries must also ensure that at least 10% of their transport fuels come from renewable sources by 2020. In its revised proposal for a "Directive Of The European Parliament And Of The Council On The Promotion Of The Use Of Energy From Renewable Sources" [3] the EU pledge to become a global leader in renewable energy and ensure that the target of at least 27% renewables in the final energy consumption in the EU by 2030 will be met. It was followed by the ambitious binding target, voted by the European Parliament in January 2018, that the renewable energy sources should account for 35% of total energy consumption by 2030. The 10% bio-fuel target has also been revised to a 6% de-carbonization target in transport.
Since solar photovoltaics (PV) started to take on a globally significant role [4], the cost of PV power has fallen dramatically. Following reports of the International Energy Agency À Photovoltaic System Program (IEA-PVPS) [5], the Renewable Energy Policy for the 21st Century (REN21) [6], and Joint Research Centre (JRC) of the European Commission (EC) [7], the PV market grew significantly in 2017. In total, at least 98 GW of PV capacities were installed in the IEA PVPS countries and in other major markets during 2017. The total installed capacity in the IEA PVPS countries and key markets has risen to 402 GW. Solar PV technology continued to expand in 2017 thanks to the rapid development in China, India and some emerging markets. In the meantime, the US and Japanese market went down, while Europe experienced a slow rebirth partially hidden by the decline of the UK market. In other words, the global PV market outside of China grew by 4 GW to 45 GW while China drove the global numbers up to at least 96 GW. In the same way, the distributed PV market grew significantly for the first time since 2011, with 38 GW compared to 19 GW one year before. More specifically, the cost of the solar cell (SC) by itself is no longer dominant in many terrestrial systems. For example, emerging organic photovoltaic (OPV) technology has the potential to provide cheap solar electricity, given advances in low-cost production and module efficiency and lifetime, and could compete with the established technologies in both roof-and ground-mounted systems if it can achieve a 10-year lifetime [8]. Renewable policies in many countries are moving from government-set tariffs to competitive auctions with long-term power purchase agreements. Increased competition has reduced remuneration levels for solar PV. This competitive price mechanism has squeezed costs along the entire value chain making tenders a cost-effective policy option for governments. Still, the average costs for solar PV remain relatively high. While auction need to be verified over time, they suggest that expanding competitive pricing could result in even lower average costs in coming years [9]. Somewhat counterintuitively, the rapid fall in costs is accelerating incentives in the EU to reach higher efficiencies as the quick growth of PV generated power globally increases its resources. Therefore, the race to close the gap between theoretical limits of solar cell efficiencies and those achieved at the laboratory scale and at industrial level has gained increased impetus.
In this context, research efforts in mainly three directions aim at increasing the cell efficiencies and reducing fabrication costs. Crystalline silicon (c-Si) is by far leading the PV market and there is still an intense research activity in Si based solar cells. Compared to the historical first generation c-Si solar cells that had a very simple design, with an n-type front emitter on a p-type c-Si wafer along with an Al back surface field, significant improvements have been achieved introducing passivation schemes leading to the so-called PERC (Passivated Emitter and Rear Cell) and PERL (Passivated Emitter, Rear Locally-diffused) cells with a record efficiency of 24.7% (that has been reevaluated to 25%) [10] obtained on a small size cell (few cm 2 ). This record has been unbeaten for more than 15 years, but it has been broken several times recently using various new concepts, namely the interdigitated back contact cell (IBC) architecture [11], and, very recently, the so-called TopCon concept, tunnel-oxide passivated rear contact and high-quality top surface passivation [12]. Another very powerful concept is the silicon heterojunction (SHJ) combining crystalline silicon with very thin layers of hydrogenated amorphous silicon (a-Si:H). Doped a-Si:H, p-type and n-type, is used to produce the front emitter junction and the back surface field, respectively, on an n-type crystalline silicon absorber. A very thin undoped a-Si:H (so-called intrinsic) is inserted between the doped a-Si:H layers and the c-Si wafer to achieve outstanding surface passivation. Double side contacted SHJ solar cells have demonstrated efficiencies of 25.1% on 160 mm thick c-Si [13], and record open circuit voltages of 750 mV on 100 mm thick c-Si [14]. Finally, the present record efficiencies are held by a technology combining the interdigitated back contact structure with the SHJ concept. A value of 26.3% has been published in [15], while a record value of 26.6% has been recorded in the best research-cell efficiency chart from the US National Renewable Energy Laboratory (NREL) [16].
The second research direction, designated as the 2nd solar cell generation, pursues technologies that hold promise of major advances in costs and fabrication. These include notably thin films and organic PVs, for large-area fabrication with low material and fabrication costs.
The last direction, known as 3rd solar cell generation, is aiming at technologies promising major advances in efficiencies to overcome the Shockley-Queisser limit [17]. In fact, conventional single energy gap SCs have an ultimate efficiency limit that was established by Shockley and Queisser based on detailed balance arguments. The "balance" in the model comes from the fact that it quantitatively accounts for two opposing fundamental processes that occur in any SC: absorption and emission. For a SC at room temperature the maximum efficiency is further reduced to 40.7% for a band gap of E g = 1.1 eV under maximum concentration condition, and to 31% with E g = 1.3 eV, at one sun (i.e., when the solid angle subtended by the sun shining on a cell at normal incidence is taken into account). The main reason underlying those values is that only photons with an energy close to that of the semiconductor band gap are effectively converted. Photons with lower energy than E g are simply lost (the semiconductor is transparent to them), and photons with higher energy (>E g ) convert their energy at best partially into electricity, wasting the excess energy into heat.
A number of concepts that all share the goal of managing thermalization and optical losses have been devised. These range from multi-junction solar cells (MJSC), which have proved record efficiencies higher than the Shockley-Queisser limit [18], to the research exploiting novel materials and concepts such as the class of nanostructured cells in the fields of hot carrier cells, intermediate band solar cells (IBSC), multiexciton-generation solar cells (MEG), or luminescent solar concentrators (LSC), to name a few, which conceptually promise enormous potential. Their development into real devices is spurring the competition between leading experimental groups worldwide.
All these research directions are heavily guided and supported by tremendous advances in theory and computer modeling. There has been much progress in all relevant computational modeling approaches, ranging from ab initio evaluation of material properties [19], to mesoscale methods describing nanoscale structures and their dynamical electronic and optical properties including processes at interfaces [20,21], to combining these concepts at the device scale [22,23], and finally industrial applications thereof. More generally, the importance of nanostructures in scientific and industrial applications has been constantly increasing during the last decades. The study of the specific behavior of such structures in terms of electronic, optical and thermal properties and its influence on macroscopic device characteristics requires the combination of modeling and characterization techniques at different scales. Indeed, there has been much effort in sharing lessons learned across this range of physical scales, and the resulting field of multiscale modeling and characterization is very active [23][24][25]. Figure 1 shows schematically the main four scales involved, which reach from the atomistic scale up to module level.
Despite major efforts in the development of novel PV concepts, the efficiency reached in their practical implementation is most often substantially lower than what is estimated by theory. Thus, an increased level of realism especially concerning unavoidable losses associated with nanostructures is one of the motivations driving the multiscale simulation efforts in this field.
In response to this, the MultiscaleSolar COST Action network was established in 2015 [25,26] bringing together academic and industrial partners. It is structured into four workgroups reflecting atomistic, mesoscopic, device and industrial scales. The main aim of the network is to explore the challenging implementation of next generation solar cell architectures that require novel multiscale modeling and characterization approaches that capture both the peculiar features at the nanoscale and their impact on the optoelectronic performance at device level.
In this review, we discuss ongoing trends and remaining challenges on the different length scales including active research directions in the network. Therefore, the paper is organized as follows: We first introduce the four distinct scales sketched above and their respective main issues and difficulties to be addressed by the research community, and then give details of selected topics that have evolved into particularly intense activity in MultiscaleSolar.
2 From atomic to industrial scale 2.1 Atomic scale and nanostructure states The understanding of materials for photovoltaics at the atomic level, and parameterization strategies at the mesoscopic scale to further study mesoscopic carrier dynamics in nanostructures on the next length scale is the first crucial step for a multiscalar approach. Modeling on prospective materials or designs is also targeted to improve basic material properties in order to find new strategies for cost reduction leading to next generation solar cells. The main semiconductor materials for PV are studied, including silicon, III-V semiconductors, chalcopyrites, and the halide perovskites.
Different atomistic approaches are used to clarify bulk material properties over different phases, and at surfaces and material interfaces [27]. One has to choose a specific level of theory depending on the targeted property. In most cases, the density functional theory (DFT) [28] is a tool well-suited to characterize the mechanical or vibrational properties of the bulk material. Concerning electronic properties, a more sophisticated level of theory is added on top of the DFT such as hybrid corrections, either addressing the nonlocality of fermionic interaction (B3LYP) [29,30], or screening the long-range Coulomb interaction (HSE) [31] or even better many body corrections (GW) [32]. Finally, the optical properties are strictly obtained by using the time dependent DFT (TDDFT) [33] or DFT plus HSE/GW and on top the Bethe-Salpeter corrections (BSE) [34], with recent results on new photovoltaic materials [35]. In parallel, the simplified, so-called DFT-1/2 method, an approximate quasiparticle method [36], is explored. This methodology is computationally less demanding compared to the HSE or GW many-body corrections. Very recent progress [36] shows that DFT-1/2 method yields accurate band gaps of hybrid perovskites with the precision of the GW method with no more computational cost than standard DFT. This opens the possibilities of accurate electronic structure prediction of sophisticated halide perovskite structures and new materials design for lead free materials. In practice, various and massively parallel codes e.g. VASP [37,38], ABINIT [39], SIESTA [40], Quantum-Expresso [41], CRYSTAL [42] and CASTEP [43] are used to carry out such simulations.
The strategy for the discovery of new materials, such as the 2D, 2D/3D, and 3D hybrid perovskite halides, is based on an accurate description of the electronic properties (band structures) for academic cases such as 2H-PbI 2 , CsPbI 3 and MAPbI 3 . Based on our previous studies [44][45][46][47], the most accurate level of theory (DFT + SOC + HSE or DFT + SOC + GW if possible) will be necessary. In parallel, a comparison between the benchmark results on these materials is mandatory to validate the DFT-1/2 method for this class of materials. A preliminary study is underway. However, close to the band gap energy, it already starts to reveal a good accordance on the band structures and more on the effective masses. If the comparison is a success, a huge execution time improvement will be achieved.
For the industrially most relevant case of the SHJ solar cell, the amorphous component of the interface region has to be created using ab initio molecular dynamics (MD) prior to characterization by DFT methods, and hightemperature annealing via MD drastically affects the coordination of the atoms and resulting density of localized gap states at the interface [48].
Other subjects explored relate to absorbing materials for thin film solar cells and transparent conducting oxides (TCO) [49,50]. The strategy for doping, formation of impurities, stable and ordered defective phases, dielectric, and optical properties of technologically important Cuchalcogenides, either in chalcopyrite or wurtzite phase, is examined at the level of theory that encompasses DFT + HF + BYLYP or DFT + sXC + SCO + TDDFT [51][52][53], or even GW [54]. A similar methodology is used for reduced graphene oxide and similar 2D allotropes as materials for TCO [55].
In our overall strategy, for the wide range of absorbing materials relevant for PV applications, a campaign to parameterize semi-empirical (k · p or tight-binding) [56][57][58] simulation has been proposed. The DFT plus GW corrections helps to get the correct offset of the heterostructure through the Van de Walle and Martin method [59]. Moreover, the energy gaps, optical dipoles, deformation potentials, elastic and piezoelectric properties will be extracted by DFT [60]. In this task the kppw code, parallel implementation and symmetry adapted [61], plane waves based k · p code [53] and the real space parallel k · p code as implemented in TiberCAD [62] will be used. Our efforts to parameterize k · p Hamiltonian has also lead to a new Brillouin zone interpolation scheme, which is being used to resolve fine features of the dielectric functions with drastically reduced computational costs [63]. Another objective is to calculate carrier mobilities including electron-phonon interactions, e.g. in PbTe, and other materials [64].
Complementing this, expertise in tight-binding modeling [65,62,[66][67][68] is used to build empirical Hamiltonians for novel materials and heterostructures. Input from DFT calculations such as piezoelectric tensors can also be introduced to simulate the electromechanical properties of these heterostructures. Simple hybrid perovskite halide materials serve as an academic study on this point.
Mesoscopic carrier dynamics in nanostructures
On the mesoscale, carrier dynamics in nanostructures addresses the critical impact of nanostructure-based solar cell device components on the performance of the entire photovoltaic device. Since on the one hand, nanostructure properties depend on the actual microscopic configuration in terms of size, shape and composition, and on the other hand, they need to be propagated to the device level in order to assess the impact on device characteristics, multiscale approaches are crucial in both implementation and exploitation of mesoscale simulation and characterization.
The role of the mesoscale dynamics as the linking element between microscale material properties and the macroscale device characteristics defines natural interfaces between research groups working on the atomic and device scales, respectively. On the side of the atomistic activities, local mesoscopic models are parameterized from microscopic information in terms of basis functions for the representation of mesoscopic Hamiltonians. The mesoscopic models form the basis to compute local rates for different dynamical processes involving nanostructure states, such as generation, recombination and transport of charge carriers, relying on (quantum)-kinetic methods such as the non-equilibrium Green's function formalism (NEGF) [69] or kinetic Monte Carlo (KMC) [70], but also basic application of Fermi's golden rule. On the macroscale side, reviewed in the next section, the local rates and mobilities are inserted in macroscopic continuum models for electronic transport such as the standard driftdiffusion-Poisson equations coupled to Maxwell solvers for the light propagation, which allow for the multi-physics modeling of realistically extended solar cell device T. Abu Hamed et al.: EPJ Photovoltaics 9, 10 (2018) structures including complex contact geometries or absorber morphologies. In terms of experimental characterization, the mesoscale information concerns the nanostructure aspects (in contrast to atomically resolved or bulk properties), and can be used for both validation and parameterization of (empirical) mesoscale models in the case where ab initio parameterization is not available.
Several different third-generation or high-efficiency photovoltaic device concepts with implementation approaches based on nanostructure components are currently investigated. For example, there are ongoing activities on mesoscale carrier dynamics in the context of the SHJ solar cell, III-V quantum dot solar cells (QDSC) for multi-junction and intermediate band (IB) applications and in organic photovoltaics (OPV).
In the SHJ case, the nanoscale region of the interface between amorphous and crystalline silicon with decisive impact on the device properties is created atomistically and from first principles using ab initio molecular dynamics, and the electronic structure is evaluated with density functional theory [48]. The charge transport and recombination across the interface is simulated using NEGF, and complex interdigitated contact configuration as well as light management via textures and antireflection coatings are considered in an integrated 3D TCAD approach [24].
For QDSC, the electronic structure of InAs-GaAs quantum dots is determined using microscopic continuum k · p, plane wave based methodology [71], which provides a coarse-grained localized basis in terms of quantum dot Wannier functions [72]. The mesoscopic Hamiltonian of the finite quantum dot array using such a basis captures both the effects of dot-to-dot variation in the couplings and the impact of contacts [73], and it provides both carrier dynamics and device characteristics at the mesoscale via the NEGF formalism.
In the OPV application, the complex organic blend morphologies can be generated via the Metropolis Monte Carlo technique [74,75]. Subsequently, effective hopping rates, carrier lifetimes and mobilities can be extracted from KMC simulations and inserted into a macroscopic driftdiffusion-Poisson solver, thus linking from mesoscopic to macroscopic scales [76]. The electronic structure and polarization at organic interfaces can also be studied on ab initio level using the charge patching method within DFT [77] or DFT-based tight-binding [78], and exciton formation as well as ultra-fast separation of photogenerated charge carriers at such interfaces have been assessed based on a density matrix formalism [79,80].
On the characterization side, surface photovoltage spectroscopy is applied to the study of the absorption of dilute nitride films with application in multijunction devices and compared to both photoluminescence (PL) and the dielectric response as provided by Fermi's Golden Rule based on electronic structure computation in the empirical tight-binding approach [81,82]. In addition, dilute nitrides have been studied by X-ray photoelectron spectroscopy (XPS), Raman and PL spectroscopy to determine the degree of atomic ordering in the quaternary alloy InGaAsN during the liquid phase epitaxial growth at near thermodynamic equilibrium conditions and its influence on the band gap formation [83].
Macroscopic device characteristics
The third length scale studies solar cell characteristics at the macroscopic that is on device level both from a theoretical and characterization point of view. Modeling of PV devices is performed using different types of models, ranging from physics based Technology CAD simulations to analytic models. Special focus is given to the definition of multiscale strategies, in particular coupling device scale modeling to the modeling of PV materials performed at mesoscopic scale, in order to develop simulation approaches that can provide a higher degree of accuracy and predictability. This is particularly important in 3rd generation PV, which includes a range of concepts. For example, possible polarization domains or mobile lattice defects in hybrid perovskites need to be taken into account, but also particular properties of nanostructures like quantum dots, such as carrier confinement and modified scattering rates, need to be considered in device models. Similarly, complex morphologies for enhancing light absorption and carrier generation (such as in organic bulk-heterojuntion solar cells), and novel concepts based on intermediate bands, hot carrier generation or multiexciton generation, which are not modeled out-of-the-box in current commercially available device simulation software, require special care and a combination of modeling on multiple scales.
Several TCAD software industrial providers with interest in multiscale and multiphysics approaches, among which are Silvaco [84], Nextnano [85] and TiberLab [86], are actively involved in solar cell device simulations and in the development of linking approaches to the mesoscale dynamics as described in the previous section. This is not limited to electronic transport alone, but the concept is also applied to optical device performance and light management.
A series of specific issues on the macroscopic device characteristics level have been identified in the PV community. Currently under study are, for example, modeling and characterization of c-Si based PV devices and tandem including device architectures combining c-Si and Si nanowire solar cells [87], tandems with metal oxide cells on top of Si cells [88], and potentially other tandem architectures. A cross-cutting major issues is the characterization and modeling of defects, which is also a multiscale axis of interaction with research on other length scales [89,90].
Of particular interest in the field of organic photovoltaics is the coupling to the mesoscopic and microscopic modeling in order to obtain reliable material parameters such as densities of states, electronic and excitonic transport parameters and interface related properties. For bulk heterojunctions specifically, there is a need to include the effects of the morphology of the material blend in device level simulations beyond the effective medium approximation [91]. Further, extraction of transport and carrier dynamics parameters from kinetic Monte Carlo models (KMC), or concurrent coupling of KMC with semi-classical drift-diffusion type models to study local details in full, potentially 3D device structures is of interest.
The study and exploitation of optical effects such as plasmonic enhancement, scattering at nanostructured particle layers and up-/down-conversion for PV applications both from an experimental and theoretical perspective has raised much interest. Modeling is in particular related to improve the description of quantum effects in plasmonic structures using the input from atomistic models. Objects of study are for example regularly or randomly ordered dispersed nanoparticle distributions, or nanowires for light management.
Furthermore, activity is underway on modeling of quantum confined structures by combining mesoscopic, quantum kinetic models with semi-classical or analytical models. This includes for example the combination of nonequilibrium Green's function based approaches with driftdiffusion models, for a locally accurate description of carriers in quantum confined regions.
One of our goals when dealing with macroscopic device characteristics is to define the interfaces to link to the mesoscopic and atomistic models. This will allow the implementation of device level modeling approaches including microscopic details, necessary to fully assess the potential of 3rd generation PV concepts, in a similar way as has been demonstrated e.g. for III-nitride based LEDs [92] or quantum information processing (QIP) devices [93]. It will then be possible to calculate the key solar cell performance parameters needed at module level to evaluate the industrial perspectives discussed next of the different PV concepts.
Industrial perspectives
The status and evolution of different PV technologies, their corresponding markets and implementation plans are addressed through a number of reports, roadmaps, and white books. The most relevant are those published by the European PV Industry Association (now SolarPower Europe), the European PV technology Platform (ETIP-PV), national implementation plans (such as the ADEME Roadmap in France), the Global PV Industry & Technology Platform (SolarUnited, former International Photovoltaic Equipment Association IPVEA), and PV program of the International Energy Agency (IEA PVPS).
For instance, the 2017 SET-Plan Declaration on strategic targets of ETIP-PV in the context of an initiative for global leadership in photovoltaics includes an implementation plan that contains concrete R&I activities for achieving these targets. The scope of these R&I activities is however large and covers the whole value chain from advanced materials and technologies to multiscale system integration and usages. All roadmaps point to the need to pursue intense research on silicon technologies (PERC, HIT, PERL, IBC) to increase the efficiency up to the Shockley-Queisser limit for a single junction. Even if the cost of production of the photovoltaic modules continues to drop, and even if the cost of installation makes PV a competitive energy, each percentage of yield and each year of lifetime earned remains a criterion for photovoltaic operation. From this point of view, modeling makes it possible to realistically determine the functional characteristics of silicon solar cells and modules, but is not able to predict their evolution over time. For quality and reliability issues, it is then necessary to use standardized tests and experimental test-error methods.
In order to be predictive enough and therefore useful for industrial end-users, simulation and modeling efforts are critically depending on input from industrial partners, both for established and emerging PV technologies. It is therefore important to build bridges between academia and industry for mutual exchanges on which technologies hold industrial promise, what the latest technical and industrial developments are, and which partnerships to build. This delivers vital feedback to the research community. One of the aims of this industry oriented research collaboration is to gather and exchange technical data, best practices, and standards in order to yield multiscale modeling of the structures recommended for study. It allows also a coordination of multiscale characterization in order to validate the modeling methods and approaches developed.
MultiscaleSolar has applied COST short term missions to these ends by allowing researchers to investigate academic and industrial priorities on site in academia and in industry. Research has involved formal exchanges, interviews, online questionnaires/survey and other methods known in social science and market research studies. These outreach research activities connect the photovoltaic community, industries and their customers to understand their products, services, requirements and visions. Thus, PV roadmaps and the management of costs in fabrication can be improved with the help of a large number of experts. While gathering high impact knowledge in closely related fields, we also spread these acquired skills and discoveries to increase general awareness. A key means to achieving this impact is through our industrial collaborators which span multinationals such as Electricité Industry-near projects such as light optimization, 3D printed optics, characterization on module level and modeling activities, such as optical (Ray-Tracing) and photonic (FEM, FDTD, PWE) simulations can be coordinated by partners at the interface between fundamental research and industry to close the research and marketing gaps. This includes keeping track of the impact of recent research results on simulation, design, fabrication and applications, and also scrutinizing promising 3rd generation concepts with respect to their industrial feasibility and possible commercialization.
From these activities primarily progress in optical aspects has resulted. This involves fabrication methods on the one hand, with research on 3D printing developed by academic MultiscaleSolar partners in particular being singled out as a strong candidate for implementation in industrial contexts. On the optical front, the outreach T. Abu Hamed et al.: EPJ Photovoltaics 9, 10 (2018) research missions in industry have been complemented by technical research missions. One example we want to single out is a mission on the links between structural and optical disorder in distributed Bragg reflectors. This work has yielded methods to characterize imperfections in optical resonators of great interest for industry in PV and beyond.
A second example related to building integrated PV (BIPV) is the development of optimal color-forming structures for maximal efficiency colored photovoltaics. This showcases the exchange between the mesoscale optical modelling activities, the device level consequences, and the application to industry, involving on-site development of the concept with industrial partners. This work has been submitted to the IEEE Journal of Light Wave Technology.
A concluding recent example of excellence in applied research emerging from our Action is significant progress in using inkjet printing to accelerate the process of attaching light absorbing dyes to a nanocrystalline TiO 2 photoelectrode [104].
Selected activities in MultiscaleSolar
In the following sections, we present several specific research directions within the network. Activities across all material platforms have emerged. For organic solar cells, we evaluate solar cell performance using a nanosized, intermixed morphology of acceptor and donor materials simulated with Monte Carlo and finite element methods. We discuss research on intermediate band gap cells and give a summary on ongoing optimization and design studies of multijunction solar cells. A further emphasis is put on the study of CdSe/CdTe type-II QDs based solar cells for multiple exciton generation. Optical properties of solar cells employing nanostructured front surface layers made from dielectric and plasmonic materials are studied with Rigorous Coupled Wave Analysis enhanced with results from microscopic theories regarding strong coupling and other non-classical effects.
Simulation of organic solar cells including bulk heterojunction morphology
Today's organic solar cells (OSC) are characterized by a complex interface between donor and acceptor materials in order to efficiently split the photogenerated excitons in the organic semiconductors [105]. This is known as the bulkheterojunction architecture (BHJ). Similarly, in dye sensitized solar cells (DSCs) two materials, an electron and hole transporter, respectively, are intermixed and at the common interface a molecular dye is inserted in order to absorb light. Finally, perovskite solar cells often have a mesoporous layer, which seems to improve stability [106]. All three device types share a common feature: the active layer is partially or totally composed by intermixing two materials, which have a mutual complex interface where fundamental processes take place. In an OSC, for example, the interface is where photogenerated excitons split into free charge carriers, but also where undesired recombination processes occur.
A fundamental challenge for device efficiency in BHJ is to find and control the optimal phase separation scale for exciton dissociation interfaces and charge transport channels. In many cases microscopic investigations on a working device is experimentally challenging. As a result, simulation comes into play for verifying the proposed models and guiding the device optimization. One way to model charge transport properties in such devices is to solve the drift-diffusion (DD) equations [91,107]. They offer an approach at a macroscopic continuum level, low computational effort, and with good agreement to experimental data.
The multiscale aspect in this specific type of simulation is given by a suitable inclusion of the real blend morphology in the macroscopic device simulation, which goes beyond the commonly used effective medium approximation (EMA). The latter treats different intermixed materials in the same region as one effective material [108,109]. However, as shown in [110], EMA may introduce drastic approximations. The reason is that the internal interface plays a fundamental role in BHJ solar cells and effective medium approximations completely neglect this fact. Therefore, a tool has been implemented in the Multi-scaleSolar network to generate a model of the real internal morphology of a BHJ which can be combined with a DD model to simulate the device including the real internal structure of the blend [110,111].
One of the key issues is the generation of a suitable morphology and its subdivision into polyhedra for subsequent finite element analysis. To generate the morphology of an arbitrary, randomly intermixing blend a simple stochastic method commonly used for kinetic Monte Carlo simulation has been adopted [70,112]. A 3-dimensional spin system, where equal number of spin-up and spin-down represent the two materials, respectively, is annealed using a Metropolis Monte Carlo algorithm.
In such an approach, the average size of nucleated spin clusters can be tuned by limiting the number of spin-swap steps. The resulting morphology (referred to as MMC morphology hereafter) needs to be repaired and optimized for the sake of numerical simulation. This is achieved sequentially on voxel level and mesh level. On voxel level, the raw MMC morphology is cleared via the removal of island spins and condensing spurious spins at the cluster boundaries. Furthermore, features like sharp corners or thin wedges near the volume boundary are eliminated by suitable algorithms.
Next, the interface (iso-surface) of the MMC morphology is extracted using a Marching Cubes algorithm, resulting in a triangle mesh. Optimization at the mesh level is performed by smearing out the bumpiness using a modified Laplacian smoothing algorithm, which preserves the volumes and prevents from mesh distortions [113,114]. After smoothing, the interface is further remeshed mainly for two purposes: 1. resampling the mesh for controlling the mesh resolution, and 2. improving the mesh quality, i.e. making triangles as equilateral as possible. To generate finally the 3-dimensional finite elements on the modeled geometry, the mesh morphology can be readily fed into the majority of finite element mesh generators, such as Gmsh [115].
Intermediate band solar cells
In order to increase the efficiency of solar cells, the principal aim must be to make better use of the solar spectrum [18,117,118]. One such improvement is to take advantage of the incident photons with sub-band gap energy to be absorbed and contribute to increase photo-current, while in the same time the output voltage of the device would ideally be preserved at its maximal value that is determined by the largest E g that exists in the system (i.e., host material energy gap). A possible solution to that problem emerged in the form of the intermediate band solar cell (IBSC) scheme [119,120]. The limiting efficiency of the IBSC concept for full concentration and at room temperature is 63.2%, with optimized absorption energies at ∼1.2 eV, ∼0.7 eV and ∼ 1.9 eV [119], therefore significantly overcoming the Shockley-Queisser limit of 40.7% for a conventional single-gap SC under the same operating conditions.
Conceptually, an IBSC is manufactured by sandwiching an intermediate band (IB) material between two selective contacts, of p and of n type (see Fig. 3). The IB material is characterized by the existence of an electronic energy band of allowed states within the conventional energy band gap Eg of the host material, splitting it into two sub-gaps, E gL and E gH . This band allows the creation of additional electron-hole pairs from the absorption of two sub-band gap energy photons. Under this assumption, first photon (1) pumps an electron from the valence band (VB) to the IB, and a second photon (2) pumps an electron from the IB to the conduction band (CB). To this end, it is necessary that the IB is half-filled with electrons so that it can supply electrons to the CB as well as receive them from the VB. This two-photon absorption process is illustrated in Figure 2 and has been experimentally detected in IBSCs based on quantum dots [121]. The electron-hole pairs generated in this way add up to the [59,62,66], (b) array of QD as an absorbing material of the IBSC is treated at semi-empirical mesoscopic quantum mechanical level [20,72], (c) using methods of quantum engineering from (b) the material with new targeted functionality is designed [119], (d) such QD array material is described and the device level by drift diffusion model informed from stages (b and c) in order to predict the IV characteristics and efficiencies of IBSC [20,73], (e) the design is ultimately leading to the fabrication of actual CPV IBSC module consisting of 4 IBSC [22,122]. conventionally generated ones by the absorption of a single photon (3), the third one, pumping an electron from the VB to the CB. Therefore, the photocurrent of the solar cell, and ultimately its efficiency, are enhanced since this increment in photocurrent occurs without degradation of the output voltage of the cell. The output voltage is given by the split between electron and hole quasi-Fermi levels, E FC and E FV , that is still limited by the total band gap E g . The robustness of the IBSC concept allows finding various energy gap combinations that provide for very similar efficiency. This is of particular importance for QD based designs as it opens up a much larger design space for IB solar cells.
Multijunction solar cells
Thermalization loss in solar cells as the main limit on efficiency has one classic solution, which is the multijunction solar cell (MJSC) [123]. Put succinctly, this design absorbs photons in semiconductor regions with bandgaps equal to the photons' energies, thereby eliminating thermalization. The theoretical limiting efficiency in this case is 86.6% [124]. Two main approaches are mechanical stacking where sub-cells are opticaly coupled, and monolithic integration where subcells are coupled in series optically and electrically.
The fundamental multiscale questions linking the research topics in this field is the mechanical, electronic, and optical coupling between the subcells. On the structural coupling front is the integration of heterogeneous materials, including for example III-V semiconductor integration on Si where atomistic to device scales are linked by material property modifications due to heterojunctions including strain effects, and to nanostructuring including quantum scale effects. This section sketches the technological issues before summarizing activities in MultiscaleSolar addressing these optical and electronic coupling issues.
In monolithic configuration, subcells are connected in series rather than in parallel. As a consequence, all subcell photocurrents must be equal at the operating point for maximum efficiency, requiring careful current-matching. A second consequence is that compatible materials must be found, that is, semiconductor materials of successively smaller bandgaps which can be grown in stacks. The major incompatibility is in many cases a substantial difference in semiconductor lattice constants, which leads to unacceptably high densities of defects in most materials. While a detailed discussion of this is beyond the scope of this article, we only mention that lattice matched materials combinations dominate this field, while lattice-mismatched solutions using sacrificial relaxed buffer layers achieve results that are nearly equal to the best lattice matched results. We mention finally the use of nanostructured materials in the guise of superlattices, where strain balancing between lattice-mismatched layers may also be used as a solution. An example of this is the multiple-quantum-well MJSC. At present, the series connected monolithic MJSC design is by far the most efficient design available.
Within the consortium, MJSCs are a priority for a number of partners. While a detailed review of these activities is beyond the scope of this article, we mention activities in this field, which are underway. The first is a three terminal device combining high efficiency silicon cells and nanowire layers. The physical novelty of the proposed structure being developed is derived from concepts of front surface passivation combined with band-gap engineering and light scattering and is in the early stages of development.
A second silicon based approach is the development of III-V/Si tandem and triple junction structures [125]. This uses innovative defect-free III-V on Si growth methods [126]. Current activity within MultiscaleSolar is exploring participation by MultiscaleSolar partners in order to develop optical and transport questions due to the nanostructured nature of the III-V component in the proposed structure. A third active approach is the monolithic ultra-high-efficiency approach being developed by partners working on triple junction [127] and four junction solar cells.
Recently, we have also developed an automated tool for the design of MJSCs, in particular for III-V materials technology, and for efficiency optimization, following the paradigm of heuristic global optimization methods based on genetic algorithms [128].
We conclude this brief summary by noting that, guided by theoretical studies (see for example [129]), the novel materials studied from multiscale perspectives are being studied for multijunction designs. This includes both organic materials and the rapidly developing field of perovskite solar cells, in combination in particular with Si. This field of multijunction solar cells, being the designs achieving efficiencies over 45% (albeit under concentration) are a major growth area from the perspective of materials as well as next generation nanostructured designs.
Multi Exciton Generation solar cells
In a standard solar cell, all of the energy of an absorbed photon in excess of the effective bandgap of the material is dissipated as heat and essentially wasted. In colloidal quantum dots (QD) (for example, made of CdSe, CdTe, PbSe etc.), this excess photon energy can be utilized via a process known as Multi Exciton Generation (MEG) or direct carrier multiplication (CM). In this process, the high-energy photon creates a high-energy exciton that can decay into a biexciton. For this process to occur, and under the assumption that the electron mass is much lower than the hole mass, the energy of the exciton has to be at least twice as big as the energy of the effective optical gap, i.e., E en ÀE h0 ≥ 2 |E e ÀE h0 |, where e0 and h0 denote the electron and hole ground states and en is a state higher in the conduction band. This allows for greater utilization of highenergy photons and dramatically increases solar cell efficiency. The MEG process competes with other radiative and nonradiative recombination and relaxation processes, most of all with Auger cooling [130][131][132].
Within a simplified model, optical excitation of a QD preserves the symmetry of the wave function and, hence, both a photoexcited electron and a hole are characterized by the same set of quantum numbers that determine the angular momentum (l) and the number of nodes in its radial component (n) [133]. As a result, the energies of photoexcited electrons (E e ) and holes (E h ) indicates that the energy of a photon in excess of the energy gap, (h w À E g ), is distributed between the electron and the hole in inverse proportion to their effective masses m e and m h , i.e. E e / E h = m h /m e .
Energy conservation requires, that the promotion of the secondary electron across the energy gap can only occur if the greater of two energies E e and E h is equal to the gap E g , which leads to the following expression for the CM threshold [134]: In the specific case of m e = m h , it predicts a CM at a threshold of 3 E g . This value is smaller than for bulk semiconductors (4 E g ), which is a direct consequence of the fact that for QDs the secondary-electron excitation step is not subject to translational momentum conservation.
To further increase the solar cell efficiency, it is necessary to optimize the shape and composition of the QDs in order to maximize the ratio of MEG to cooling processes. Theoretical predictions indicate that MEG has the potential to enhance the efficiency of a single-gap cell from 33% to 44% [135,136]. Full realization of this potential requires that the energy threshold for MEG be minimized. An attractive interaction between excitons reduces the threshold by the biexciton binding energy B XX , but this has been found to be small (À10 meV) for type-I QDs.
Colloidal type-II CdSe/CdTe QDs, offer extra degree of freedom in designing MEG devices [137,138]. Previous calculations of BXX in type-II CdSe/CdTe QDs, have found a large repulsion between excitons, while experiment suggests the opposite, i.e., stronger attraction between excitons in the biexciton. To resolve this ambiguity and to gain deeper insight into the excitonic structure of colloidal core/shell CdSe/CdTe type-II QDs, many-body effects like correlation and exchange on the excitonic structure in this class of QDs are investigated in MultiscaleSolar. In addition, the effect of the reduction of the MEG threshold by strong biexciton binding on the ultimate efficiency of an ideal solar cell is object of study. Finally, the extraction of the photogenerated excitons, which has not been well studied so far, is ideally suited for the application of multiscale modeling approaches.
Optical modeling at the nanoscale
Within optical modelling and optimization of solar devices with nanostructured front surfaces, we concentrate on improving existing computational schemes beyond common nanophotonics, including non-classical interaction effects on the ultimate nanometer scale and strong local coupling into large-scale simulations and interaction with, e.g., lattice and hybrid photon-plasmon modes.
Nanostructures and nanoparticle arrays allow efficient forward scattering of incident light, see Figure 4, increasing the exposure of an underlying solar cell to photons. Current research efforts concentrate on functionalized layers in addition to standard antireflection coatings to optimize light trapping and exploit local field enhancement of metal nanoparticles via plasmon modes. Plasmon-assisted processes such as direct increase of the charge carrier generation or indirect enhancement of energy conversion effects from neighboring nanocrystal structures have received much interest [139][140][141][142][143][144][145].
Fabrication techniques have made tremendous advances in the past decades, in chemical synthesis (etching) [146][147][148][149][150], lithography [151][152][153], self-assembly through (laser) annealing [152,154,155], and nanoimprint [156][157][158][159]. Typically, self-assembly and chemical synthesis yield random nanoparticle (NP) layers at reduced costs, while lithography techniques allow for high precision in size, shape, and placement with nanometer resolution [152]. Nanoimprint in particular is a promising route to combine the best of both worlds [158] as it allows keeping costs low using a single imprint template that in turn can be the result of a complex optimization procedure.
Aging and oxidization effects in metals and the shortrange of field effects pose practical challenges. Bio-markers and molecular rulers [160] allow to bring photo-active materials such as rare-earths [143], quantum dots [140,142] and dyes close to NP surfaces. Plasmonic field enhancement as well as scattering effects of nanoparticles can be tuned to enhance photon upconversion in rare-earth ions [161], enabling the conversion of two or more low-energy photons into one higher-energy photon, capable of electron-hole-pair generation in the photovoltaic cell [162]. While upconverting materials are generally placed on the back, downshifting or down converting materials can be placed on the front side to lower the energy of high energy (UV) photons and thereby reducing e.g. thermalization losses [163].
Within computational nanophotonics, a wealth of analytic and numerical tools are available to describe the optical properties of NPs, including arbitrary shapes [148,149,164], particle clusters [165], two-dimensional particle arrays [164,166,167] as well as three-dimensional photonic crystal structures [168].
The key task is the integration of electro-optical effects at the nanoscale and combined coupling of excited nanocrystals with complex energy transfer mechanisms. Mesoscale electron dynamics, surface and thermal effects are not captured in classical electrodynamics and semiclassical approaches are pursued that allow maintaining the advantages of computational nanophotonics through extended theories [167,168,170,171]. Though such effects are highly localized, optical coupling can lead to an impact on a larger device via retardation and lattice effects [166].
First principle theories can address charge carriers and their mutual interaction with light in detail [172,173]. However, the computational effort increases rapidly with the system size. Microscopic theories such as the RPA (Random Phase Approximation) allow investigating fundamental damping mechanisms arising from electron scattering in the bulk material, with the particle surface and with other electrons. Moreover, it allows addressing electron irradiation effects stemming from the accelerated movement of the oscillating electrons forming the plasmon excitation [170,171,174,175].
Spatial dispersion of electron-electron coupling has been studied in semiclassical methods mostly with the hydrodynamic approach [169,176,177], where the dynamics of the electron plasma is separated from polarization effects of bound electrons. This theory yields an additional wave solution, longitudinal in character, and can be solved for different geometries leading to nonlocal extensions of e.g. Mie and Fresnel coefficients [169,178]. The main observations of nonlocal theories in nanosized particles are a blueshift of the plasmon resonance with respect to the common local approximation and plasmon broadening.
The advantage in semiclassical models is their mostly analytic formulation and thus compatibility with existing numerical procedures.
Multiple scattering techniques [165] allow studying particle clusters based on the scattering matrix of a specified particle type [179]. This is in particular interesting for layers with random distributions [152]. The scattering matrix for NPs of arbitrary shapes can be obtained via e.g. BEM (Boundary Element Method), DDA (Discrete Dipole Approximation) or FEM (Finite Element Method) [143,148,164]. Extensions for devices and large-scale nanostructures including the aforementioned mesoscale electron dynamics were studied [180].
Several theoretical approaches exist to describe nanostructured layers [168]. Complex layered systems are best modelled within a scattering or transfer matrix approach that for homogeneous layers relies on Fresnel equations. These are available with quantum corrections [178]. For dielectric particles, the RCWA (Rigorous Coupled Wave Analysis) or FMM (Fourier Modal Method) is a fast and reliable computational approach [167,181] that casts the electromagnetic wave equation into an eigenvalue problem via expansion in plane waves and Fourier transform of material parameters. Here again, formulations including quantum corrections exist [166]. These can be coupled with DD equations [91,107] as discussed in previous sections.
Metal NPs, however, pose limitations to these methods due to the high refractive index contrast with their environment. FDTD (Finite Domain Time Difference) and FEM are fully numerical alternative tools to investigate complex architectures. Hence, we investigate alternative materials with plasmonic properties such as conductive nitrides [182].
Conclusions
In this article we have collected together the four complementary, relevant scales from the atomistic up to industrial scale, sketched the progress and remaining challenges on these length scales as well as interactions between research in these areas yielding a multiscale research analysis of next generation photovoltaic energy concepts. We have sketched the main issues and solutions obtained, illustrated by selected examples of results obtained.
Concerning the atomic scale, we have seen that the central issues are two fold, involving both issues of the specific level of theory (DFT, corrections thereto, time dependent generalizations for optical properties and so on), and computational constraints. These methodological issues can in part be solved by coupling to mesoscopic models. The work reported has led to new Brillouin zone interpolation schemes significantly reducing computational requirements, while extending calculations to physical properties. The properties reported include calculations of optical, transport, and electromechanical constants from atomic scale materials modelling which have been compared to microstructural characterization. Solutions to the main issue of materials properties modelling and parameterization as output to other scales has been demonstrated as a result.
The main issue on the mesoscopic and nanostructure scale is the propagation of materials properties based on inputs from atomic scale modeling up to the device scale. The work reported here successfully developed tailored solutions for important structures defined at the device scale, but also by industrial level requirements. These tailored solutions consist of parameterizing material properties from ab-initio atomic models and applying a number of techniques (principally KMC and NEGF) to compute optical and transport rates for the structures defined on device level. The issues for the mesosopic scale of acting as a means of implementing atomic level outputs for application at device level have, in the work reported, shown progress in multiscale modelling at device level as a result, reported in detail in resulting publications referred to in the main text.
We have seen that the main focus of research at the device scale is on 3rd generation photovoltaics and, naturally, nanostructure states. Issues in modelling include both analytical and numerical TCAD approaches, where the perspective of the device level is helping to define structures of interest for the mesoscopic or nanostructure scale modelling. As a consequence, device modelling reported in this paper has benefited from coupling nanostructure state models (including KMC and NEGF) with classical DD models. This still preliminary coupling is the subject of ongoing work and future development.
From the point of view of industrial perspectives, the over-riding issue is the identification of promising new routes bearing potential of positive societal impacts. Identification of promising and viable technologies is at the root of the modelling hierarchy, ranging from device level down to the atomistic material modelling. Therefore, we have presented methods put into practice to identify relevant industrial trends. As reported in the main text, this has included elements of industrial activity within MultiscaleSolar on modelling and fabrication fronts, but has most importantly taken the form of surveys of industrial trends. For this purpose, on-site studies by researchers from industrial and academic contexts has proven as valid tool, informing the choice of structures investigated on the device scale in particular.
In this review of the main issues and the main achievements of MultiscaleSolar, we see a developing knowledge exchange between researchers working on different scales, and in particular between mesoscale and macroscale device physics. On the materials based topics, we see input from ab-initio materials properties modeling feeding in to both optical and transport simulation for next generation structures, with the microscale research providing direction on the development of materials and characterization feeding through the mesoscale and macroscale up to the application-oriented research on industrial perspectives.
In addition to the scientific and technical work carried out, MultiscaleSolar, through the interactions shown by the contributors to this paper, is combining research expertise and infrastructure across the European Union. The complementarity of groups in multiscale analysis is thrown into sharp focus by the range of expertise which is to be found in different academic and industrial traditions.
However, although this joint paper shows progress on the range of organic and inorganic materials, and on tried and tested solar cell designs as well as third generation concepts, we conclude that there remains much to be done. The theoretical efficiency limits for third-generation concepts remain far above what has been achieved to date. One of the expected outcomes of MultiscaleSolar is bringing multiscale techniques to bear to understanding what the roadblocks to achieving these efficiencies are. Ultimately to either injecting greater realism in achievable efficiencies, or identifying routes to achieving them in these emerging technologies. | 12,574.8 | 2018-10-23T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics",
"Materials Science"
] |
Note on the Difference between the Principal Balance Analysis with NearestBalance and Constrained Methods
This letter is about a particular case of approximation of a compositional vector by the nearest balance—its application to principal balance analysis (PBA). It compares two methods: our NearestBalance approach (1) and the constrained method suggested previously in references 2 and 3. We recognize that they have the same underlying idea, and we apologize for having missed this fact in our original paper. Still, due to algorithmic details the constrained method provides a suboptimal solution, while the NearestBalance approach guarantees the minimization of the approximation error. This letter was motivated by a discussion at the conference on compositional data analysis CoDaWork2022. We presented our nearest balance approach to approximation of a compositional vector (1) and mentioned its application to principal balance analysis (PBA). J. A. Martin-Fernández asked what the difference is between this method and the constrained algorithm suggested in reference 2 and applied to PBA in reference 3. Indeed, both of them state that they approximate the principal components by the nearest balances. Here, we would like to clarify the difference—by emphasizing that the authors of reference 3 were the first to suggest the idea itself but showing that the algorithm they used is suboptimal, while ours (1) provides the exact solution. As a first step, we made note of the fact that results of the methods do not coincide, though they solve exactly the same problem. Our paper (1) contains application of the algorithms to a Crohn’s disease data set (4). Implementation of the constrained method is taken from the coda.base package, and that of the other one is from the NearestBalance package. The algorithms provided a slightly different explanation of the variance by the first principal balance (25% by NearestBalance versus 24.71% by the constrained method). We extended the comparison to check whether there was a difference in the balances themselves and their angle to the first principal component (PC1). It was actually present: the number of taxa included in balances is different (nNB = 27 versus nC = 29), and the NearestBalance provides a smaller angle to PC1 (aNB = 25.14° versus aC = 26.14°). As both methods target minimizing this angle, the constrained algorithm does not exactly reach the goal. The difference in results is explained by the differences in the algorithms themselves. In brief, both of them are based on expression of the angle cosine:
T his letter is about a particular case of approximation of a compositional vector by the nearest balance-its application to principal balance analysis (PBA). It compares two methods: our NearestBalance approach (1) and the constrained method suggested previously in references 2 and 3. We recognize that they have the same underlying idea, and we apologize for having missed this fact in our original paper. Still, due to algorithmic details the constrained method provides a suboptimal solution, while the NearestBalance approach guarantees the minimization of the approximation error. This letter was motivated by a discussion at the conference on compositional data analysis CoDaWork2022. We presented our nearest balance approach to approximation of a compositional vector (1) and mentioned its application to principal balance analysis (PBA). J. A. Martin-Fernández asked what the difference is between this method and the constrained algorithm suggested in reference 2 and applied to PBA in reference 3. Indeed, both of them state that they approximate the principal components by the nearest balances. Here, we would like to clarify the difference-by emphasizing that the authors of reference 3 were the first to suggest the idea itself but showing that the algorithm they used is suboptimal, while ours (1) provides the exact solution.
As a first step, we made note of the fact that results of the methods do not coincide, though they solve exactly the same problem. Our paper (1) contains application of the algorithms to a Crohn's disease data set (4). Implementation of the constrained method is taken from the coda.base package, and that of the other one is from the NearestBalance package. The algorithms provided a slightly different explanation of the variance by the first principal balance (25% by NearestBalance versus 24.71% by the constrained method). We extended the comparison to check whether there was a difference in the balances themselves and their angle to the first principal component (PC1). It was actually present: the number of taxa included in balances is different (n NB = 27 versus n C = 29), and the NearestBalance provides a smaller angle to PC1 (a NB = 25.14°versus a C = 26.14°). As both methods target minimizing this angle, the constrained algorithm does not exactly reach the goal.
The difference in results is explained by the differences in the algorithms themselves. In brief, both of them are based on expression of the angle cosine: where r and s are the number of parts in the numerator and denominator of the balance,v 1 i (i = 1, . . ., r) andv 2 j (j = 1, . . ., s) are clr-components of the vector (PC1) related to them, and kÁk denotes the Euclidean norm which can be removed from the objective function. The NearestBalance algorithm searches through all possible sizes of r and s of the two groups of parts included in the balance; on each step, the r and s are fixed, and thus, the cosine is maximized by including r parts with maximal clr-components of PC1 in one group and s parts with minimum ones in the other group. The constrained algorithm searches through all possible total numbers of parts in a balance, i.e., through values of n = r 1 s from 2 to D. At each step the parts are sequentially included in the balance in the order of their absolute values of PC1 clr-components, and they are related to the numerator or denominator with respect to the sign of the components. The only exception is the first step, when the two-component composition is constructed of the parts with the highest and the lowest clr-components of PC1 whichever absolute values they have. Both methods calculate the cosine at each step of their search, and then the optimal number of parts is selected. The main difference is that the constraint algorithm does not compare all variants of r and s that sum to fixed n; it takes the balance obtained on the previous step (r 1 s = n 2 1) and adds the part of composition with the highest absolute value of the approximated vector's clr-component.
Thus, the constrained algorithm searches through a substantially smaller subset of balances. Figure 1 illustrates the difference on the Crohn's disease data set. The incompleteness of the search is the source of suboptimality of the constrained method: it does not find the optimal solution because it does not include the appropriate (r, s) pair in the comparison.
On the other hand, this incompleteness makes the constrained algorithm substantially faster. It needs D 2 1 steps for a D-part composition; the complexity of NearestBalance is proportional to (D 2 1) 2 .
To sum it up, the two algorithms-NearestBalance and constrained-aim to find the nearest balance to a compositional vector. Only the NearestBalance actually provides it, while the constrained method finds a suboptimal approximation. However, in practice the constrained algorithm may yield a result quite similar to the optimal one, it is faster (especially for high-dimensional compositions), and additionally it creates a complete orthonormal log-ratio basis, while the NearestBalance in its current implementation provides only the two first principal balances.
Noteworthy, the authors of reference 3 were the first ones who suggested using the nearest balance for PBA. We apologize for not acknowledging this fact in reference 1. Our paper (1) suggests an algorithm which finds exactly the nearest balance, suggests a wider use to the approach, and provides grounds for a special case of combination with regression analysis.
Function find_nearest_balance_clr() from the NearestBalance R package was used for approximation of the first principal balance by the algorithm described in reference 1. Function pb_basis() with the 'constrained' method from the coda.base package was Letter to the Editor mSystems used for the constrained PBA. The constrained algorithm was additionally implemented as a standalone function which takes a CLR-vector as an input, because the coda.base package contains only its application to PBA. We ensured that for the Crohn's disease data set, the new function returns exactly the same result as pb_basis(). The comparison code is available at https://bitbucket.org/knomics/nearest_balance_for _paper. | 1,978.2 | 2023-03-13T00:00:00.000 | [
"Mathematics"
] |
Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems
Large transformer models can highly improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: How can we make the AS2 models more accurate without significantly increasing their model complexity? To address the question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike traditional distillation technique, each of them is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models by using just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling the state-of-the-art large AS2 models that have 2.7x more parameters and run 2.5x slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus
Introduction
Answer Sentence Selection (AS2) is a core task for designing efficient retrieval-based Web QA systems: given a question and a set of answer sentence candidates (e.g., retrieved by a search engine), AS2 models select the sentence that correctly answers the question with the highest probability.
AS2 research originated from the TREC competitions (Wang et al., 2007), which targeted large amounts of unstructured text.AS2 models are very efficient, and can enable Web-powered question The model consists of a shared encoder body and multiple ranking heads.CERBERUS independently scores up to hundreds candidate answers a i for question q; The one with highest likelihood is selected as answer.
answering systems of real-world virtual assistants such as Alexa, Google Home, Siri, and others.
As most research areas in text processing and retrieval, AS2 has been dominated by the use of ever larger transformer model architectures (Vaswani et al., 2017).These models are typically pre-trained using language modeling tasks on large amounts of text (Devlin et al., 2019;Liu et al., 2019;Conneau et al., 2019), and then fine-tuned on specific downstream tasks (Wang et al., 2018(Wang et al., , 2019;;Hu et al., 2020).Garg et al. (2020) achieved an impressive accuracy by fine-tuning pre-trained Transformers to the AS2 task on the target datasets.They established the new state of the art performance for AS2 using a RoBERTa LARGE model.
Unfortunately, larger transformer models come at a cost: they require large computing resources, consume a lot of energy (critically impacting the environment (Strubell et al., 2019)), and may have unacceptable latency and/or memory usage.These downsides are critical for AS2 applications, where, for any given query, a model is required to score hundreds or thousands of candidates to select the top-k answers.Therefore, in this work, we investigate how AS2 models can be made more accurate without significantly increasing their complexity.
Previous work has addressed the general problem of high computational cost of transformer models by developing techniques for reducing their overall size while maintaining most of their performance (Polino et al., 2018;Liu et al., 2018;Li et al., 2020).In particular, Knowledge Distillation (KD) techniques have been shown to be particularly effective (Sanh et al., 2019;Turc et al., 2019;Sun et al., 2019Sun et al., , 2020;;Yang et al., 2020;Jiao et al., 2020).KD techniques use a larger model, known as a teacher, to obtain a smaller and thus more efficient model, known as a student (Hinton et al., 2015).The student is trained to mimic the output of the teacher.However, we empirically show that, at least for AS2, BASE models trained through distillation are still significantly behind the state of the art, i.e., models based on LARGE transformers.
In this paper, we introduce a new transformer model for AS2 that matches the state of the art while being dramatically more efficient.Our main idea is based on the following considerations: first, in recent years, several transformer model families have been introduced, each pretrained using different datasets and modeling techniques (Rogers et al., 2021).Second, ensembling several diverse models has shown to be an effective way to improve performance in many question answering and ranking tasks (Xu et al., 2020;Zhang et al., 2020;Liu et al., 2020;Lin and Durrett, 2020).Our contribution lies in a new approach to approximate a computationally expensive ranking ensemble into a single efficient architecture for AS2 tasks.
More specifically, our investigation proceeds as follows.First, we optimize ranking architectures for AS2 by training k student models to replicate k unique teacher architectures.When ensembled, we show that they achieve better performance than any standalone models at the cost of increased computational burden.Then, to preserve the accuracy of this ensemble while achieving lower complexity, we propose a new Multiple Heads Student architecture, which we refer to as CERBERUS.As shown in Fig. 1, CERBERUS is composed of a shared encoder body and multiple ranking heads.The encoder body is designed to derive a shared representation of input sequences, which gets fed to ranking heads.We show that if each ranking head is trained to mimic a unique teacher distribution, it is possible to achieve the desirable diversity through ensemble model while being significantly more efficient.
We train a CERBERUS model using three different teachers: RoBERTa (Liu et al., 2019), ELEC-TRA (Clark et al., 2019), and ALBERT (Lan et al., 2019).We conduct experiments on three AS2 datasets: ASNQ (Garg et al., 2020), WikiQA (Yang et al., 2015), and an internal corpus (IAS2).Our results show that CERBERUS consistently improves over all models trained with single teachers, rivaling performance of much larger models including multiple variants of ensemble models; further, CER-BERUS matches current state-of-the-art AS2 models (TANDA by Garg et al. (2020)), while saving 64% and 60% in model size and latency, respectively.
In summary, our contribution is four-fold: (i) We propose CERBERUS, an efficient architecture specifically designed to distill an ensemble of heterogeneous transformer models into a single transformer model for AS2 tasks while preserving ensemble diversity.
(ii) We conduct large-scale experiments with multiple transformer model families and show that CERBERUS improves performance of equally sized distilled model, rivaling much larger ensemble and state-of-the-art AS2 models.
(iii) We discuss various training methods for CER-BERUS and show three key factors to improve AS2 performance: (a) multiple ranking heads in CERBERUS, (b) multiple teachers, and (c) heterogeneity in teacher models.
(iv) We present a comprehensive analysis of the CERBERUS, both in terms of ranking behavior and efficiency, highlighting the effect of several design decisions on its performance.
2 Related Work
Answer Sentence Selection (AS2)
Several approaches for AS2 have been proposed in recent years.Severyn and Moschitti (2015) used CNNs to learn and score question and answer representations, while others proposed alignment networks (Shen et al., 2017;Tran et al., 2018;Tay et al., 2018).Compare-and-aggregate architectures have also been extensively studied (Wang and Jiang, 2016;Bian et al., 2017;Yoon et al., 2019;Matsubara et al., 2020 Previous studies on transformer distillation have also leveraged its intermediate representation (Sun et al., 2019(Sun et al., , 2020;;Jiao et al., 2020;Mukherjee and Awadallah, 2020;Liang et al., 2020).These approaches typically lead to more accurate performance, but severely limit which pairing of teacher and students can be used (e.g., same transformer family/tokenization, identical hidden dimensions).
Ensemble Distillation
Yang et al. (2020) discussed two-stage multiteacher knowledge distillation for QA tasks.Similarly, Jiao et al. (2020) used BERT models as teachers for their proposed model, TinyBERT, in a two-stage learning strategy.Unlike their twostage approach, our study focuses on distilling the knowledge of multiple teachers while preserving the individual teacher distributions.Furthermore, we explore several pretrained transformer models for knowledge distillation instead of focusing on a specific architecture.More recently, Allen-Zhu and Li (2020) formally proved that an ensemble of models of the same family can be distilled into a single model while retaining the same performance of the ensemble; however, their experiments are exclusively focus on ResNet models for image classification tasks.Kwon et al. (2020) tried to dynamically select, for each training sample, one among a set of teachers.These studies focus distillation on models that strictly share the same architecture and training strategy, which we show not achieving the same accuracy as our CERBERUS model.
Multi-head Transformers
To the best of our knowledge, no previous work discusses multi-head transformer models for ranking problems; however, some related works exist for classification tasks.TwinBERT (Lu et al., 2020)
Methodology
We build up to introducing CERBERUS by first formalizing the AS2 task (Section 3.1), and then summarizing typical transformer distillation and ensembling techniques (Section 3.2).Finally, details of the CERBERUS approach are explained in Section 3.3.
Training Transformer Models for Answer Sentence Selection (AS2)
The AS2 task consists of selecting the correct answer from a set of candidate sentences for a given question.Like many other ranking problems, it can be formulated as a max element selection task: given a query q ∈ Q and a set of candidates select a j that is an optimal element for q.We can model the task as a selector function π : Q × P(A) → A, defined as π(q, A) = a j , where P(A) is the powerset of A, j = argmax i (p(a i |q)), and p(a i |q) is the probability of a i to be the required element for q.In this work, we evaluate CERBERUS, as well as all our baselines, as an estimator for p(a i |q) for the AS2 task.In the remainder of this work, we formally refer to an estimator by using a uppercase calligraphy letter and a set of model parameters Θ, e.g., M Θ .We fine-tune three models to be used as a teacher T Θ : RoBERTa LARGE , ELECTRA LARGE , and ALBERT XXLARGE .The first two share the same architecture, consisting of 24 layers and a hidden dimension of 1,024, while ALBERT XXLARGE is wider (4,096 hidden units) but shallower (12 layers).All three models are optimized using cross entropy loss in a point-wise setting, i.e., they are trained to maximize the log likelihood of the binary relevance label for each answer separately.
While approaches that optimize the ranking over multiple samples (such as pair-wise or listwise methods) could also be used (Bian et al., 2017), they would not change the overall findings of our study; further, point-wise methods have been shown to achieve competitive performance for transformer models (MacAvaney et al., 2019).
When training models for the IAS2 and WikiQA datasets, we follow the TANDA technique introduced by Garg et al. (2020): models are first finetuned on ASNQ to transfer to the QA domain, and then adapted to the target task.
Besides the three teacher models, we also train their equivalent BASE version, namely RoBERTa BASE , ELECTRA BASE , and ALBERT BASE .These baselines serve as a useful comparison for measuring the effectiveness of distillation techniques.
Distilled Models and Ensembles
Knowledge distillation (KD), as defined by Hinton et al. (2015), is a training technique which a larger, more powerful teacher model T Θ is used to train a smaller, more efficient model, often dubbed as student model S Θ .S Θ is typically trained to minimize the difference between its output distribution and the teacher's.If labeled data is available, it is often used in conjunction with the teacher output as it often leads to improved performance (Ba and Caruana, 2014).In these cases, we train S Θ using a soft loss with respect to its teacher and a hard loss with respect to the human-annotated labels.
To distill the three LARGE models introduced in Section 3.1, we use the loss formulation from Hinton et al. (2015), as it performs comparably to other, more recent distillation techniques (Tian et al., 2019).Given a pair of input sequence x and the target label y, it is defined as follows: where α and τ indicate a balancing factor and temperature for distillation, respectively.We independently tune hyperparameters α ∈ {0.0, 0.1, 0.5, 0.9} and τ ∈ {1, 3, 5} for each dataset on their respective dev sets.As previously mentioned, we use cross entropy as hard loss L H for all our experiments.L S is a soft loss function based on the Kullback-Leibler divergence KL(p(x), q(x)), where p(x) and q(x) are softened-probability distributions of teacher T Θ and student S Θ models for a given input x, that is, ] defined as follows: where C indicates a set of class labels.
Using the technique described above, we distill three LARGE models into their corresponding BASE counterparts: i.e., ALBERT BASE from ALBERT XXLARGE , and so on.Furthermore, we create an ensemble of BASE models by linearly combining their outputs; hyperparameters for ensembles were tuned by Optuna (Akiba et al., 2019).
Finally, we build another ensemble model of three ELECTRA BASE distilled from the three LARGE models mentioned above.As we will show in Section 4, ELECTRA BASE outperforms all other BASE models; therefore, we are interested in measuring whether it could be used for inter transformer family model distillation.Once again, Optuna was used to tune the ensemble model.
We note that the ensemble of the three LARGE models is not used as a teacher.In our preliminary experiment, we found that the ensemble is not a good teacher, as the model was too confident in its prediction, a trend that is studied by Panagiotatos et al. (2019).Most softmaxed categoryprobabilities by the ensemble model are close to either 0 or 1 and behave like hard-target rather than soft-target, which did not improve over the KD baselines (rows 7-9) in Table 2.
CERBERUS: Multiple-Heads Student
As mentioned in the previous section, students trained using different teachers can be trivially en- sembled using a linear combination of their outputs.However, this results in a drastic increase in model size, as well as a synchronization latency overhead, which are both undesirable properties in many applications.In this section, we introduce CERBERUS, a transformer architecture designed to emulate the properties of an ensemble of distilled models while being more efficient.As illustrated in Fig. 2, our CERBERUS model consists of two components: (i) an input encoder comprised of stacked transformer layers, and (ii) a set of k ranking heads, each designed to be trained with respect to a specific teacher.Each ranking head is comprised of one or more transformer layers; it receives as input the output of the shared encoder, and produces classification output.To obtain its final prediction, the CERBERUS averages the outputs of its ranking heads.A schematic representation of CERBERUS is shown in Figure 2.
Formally, let M Θ be a pretrained transformer1 of n layers.To obtain a CERBERUS model, we first split the model into two groups: the first b blocks are used for the shared encoder body B b , while the next h = (n − b) blocks are replicated and assigned as initial states for each head H i h , i = {1, . . ., k}.To compute the output for the i th head, we first encode an input x using B b , and then use it as input to H i h .To train CERBERUS, we use a linear combination of k loss functions, each of which uses output from a different ranking head: ) where λ i and L i are the weight and loss function for the i-th head in the CERBERUS model.Specifically, we apply the loss function of Equation 1 to each head, i.e., L i = L KD for the i th head-teacher pair.We note that, while the encoder body and all ranking heads are trained jointly, each head is optimized only by its own loss.Conversely, when backpropagating L CERBERUS , the parameters of the encoder body are affected by the output of all k ranking heads.This ensures that each head learns faithfully from their teacher while the parameters of the encoder body remain suitable for the entire model.
For inference, a single score for CERBERUS is obtained by averaging the outputs of all ranking heads: score In our experiments, we use k = 3 heads, each trained with one of the LARGE models described in Section 3.1.We discuss a variety of combination for values of b and h; the performance for each configuration is analyzed in Section 5.4.For training, we set λ i = 1 for all i = {1, . . ., k} and reuse the search space of the hyperparameters α and τ for knowledge distillation (see Section 3.2).
Datasets
While many studies on Transformer-based models (Devlin et al., 2019;Liu et al., 2019;Clark et al., 2019;Lan et al., 2019) are assessed for GLUE tasks (10 classification and 1 regression tasks), our interests are in ranking problems for question answering such as AS2.To fairly assess the AS2 performance of our proposed method against conventional distillation techniques, we report experimental results on a set of three diverse English AS2 datasets: Wik-iQA (Yang et al., 2015), a small academic dataset that has been widely used; ASNQ (Garg et al., 2020), a much larger corpus (3 orders of magnitude larger than WikiQA) that allow us to assess models' performance in data-unbalanced settings; finally, we measure performance on IAS2, an internal dataset we constructed for AS2.Compared to the other two corpora, IAS2 contains noisier data and is much closer to a real-world AS2 setting.Table 1 reports the statistics of the datasets, and more details are described in Appendix.
Evaluation Metrics
We assess AS2 performance on ASNQ, WikiQA and IAS2 using three metrics: mean average precision (MAP), mean reciprocal rank (MRR), and precision at top-1 candidate (P@1).The first two metrics are commonly used to measure overall performance of ranking systems, while P@1 is a stricter metric that captures effectiveness of high-precision applications such as AS2.
Our models are implemented with PyTorch 1.6 (Paszke et al., 2019) using Hugging Face Transformers 3.0.2(Wolf et al., 2020); all models are trained on a machine with 4 NVIDIA Tesla V100 GPUs, each with 16GB of memory.Latency benchmarks are executed on a single GPU to eliminate variability due to inter-accelerator communication.
Results
Here we present our main experimental findings.In Section 5.1, we compare CERBERUS to stateof-the-art models and other distillation techniques using three datasets (IAS2, ASNQ, WikiQA).In Sections 5.2 -5.4,we motivate our design and hyperparameter choices for CERBERUS by empirically validating them.Finally, in Section 5.5, we discuss inference latency of CERBERUS comparing to other transformer models.
Answer Sentence Selection Performance
The performance of CERBERUS on IAS2, ASNQ, and WikiQA datasets are reported in Table 2. Specifically, we compared our approach (row 14) to four groups of baselines: larger transformerbased models (rows 1-3), including the state-ofthe-art AS2 models by Garg et al. (2020) (rows 2 and 5); equivalently sized models, either directly fine-tuned on target datasets (rows 4-6), or distilled using their corresponding LARGE model as teacher (rows 7-9); ensembles of BASE models (rows 10-12).We also adapted the ensembling technique of Hydra (Tran et al., 2020), which is originally designed for image recognition, to work in our AS2 setting2 and used it as a baseline (row 13).All the comparisons are done with respect to a B 11 3H 1 CERBERUS model initialized from an ELECTRA BASE model: performance of other model configurations are discussed in Section 5.4.Due to the volume of experiments, we train a model with a random seed for each model given a set of hyperparameters and report the AS2 performance with the best hyperparameter set according to each dev set.
Vs. TANDA (BASE) & Single-Model Distillation
We find BASE models trained by TANDA (rows 4-6), the state-of-the-art training method for AS2 tasks, are further improved (rows 7-9) by introducing knowledge distillation to its 2nd fine-tuning stage.Our CERBERUS achieves a significantly improvement over all single BASE models for all the considered datasets (Wilcoxon signed-rank test, p < 0.01).We empirically show in Section 5.2 that this significant improvement was achieved by both the architecture of our CERBERUS and using heterogeneous teacher models rather than a small amount of extra parameters.
Vs. Ensembles & Hydra
For all the datasets we considered, our CER-BERUS achieves similar or better performance of much larger ensemble models, including an ensemble of ALBERT BASE , RoBERTa BASE , and ELECTRA BASE trained with and without distillation (rows 10 and 11), as well as the ensemble of three ELECTRA BASE models each trained using ALBERT XXLARGE , RoBERTa LARGE , and ELECTRA LARGE as teachers (row 12).We also note that CERBERUS outperforms our adaptation of Hydra (Tran et al., 2020) (row 13), which emphasizes the importance of using heterogeneous teacher models for AS2.
Are Multiple Ranking Heads and Heterogeneous Teachers Necessary?
Using the heterogeneous teacher models shown in Table 2, we discuss how AS2 performance varies when using different combinations of teachers for knowledge distillation.The first method, KD Sum , simply combines loss values from multiple teachers to train a single transformer model, similarly to the task-specific distillation stage with multiple teachers in Yang et al. (2020).In the second method, KD RR , we switch teacher models for each training batch in a round-robin style; i.e., the student transformer model will be trained with the first teacher model in the first batch, with the second teacher model in the second batch, and so forth.
Table 3 compares performance of the multipleteacher knowledge distillation strategies described above to that of our proposed method; we also evaluate the effect of using one teacher per head, rather than a single teacher (ELECTRA LARGE ), on CERBERUS.For ELECTRA BASE , we found that KD Sum method slightly outperforms KD RR ; this result highlights the importance of leveraging multiple teachers for knowledge distillation in the same mini-batch.For CERBERUS, we found that using multiple heterogeneous teachers (specifically, one per ranking head) is crucial in achieving the best performance; without it, CERBERUS B 11 3H 1 achieves the same performance of ELECTRA BASE despite having more parameters.Besides these two trends, the results of rows 13 and 14 in emphasize the importance of heterogeneity in the set of teacher models.As a result, CERBERUS B 11 3H 1 performs the best and achieves the comparable performance with some of the teacher (LARGE) models, while saving between 45% and 63% of model parameters.From the aforementioned three trends, we can confirm that the improved AS2 performance was achieved thanks to the multiple ranking heads in CERBERUS, the use of multiple teachers, and heterogeneity in teacher model families; on the other hand, the slightly increased parameters compared to ELECTRA BASE did not contribute to performance uplift.
Do Heads Resemble Their Teachers?
To better understand the relationship between CER-BERUS's ranking heads and the teachers used to train them, we analyze the top candidates chosen by each of teacher and student models.Figure 3 shows how often each CERBERUS head agrees with its respective teacher model.To calculate agreement, we normalize number of correct candidates heads and teachers agree on by the total number of correct answer for each head.
Intuitively, we might expect that ranking heads would agree the most with their respective teachers; however, in practice, we notice that the highest agreement for all heads is measured with ELECTRA LARGE .However, one should consider that the agreement measurement is confounded by the fact that all heads are more likely to agree with the head that is correct the most (ELECTRA LARGE ).Furthermore, in all our experiments, CERBERUS is initialized from a pretrained ELECTRA BASE , which also increase the likelihood of agreement with ELECTRA LARGE .Nevertheless, we do note that both the head distilled from ALBERT XXLARGE and from RoBERTa BASE achieve high agreement with their teachers, suggesting that CERBERUS ranking heads do indeed resemble their teachers.
In our experiments, we also observed that CER-BERUS is able to mimic the behavior of an ensemble comprised of the three large models; for example, on the WikiQA dataset, CERBERUS always predicts the correct label when all three models are correct (197/243 queries), it follows majority voting in 17 cases, and in one case it overrides the majority voting when one of the teachers is very confident.In the remaining cases, either only a minority or no teachers are correct, or the confidence of the majority is low.
How Many Blocks Should Heads Have?
In Table 2, we examined the performance of a CER-BERUS model with configuration B 11 3H 1 ; that is, a body composed of 11 blocks, and 3 ranking heads with one transformer block each.In order to understand how specific hyperparameters setting for CERBERUS influences model performance, we examine different CERBERUS configurations in this section.Due to space constraints, we only report results on IAS2; we observed similar trends on ASNQ and IAS2.In order to keep a latency comparable to that of other BASE models, we keep the total depth of CERBERUS constant, and vary the number of blocks in the ranking heads and shared encoder body.Table 4 shows the results for alternative CER-BERUS configurations.Overall, we noticed that the performance is not significantly affected by the specific configuration of CERBERUS, which yields consistent results regardless of the number of transformer layers used (1 to 6, B 11 3H 1 to B 6 3H 6 ).All CERBERUS models are trained with a combination of hard and soft losses, which makes it more likely to have different configurations converge on a set of stable but similar configurations.Despite the similar performance, we note that B 6 3H 6 is comprised of significantly more parameters than our leanest configuration, B 11 3H 1 (199M vs 124M).Given the lack of improvement from the additional parametrization, all experiments in this work were conducted by using 11 shared body blocks and 3 heads, each of which consists 1 block (B 11 3H 1 ).
Benchmarking Inference Latency
Besides AS2 performance, we examine the inference latency for CERBERUS and models evaluated in Section 5.1, using an NVIDIA Tesla V100 GPU.The results are summarized in Table 5.For a fair comparison between the models, we used the same batch size (128) for all benchmarks, and ignored any tokenization and CPU/GPU communication overhead while recording wall clock time.Overall, we confirm that CERBERUS achieves a comparable latency of other BASE models.All four are within the one standard deviation of each other.
All the LARGE models including the state-ofthe-art AS2 model (RoBERTa LARGE by Garg et al. (2020)) produce significantly higher latency, (on average, 3.4× slower than CERBERUS); specifically, ALBERT XXLARGE , which is comprised of 12 very wide transformer blocks, shows the worst latency among single models.Further, the latency of the two ensemble models are comparable to some of the LARGE models, thus supporting our argument that they are not suitable for high performance
Conclusions and Future Work
In this work, we introduce a technique for obtaining a single efficient AS2 model from an ensemble of heterogeneous transformer models.This efficient approach, which we call CERBERUS, consists of a sequence of transformer blocks, followed by multiple ranking heads; each head is trained with a unique teacher, ensuring proper distillation of the ensemble.Results show that the proposed model outperforms traditional, single teacher techniques, rivaling state-of-the-art AS2 models while saving 64% and 60% in model size and latency, respectively.CERBERUS enables LARGE-like AS2 accuracy while maintaining BASE-like efficiency.Further analysis demonstrates that reported improvements in AS2 performance are due to to three key factors: (i) multiple ranking heads, (ii) multiple teachers, and (iii) heterogeneity in teacher models.
Future work would focus on two key aspects: how CERBERUS performs on non-ranking tasks, and whether it could achieve similar improvements in ranking tasks outside QA.For the former, we remark that, while the core idea of CERBERUS can be extended to tasks such as those in the GLUE benchmark (Wang et al., 2018), further investigation is necessary in establishing the best set of trade-offs for different objectives and metrics.A similar concern exists in the case of extending CERBERUS to ranking tasks, such as ad-hoc retrieval.
Limitations
In this study, we discussed the experimental results and empirically showed the effectiveness of our proposed approach for English datasets only.While this is a major limitation of the study, our approach is not specific to English, thus it could be extended in the future using models in other languages, although improvements might not translate to less resource-rich languages.
As described in Section 4.2, our experiments are compute-intensive and have been conducted on 4 NVIDIA V100 GPUs.Thus, researchers with less compute might not be able to replicate CERBERUS.
Next, all models we present in this work are trained to optimize answer relevance to a given question.Therefore, they might be unfair towards protected categories (race, gender, sex, nationality, etc.) or present answers from a biased point of view.Our work does not address this challenge.
Finally, we evaluated our approach only in the context of answer sentence ranking; thus, the reader might be left wondering whether such an approach would work for other tasks.We note that, although a study on the general applicability of our approach is very interesting and needed, it would require more space than a conference submission has in order to be accurately described and evaluated.Therefore, we leave further investigation of CERBERUS on other domains and tasks as future work.Soldaini and Moschitti (2020).
C IAS2
This is an in-house dataset, called Internal Answer Sentence Selection, we built as part of our efforts of understanding and benchmarking web-based question answering systems.To obtain questions, we first collected a non-representative sample of queries from traffic log of our commercial virtual assistant system.We then used a retrieval system containing hundreds of million of web pages to obtain up to 100 web pages for each question.From the set of retrieved documents, we extracted all candidate sentences and ranked them using AS2 models trained by TANDA Garg et al. (2020); at least top-25 candidates for each question are annotated by humans.Overall, IAS2 contains 6,939 questions and 283,855 candidate answers.We reserve 3,000 questions for evaluation, 808 for development, and use the rest for training.Compared to ASNQ and WikiQA, whose candidate answers are mostly from Wikipedia pages, IAS2 contains answers that are from a diverse set of pages, which allow us to better estimate robustness with respect to content obtained from the web.
D Common Training Configurations
Besides the method-specific hyperparameters described in Sections 3.2 and 3.3, we describe training strategies and hyperparameters commonly used to train AS2 models in this study.Unless we specified, we used Adam optimizer (Kingma and Ba, 2015) with a linear learning rate scheduler with warm up3 to train AS2 models.The number of training iterations was 20,000, and we assess a AS2 model every 250 iterations using the dev set for validation.If the dev MAP is not improved within the last 50 validations4 , we terminate the training session.As described in Section 5.1, we independently tuned hyperparameters based on the dev set for each dataset, including an initial learning rate {10 −6 , 10 −5 } and batch size {8, 16, 24, 32, 64}.Note that we train AS2 models on the ASNQ dataset for 200,000 iterations due to the size of the dataset.
For model configurations, we used the default configurations available in Hugging Face Transformers 3.0.2(Wolf et al., 2020).For instance, the number of attention heads are 12 and 64 for ALBERT BASE and ALBERT XXLARGE , 12 and 16 for RoBERTa BASE and RoBERTa LARGE , and 12 and 16 for ELECTRA BASE and ELECTRA LARGE , respectively.In this paper, we designed CERBERUS leveraging the default ELECTRA BASE architecture, thus the number of attention heads is 12.
Figure 1 :
Figure1: CERBERUS model for answer sentence selection.The model consists of a shared encoder body and multiple ranking heads.CERBERUS independently scores up to hundreds candidate answers a i for question q; The one with highest likelihood is selected as answer.
Figure 2 :
Figure 2: Detailed overview of CERBERUS model that consists of a shared encoder body of b transformer layers, followed by k ranking heads of h layers each; we use notation B b kH h to identify a CERBERUS configuration.All heads are jointly trained, but each head learns from a unique teacher model; at inference time, predictions from heads are combined by a pooler layer.
Figure 3 :
Figure 3: Agreement between heads and their teacher model in CERBERUS.It is obtained by diving the number of correct candidates each head and teacher agree on by the total number of correct answer for each head.
Table 3 :
Comparison of single and multiple teachers distillation for ELECTRA BASE and CERBERUS B 11 3H 1 models on the IAS2 test set.Overall, we found that combining the CERBERUS architecture with multiple teachers is essential to achieve the best performance.
Garg et al. (2020)., 2019)8, Lisbon, Portugal.Association for Computational Linguistics.Garg et al. (2020)introduced Answer Sentence Natural Questions, a large-scale answer sentence selection dataset.It was derived from the Google Natural Questions (NQ)(Kwiatkowski et al., 2019), and contains over 57k questions and 23M answer candidates.Its large-scale (at least two orders of magnitude larger than any other AS2 dataset) and class imbalance (approximately one correct answer every 400 candidates) properties make it particularly suitable to evaluate how well our models generalize.Samples in Google NQ consist of tuples ⟨question, answer long , answer short , label⟩, where answer long contains multiple sentences, answer short is fragment of a sentence, and label indicates whether answer long is correct.Google NQ has long and short answers for each question.To construct ASNQ,Garg et al. (2020)labeled any sentence from answer long that contains answer short as positive; all other sentences are labeled as negative.The original release of ANSQ only contains train and development splits; We use the dev and test splits introduced by B ASNQ | 7,784 | 2022-01-15T00:00:00.000 | [
"Computer Science"
] |
The Role of Epsilon Near Zero and Hot Electrons in Enhanced Dynamic THz Emission from Nonlinear Metasurfaces
We study theoretically and experimentally the nonlinear THz emission from plasmonic metasurfaces and show that a thin indium-tin oxide (ITO) film significantly affects the nonlinear dynamics of the system. Specifically, the presence of the ITO film leads to 2 orders of magnitude stronger THz emission compared to a metasurface on glass. It also shows a different power law, signifying different dominant emission mechanisms. In addition, we find that the hot-electron dynamics in the system strongly modify the coupling between the plasmonic metasurface and the free electrons in the ITO at the picosecond time scale. This results in striking dynamic THz emission phenomena that were not observed to date. Specifically, we show that the generated THz pulse can be shortened in time and thus broadened in frequency with twice the bandwidth compared to previous studies and to an uncoupled system. Our findings open the door to design efficient and dynamic metasurface THz emitters.
R ecently, surprisingly efficient THz emission following femtosecond laser excitation of nonlinear plasmonic metasurfaces has been reported. 1 The magnitude of the field emitted from an ultrathin gold metasurface was shown to be comparable to that emitted from an orders of magnitude thicker zinc telluride (ZnTe) nonlinear crystal. Taking advantage of this effect, metasurfaces allowing phase control for generation of spatiotemporally tailored THz wavepackets have been demonstrated. 2−5 However, the underlying physical mechanisms that enable such efficient THz emission are still not fully understood. Several processes, such as ponderomotive acceleration of photoejected electrons, either by multiphoton ionization or tunneling ionization, as well as optical rectification (OR) were proposed as the dominant mechanisms in the THz emission. 1,6−14 Yet, a deeper understanding is still required to fully account for all the observations. In many works that study THz emission from plasmonic metasurfaces, the metasurfaces are fabricated on thin ITO films, which are commonly used in the electron beam lithography process. 1−4 Until recently, this layer was generally disregarded, and the nonlinear emission was considered to arise solely from the plasmonic nanostructures. 1,8,13,14 However, the permittivity of ITO changes its sign from positive to negative in the near-infrared (NIR) region. 15−17 It is also tunable and can be shifted up to the mid-infrared range by annealing in various atmospheric oxygen environments. 18,19 This zero-crossing point coincides with the excitation wavelengths of some of the studied nonlinear metasurfaces and their resonant response. It was shown that at the epsilon near zero (ENZ) region, ITO as well as other materials possess strong optical nonlinearities and exhibit unusual properties, thus make promising candidates for new applications in both linear and nonlinear optics. 20−27 A plethora of enhanced nonlinear effects were demonstrated in ITO films, such as second-harmonic generation (SHG), 28 high harmonic generation, 29 a nonlinear Kerr effect, 30 and very recently also THz generation. 31 However, since the amplification of the nonlinear effects is attributed to the enhancement of the normal component of the electric field at the ENZ region, they can only be observed when pumped at oblique incidence. To circumvent this constraint, hybrid metasurfaces constructed from plasmonic nanoantennas coupled to the ENZ material were designed and showed remarkably large second-and third-order nonlinearities. 5,32,33 Here, we study the role of ITO and hot-electron dynamics in the THz emission from nonlinear plasmonic metasurfaces. To get better insight, we compare the emission from gold split ring resonator array (SRRs) metasurfaces, fabricated on a thin layer (∼20 nm) of ITO (referred to as SRR-ITO throughout this work) and on a bare SiO 2 substrate (referred to as SRR-Glass). Figure 1a illustrates the unit cell structure. More details on the fabrication process along with SEM images of the fabricated samples are given in the Supplementary Notes 1 and 2.
We start by characterizing the linear response of the samples. Figure. 1(b,c) presents the polarized transmission spectra of the SRR-ITO and SRR-Glass samples, respectively. It can be seen that the SRR-Glass metasurface exhibits one resonance at λ res ≈ 1500 nm when irradiated along the base of the SRRs (E in x) and no resonance when excited along the arms (E in y). On the other hand, the SRR-ITO metasurface exhibits two resonances at 1240 and 1550 nm for x-polarized illumination and a single-resonance dip around 1400 nm for y-polarized illumination. These results suggest that the nanoantenna modes of the SRRs are strongly coupled with the ITO mode. To further examine the purported coupling, we simulated the linear transmission as a function of the ITO thickness for different SRR dimensions (see Supplementary Note 2). These simulations indicate the strong coupling of the SRR mode and the ITO ENZ mode. The coupling is affected by both the widths of the arms and the ITO thickness. Therefore, by tuning the dimensions of the nanoparticles and the thickness of the ITO, it is possible to control the coupling of the system. For this reason, several previous works observed single-resonance transmission spectra, although the metasurfaces used were fabricated on ITO substrates 1,2 Having described the linear properties of the metasurfaces, we turn to the nonlinear ones. We excite the metasurfaces with NIR femtosecond pulses (see Supplementary Note 3), to generate single-cycle THz signals. The emitted signal is characterized by a time domain spectroscopy system (TDS) based on electro-optic sampling (Figure 2a). THz emission is an even order nonlinear effect requiring breaking of inversion symmetry. 34 It had been shown that due to the mirror symmetry along the base of the SRRs, the excitation configuration gives rise to nonlinear currents along the arms, 1,13,35 and therefore, the emitted field is highly polarized along y. Figure 2b,c shows the time-and frequency-resolved THz signal respectively, emitted from 1 × 1 mm 2 uniform SRR-ITO and SRR-Glass metasurfaces, following pumping with 30 mW at a central wavelength of λ p = 1500 nm. The emitted signal shows a single-cycle THz pulse with a pulse duration of about 1 ps. The signal peaks at ∼0.75 THz and extends to above 2.5 THz. It can be seen that the THz field generated from the SRR-Glass sample is significantly weaker than that of the SRR-ITO sample. This can be attributed to the strong nonlinearities of the ITO and field enhancement in the ENZ mode, 26,27 as further discussed below.
The dependence of the generated THz intensity on the pumping power from the different samples reveals some unique properties, as shown in Figure 2d. First, it can be seen that the thin ITO film enhances the THz intensity up to 2 orders of magnitude (for a pumping power of 30 mW). In addition, the power law dependencies of the THz emission from metasurfaces on glass and ITO are different. This can be attributed to different dominant THz generation mechanisms in the two cases. The SRR-Glass sample shows a fourth power (x 4 ) dependency, which can be explained by ponderomotive acceleration of photoejected electrons as was previously proposed. 8 On the other hand, in the case of SRR-ITO, we observe a quadratic power dependence for up to ∼40 mW pump power. This suggests a second-order nonlinear process, optical rectification, as has been previously suggested, 1,12 and settles the disagreement between the previous reports 1, 8 (see also supplementary of ref 1).
At higher pumping power of the SRR-ITO sample, saturation is observed (Figure 2d). This saturation behavior is reversible and may point on dynamic effects involved in the generation process. To verify that these observations are unique to the THz emission, we also measured the secondharmonic generation from the samples. The results of the SHG measurements are shown in Figure 2e. In both samples, the intensity of the generated SH exhibits a quadratic dependence on the pump power, as expected. In addition, the SRR-ITO metasurface enhances the SH generation compared to SRR-Glass (up to ∼4 fold), which agrees with previous reports. 33 This comparison shows that the ITO plays a more dominant role in the THz enhancement and may be explained by the large intrinsic nonlinearities arising from hot carriers in the THz regime that far exceed the fast nonlinearities in ITO. 26 In order to explain the unique observations of the emission from the coupled SRR-ITO metasurface, we consider a nonlinear hydrodynamic model, which treats the electrons in the material as a fluid that obeys Euler's equation. 14,35 Using this model, we describe the second-order nonlinearity that arises in the metal nanoparticles as well as in the ITO layer (see Supplementary Note 4). The nonlinear currents generated by the OR act as the driving source of the THz emission.
Using this method, we are able to correctly predict the spectrum of the emitted field at low pumping powers, as presented in Figure 3a, as well as the quadratic power dependence (Figure 3b). We note that the spectrum predicted using this method does not depend on the pumping power and remains unchanged (thus referred to as "static model"). In addition, since this model only describes the OR process, it shows a quadratic power dependence and does not capture the saturation observed in Figure 2d.
Moreover, the simulations of the system confirm that the strong THz emission recorded is due to the existence of the thin ITO layer rather than solely by the gold nanoparticles as was considered in previous studies. 13,14 This large enhancement originates from the OR process in the ITO-SRR metasurface. The free carriers in the ITO are subject to strong asymmetric driving fields in the system, which are enhanced by field confinement, due to the SRRs and due to excitation wavelengths where the permittivity is near zero (see Supplementary Notes 5 and 6). Furthermore, the SRRs couple to the ITO to enable emission at normal incidence illumination.
Next, we examine more carefully the THz emission from the SRR-ITO sample. We see that pumping at either x̂or yp olarizations result in strong THz emission (see Supplementary Note 7). Figure 4 shows the generated signal when pumped with a fundamental wavelength of λ p = 1300 nm (Figure 4a,c) and λ p = 1500 nm (Figure 4b,d). The temporal and spectral shape of the pulse remains unchanged while pumping the weakly coupled resonance along ŷ(see Supplementary Notes 8 and 9). However, pumping the strongly coupled resonance along x̂results in shortening of the THz pulse and a broadening of the emitted THz bandwidth (Figure 4c,d). We measured a broadening of up to twice the bandwidth compared to previous studies and to
Nano Letters pubs.acs.org/NanoLett
Letter the weakly coupled system (E in y). In addition, pumping the strongly coupled system results not only in shorter pulses when increasing the pumping power, but the pulse shape changes as well. Also pumping at different wavelengths generates a different THz signal. This behavior may be explained by a phase difference between THz signals generated by short and long wavelengths (further explanation is given in Supplementary Note 8).
To understand the saturation in the THz generated at high pumping powers and broadening of the spectrum, we take into account temporal dynamics that occur due to hot-electron generation in the ITO (see the schematic illustration in Figure 5a). The semiclassical two-temperature model (TTM) is used to calculate the spatiotemporal temperature distribution of the hot electrons in the gold nanoparticle and in the ITO layer together with their energy transfer to the lattice. In this model, the pumping NIR ultrashort laser excites the electrons, which then thermalize and also transfer the heat to the lattice. As a result of the fast electron and lattice heating, the effective mass changes due to the nonparabolicity of the conduction band. This leads to fast changes of the plasma frequency and the permittivity of the ITO. Therefore, the optical response is temporally changed at the sub-picosecond time scale. We account for this temporal thermo-optical change by altering the permittivity ϵ(T e , T l ), which is dependent on the electrons (T e ) and lattice (T l ) temperatures (see Supplementary Notes 10− 13 for more details on the theoretical model). These thermooptical modifications dynamically change the coupling between the SRR resonance and the ITO ENZ mode, and therefore are evident for x-polarized excitation. Finally, the observed spectrum is highly dependent on the optical properties of the metasurface, which determine the frequencies that will radiate to the far field. Therefore, since the ultrafast heating process occurs at the subpicosecond time scale, the generated THz pulse is affected, thus resulting in the significant
Nano Letters pubs.acs.org/NanoLett
Letter broadening of the emission spectrum. On the other hand, the generated SH is an almost instantaneous process and therefore remains unchanged by the delayed temporal dynamics. Using full wave simulations with a finite element method commercially available software 14 accounting for these dynamics, we are able to reproduce the saturation behavior shown in Figure 2d. We show in Figure 5b that the quadratic power dependence of the THz emission observed at low pumping powers saturates at increasing powers when the heating effects become dominant. Saturation occurs due to a combined effect of a shift in the ENZ point 31 and increase of the heated electron effective mass with pumping power, which reduces the strength of the ITO response due to a reduction in mobility. In addition, our framework also captures the broadening of the spectrum at increasing powers. Simulations of the generated THz spectrum when pumping the strongly coupled system are presented in Figure 5c,d and are in good agreement with the measurements (Figure 4c,d, respectively). Results for pumping the weakly coupled system are shown in Supplementary Note 9 and agree with the measured results as well.
In conclusion, we have shown that the strong THz emission from plasmonic metasurfaces is due to a thin film of ITO. This ∼20 nm thin layer enhances the THz emission by up to 2 orders of magnitude. In addition, we have shown that the strongly coupled SRR-ITO metasurface exhibits previously unreported dynamic phenomena. Specifically, broadening of the generated THz spectrum by a factor of 2 compared to previous reports and to an uncoupled system. To account for this behavior, we developed a dynamic theoretical framework which combines the hydrodynamic model as the source of the nonlinear THz emission with electron and lattice temperaturedependent permittivity. We see that our model agrees well the experimental results. These concepts unveil the fine fundamental physical dynamics of THz emission from nonlinear plasmonic metasurfaces. In addition, our work can advance the field toward efficient, active, integrated, and ultracompact optical elements for generating and controlling THz radiation. ■ ASSOCIATED CONTENT | 3,382.4 | 2022-07-28T00:00:00.000 | [
"Physics"
] |
Computational Fluid Simulation of Fibrinogen around Dental Implant Surfaces
Ultraviolet treatment of titanium implants makes their surfaces hydrophilic and enhances osseointegration. However, the mechanism is not fully understood. This study hypothesizes that the recruitment of fibrinogen, a critical molecule for blood clot formation and wound healing, is influenced by the degrees of hydrophilicity/hydrophobicity of the implant surfaces. Computational fluid dynamics (CFD) implant models were created for fluid flow simulation. The hydrophilicity level was expressed by the contact angle between the implant surface and blood plasma, ranging from 5° (superhydrophilic), 30° (hydrophilic) to 50° and 70° (hydrophobic), and 100° (hydrorepellent). The mass of fibrinogen flowing into the implant interfacial zone (fibrinogen infiltration) increased in a time dependent manner, with a steeper slope for surfaces with greater hydrophilicity. The mass of blood plasma absorbed into the interfacial zone (blood plasma infiltration) was also promoted by the hydrophilic surfaces but it was rapid and non-time-dependent. There was no linear correlation between the fibrinogen infiltration rate and the blood plasma infiltration rate. These results suggest that hydrophilic implant surfaces promote both fibrinogen and blood plasma infiltration to their interface. However, the infiltration of the two components were not proportional, implying a selectively enhanced recruitment of fibrinogen by hydrophilic implant surfaces.
Introduction
Biomaterials implanted in the human body will initially interact with blood. Consequently, the exposed biomaterial surface will be covered by host plasma proteins [1]. Protein adsorption determines the capability of material surfaces to attract cells and controls the cascade of events leading Int. J. Mol. Sci. 2020, 21, 660 2 of 13 to the expression of specific cellular phenotypes necessary for wound healing [2,3]. Fibrinogen and its byproduct, fibrin, play a crucial role in the initial phase of wound healing, particularly during blood clotting, cell recruitment, and angiogenesis.
In titanium implant therapy in the field of dental and orthopedic surgical restoration and reconstruction, bone-to-titanium integration, referred to as osseointegration, is a necessary process for the successful treatment outcome. Osseointegration is the wound healing around titanium, triggered by surface protein adsorption, complement activation, and fabrication of a fibrinogen and fibrin matrix [4]. Several studies have suggested that platelet adhesion and activation are particularly affected by adsorbed fibrinogen via its direct interaction with platelet receptors [5][6][7]. Moreover, fibrinogen is clinically used in fibrin gel complex such as plate-rich fibrin (PRF) and autologous fibrin gel (AFG) to promote bone and soft tissue regeneration around titanium implants [8,9].
The level of hydrophobicity/hydrophilicity influences the bioactivity of the titanium surfaces. As-received titanium surfaces, including the surfaces of titanium-based commercial implant products, are hydrophobic, with their contact angle for water being higher than 70 • [10]. Recently, the treatment of titanium with UV light of a particular strength and wavelength, referred to UV photo-functionalization, was discovered as a measure to convert the titanium surfaces to superhydrophilic with a contact angle of 5 • or less [11]. The generation of superhydrophilicity is explained by the removal of hydrocarbons that had been unavoidably accumulated on titanium surfaces [12]. The UV treated surfaces show a greater capability of recruiting osteogenic cells and eventually promote osseointegration [13]. However, there is a critical gap in understanding why UV-induced superhydrophilic titanium surfaces have such an excellent cellular affinity.
Due to the significant advancement in computer science, there has been a remarkable progress in numerical simulations. In the field of dentistry, these simulations have been mostly performed for stress analysis using finite element method [14][15][16]. Computational fluid dynamics (CFD) is widely used to simulate fluid flow which cannot be experimentally reproduced. In the fields of neurosurgery and cardiovascular surgery, CFD allows hemodynamic parameters to be assessed non-invasively as an alternative to experiments on living bodies [17][18][19][20][21]. With help from modern mathematical modeling, we believe we are able to simulate the interaction between blood and biomaterial surfaces. There have been no reports on the application of CFD to blood flow analysis around dental implants. This study hypothesizes that the recruitment of fibrinogen to the implant surfaces is influenced by the degrees of hydrophilicity/hydrophobicity of the surface. The objective of this study was to examine the progressive fibrinogen infiltration to the implant surfaces with different degrees of hydrophilicity/hydrophobicity using CFD models. To determine the potential collateral effect by the supposedly enhanced blood plasma flow by hydrophilic implant surfaces, the blood plasma infiltration to the implant interface was also examined. To create a CFD model of titanium implants, an arbitrary dimension of a regular-size dental implant was used.
Fibrinogen Infiltration to the Interfacial Zone of Implants with Different Contact Angles
We performed the analysis using ANSYS Fluent, a commercially available fluid simulation software package (2019 R1, ANSYS Inc., Canonsburg, PA). We set the contact angle between the implant surface and the blood plasma (CAIS) to a value of 5 • (superhydrophilic) or 30 • (hydrophilic), 50 • or 70 • (hydrophobic), and 100 • (hydrorepellent), and analyzed the flow of fibrinogen and blood plasma during a period of 3 s after mock placement of an implant in the bone. The analysis time for each implant of different CAIS was approximately 180 min. Movies were made based on the results of the analyses for the implant surfaces with the CAIS of 5 • ("The flow of fibrinogen on hydrophilic surface"), 70 • ("The flow of fibrinogen on hydrophobic surface"), and 100 • ("The flow of fibrinogen on hydrorepellent surface") using CFD-Post (2019 R1, ANSYS Inc., Canonsburg, PA). These movies can be watched as supplementary materials (see Supplementary Materials). In this study, we divided the geometrical model into two areas, interfacial zone and outer zone, using a vertical line that connects of the peaks of implant threads ( Figure 1). In the present study we defined infiltration as the mass of fibrinogen or blood plasma in the interfacial zone for each time step. We focused on the analysis of the interfacial zone, which is thought to be the most important zone for osseointegration. We also used the outer zone as a reference to calculate the fibrinogen infiltration rate and blood plasma infiltration rate. The infiltration rate is defined as the mass of the fibrinogen and blood plasma in the interfacial zone divided by the total amount of fibrinogen and blood plasma present in the fluid zone for each time step, respectively. connects of the peaks of implant threads ( Figure 1). In the present study we defined infiltration as the mass of fibrinogen or blood plasma in the interfacial zone for each time step. We focused on the analysis of the interfacial zone, which is thought to be the most important zone for osseointegration. We also used the outer zone as a reference to calculate the fibrinogen infiltration rate and blood plasma infiltration rate. The infiltration rate is defined as the mass of the fibrinogen and blood plasma in the interfacial zone divided by the total amount of fibrinogen and blood plasma present in the fluid zone for each time step, respectively. Figure 2 shows the volume rendering images indicating the mass of fibrinogen in the fluid zone. The analysis was performed for 3 s, with the first and last seconds defined as early and late stages, respectively. Volume rendering images revealed that more fibrinogen reached the interfacial zone at the implant surface with the CAIS of 5° than at the implant surface with the CAIS of 70° ( Figure 2). Most of the implant threads were filled with fibrinogen even after 1 s when the CAIS was 5°. In contrast, the implant threads were largely left blank even after 3 s around the implant of 100° CAIS. Figure 3 is the quantitative presentation of the fibrinogen infiltration. The average of fibrinogen infiltration over time during the 3 s period was 1.6, 1.2, 1.2, 1.1, and 0.6 mg when the CAIS values were 5°, 30°, 50°, 70°, and 100°, respectively. Table 1 shows the time integrals of fibrinogen infiltration and the ratio of the time integral to the value when the CAIS was 5°. Fibrinogen infiltration increased with time from the early, mid, to late stages, regardless of the CAIS (Figure 3a and Table 1). Table 1. Time integrals of fibrinogen infiltration and ratio of the time integral to the value when the contact angle between the implant surface and the blood plasma (CAIS) was 5°. The more hydrophilic the implant surface was, the higher was the rate of the time-dependent increase. The rate of increase was the highest for the CAIS of 5° and the lowest for 100°. The histogram The analysis was performed for 3 s, with the first and last seconds defined as early and late stages, respectively. Volume rendering images revealed that more fibrinogen reached the interfacial zone at the implant surface with the CAIS of 5 • than at the implant surface with the CAIS of 70 • (Figure 2). Most of the implant threads were filled with fibrinogen even after 1 s when the CAIS was 5 • . In contrast, the implant threads were largely left blank even after 3 s around the implant of 100 • CAIS. Figure 3 is the quantitative presentation of the fibrinogen infiltration. The average of fibrinogen infiltration over time during the 3 s period was 1.6, 1.2, 1.2, 1.1, and 0.6 mg when the CAIS values were 5 • , 30 • , 50 • , 70 • , and 100 • , respectively. Table 1 shows the time integrals of fibrinogen infiltration and the ratio of the time integral to the value when the CAIS was 5 • . Fibrinogen infiltration increased with time from the early, mid, to late stages, regardless of the CAIS (Figure 3a and Table 1). The more hydrophilic the implant surface was, the higher was the rate of the time-dependent increase. The rate of increase was the highest for the CAIS of 5 • and the lowest for 100 • . The histogram created for the average fibrinogen infiltration during each of the three stages clearly showed the progressive increase of fibrinogen and its remarkable enhancement when the CAIS was 5 • (Figure 3b). created for the average fibrinogen infiltration during each of the three stages clearly showed the progressive increase of fibrinogen and its remarkable enhancement when the CAIS was 5° (Figure 3b). We calculated the infiltration rate for fibrinogen and blood plasma during each time step based on the mass of fibrinogen and blood plasma that were obtained in the analysis. Figure 4 shows the change over time of the fibrinogen infiltration rate. The average infiltration rate from the 3 s analysis was 20.4%, 16.2%, 17.9%, 15.3%, and 6.6% when the CAIS was 5°, 30°, 50°, 70°, and 100°, respectively, thus showing the substantially increased and decreased infiltration rates in the extreme conditions of 5° and 100°, respectively. Particularly, the 5° implant surface showed a substantial increase from the mid-to-late stage, whereas the 100° implant surface, surprisingly showed the progressive decrease with time. created for the average fibrinogen infiltration during each of the three stages clearly showed the progressive increase of fibrinogen and its remarkable enhancement when the CAIS was 5° (Figure 3b). We calculated the infiltration rate for fibrinogen and blood plasma during each time step based on the mass of fibrinogen and blood plasma that were obtained in the analysis. Figure 4 shows the change over time of the fibrinogen infiltration rate. The average infiltration rate from the 3 s analysis was 20.4%, 16.2%, 17.9%, 15.3%, and 6.6% when the CAIS was 5°, 30°, 50°, 70°, and 100°, respectively, thus showing the substantially increased and decreased infiltration rates in the extreme conditions of 5° and 100°, respectively. Particularly, the 5° implant surface showed a substantial increase from the mid-to-late stage, whereas the 100° implant surface, surprisingly showed the progressive decrease with time. We calculated the infiltration rate for fibrinogen and blood plasma during each time step based on the mass of fibrinogen and blood plasma that were obtained in the analysis. Figure 4 shows the change over time of the fibrinogen infiltration rate. The average infiltration rate from the 3 s analysis was 20.4%, 16.2%, 17.9%, 15.3%, and 6.6% when the CAIS was 5 • , 30 • , 50 • , 70 • , and 100 • , respectively, thus showing the substantially increased and decreased infiltration rates in the extreme conditions of 5 • and 100 • , respectively. Particularly, the 5 • implant surface showed a substantial increase from the mid-to-late stage, whereas the 100 • implant surface, surprisingly showed the progressive decrease with time. Figure 5 shows the volume rendering images showing the mass of blood plasma in the fluid zone. Most of the implant threads were filled with blood plasma even after 1 s when the CAIS was 5° and 70°, while most of the implant threads were left blank at 1 s and even after 3 s when the CAIS was 100°. Figure 5 shows the volume rendering images showing the mass of blood plasma in the fluid zone. Most of the implant threads were filled with blood plasma even after 1 s when the CAIS was 5° and 70°, while most of the implant threads were left blank at 1 s and even after 3 s when the CAIS was 100°. Figure 6a shows that the more hydrophilic the surface, the more blood plasma reached to the interfacial zone. However, the time-dependent increase of the infiltration was not observed unlike fibrinogen regardless of the CAIS. The blood plasma infiltration rapidly increases in the beginning of the early stage and reached plateau during the early stage when the CAIS was 5 • , 30 • , 50 • , and 70. While the blood plasma filtration rapidly decreased and remained low even at 3 s when the CAIS was 100 • (Figure 6a). CAIS values were 5°, 30°, 50°, 70°, and 100°, respectively. Figure 6a shows that the more hydrophilic the surface, the more blood plasma reached to the interfacial zone. However, the time-dependent increase of the infiltration was not observed unlike fibrinogen regardless of the CAIS. The blood plasma infiltration rapidly increases in the beginning of the early stage and reached plateau during the early stage when the CAIS was 5°, 30°, 50°, and 70. While the blood plasma filtration rapidly decreased and remained low even at 3 s when the CAIS was 100° (Figure 6a). Table 2 shows the time integrals of blood plasma infiltration and the ratio of the time integral to the value when the CAIS was 5° at each time stage. The changes in the blood plasma infiltration among three time stages were smaller than those of fibrinogen ( Figure 6b and Table 2). Figure 7 shows the blood plasma infiltration rate. The average blood plasma infiltration rate over time during the 3 s was 30.9%, 30.3%, 29.5%, 24.3%, and 10.6% when the CAIS values were 5°, 30°, 50°, 70°, and 100°, respectively. The rate increased in the beginning of the early stage and reached plateau during the early stage when the CAIS was 5°, 30°, 50°, and 70° while it decreased when the CAIS was 100°. Thus, the blood plasma infiltration rate showed the same tendency as the blood plasma infiltration. Table 2 shows the time integrals of blood plasma infiltration and the ratio of the time integral to the value when the CAIS was 5 • at each time stage. The changes in the blood plasma infiltration among three time stages were smaller than those of fibrinogen ( Figure 6b and Table 2). Figure 7 shows the blood plasma infiltration rate. The average blood plasma infiltration rate over time during the 3 s was 30.9%, 30.3%, 29.5%, 24.3%, and 10.6% when the CAIS values were 5 • , 30 • , 50 • , 70 • , and 100 • , respectively. The rate increased in the beginning of the early stage and reached plateau during the early stage when the CAIS was 5 • , 30 • , 50 • , and 70 • while it decreased when the CAIS was 100 • . Thus, the blood plasma infiltration rate showed the same tendency as the blood plasma infiltration. Figure 8 shows the scatter plots of the fibrinogen infiltration rate and blood plasma infiltration rate during the whole duration of the analysis for each CAIS. The correlation coefficients between two rates was −0.0896 (p < 0.001), 0.2041 (p < 0.001), 0.4474 (p < 0.001), 0.3700 (p < 0.001), and −0.2566 (p < 0.001) when the CAIS values were 5°, 30°, 50°, 70°, and 100°, respectively. There was no linear correlation between the two infiltration rates regardless of the CAIS. Figure 8 shows the scatter plots of the fibrinogen infiltration rate and blood plasma infiltration rate during the whole duration of the analysis for each CAIS. The correlation coefficients between two rates was −0.0896 (p < 0.001), 0.2041 (p < 0.001), 0.4474 (p < 0.001), 0.3700 (p < 0.001), and −0.2566 (p < 0.001) when the CAIS values were 5°, 30°, 50°, 70°, and 100°, respectively. There was no linear correlation between the two infiltration rates regardless of the CAIS.
Discussion
Recent studies found that immediately after processing and regardless of the type of processing, titanium surfaces show a contact angle to water of either 0 • or less than 5 • [22][23][24][25][26][27]. In the latter case, these surfaces are superhydrophilic. This superhydrophilic nature gradually attenuates, and the surface becomes hydrophobic in 2 weeks as the contact angle changes to over 40 • . The contact angle for a 4-week-old acid-etched surface is more than 60 • [28]. Aita et al. exposed titanium disks with machined and acid-etched surfaces to UV light, and compared the contact angle to an H 2 O droplet to that of an unexposed disk. The contact angles for the machined and acid-etched surfaces that were not UV-treated were 53.5 • and 88.4 • , respectively, and those for the UV-treated surfaces were 0 • for both cases. The results show that the UV exposure can transform a hydrophobic titanium surface into a superhydrophilic surface [13]. The researchers also showed that the adsorption of albumin and fibronectin, which are proteins in serum, increased significantly in titanium disks treated with UV light compared with untreated counterparts. [29] They hypothesized that UV treatment of dental implants encourages attachment of the osteoblasts through the interaction between the proteins adsorbed on the titanium and the integrins on the cell membranes. However, currently, it is not possible to observe the actual behavior of serum proteins around dental implants in living organisms. Furthermore, since it is difficult to recreate arbitrary contact angles between biomaterial surfaces and the surrounding tissue fluids or H 2 O both in vitro and in vivo, none of the existing reports can verify the impact of hydrophilicity/hydrophobicity of the implant surface on protein adsorption. Therefore, in this study, we used CFD in order to investigate the impact of hydrophilicity/hydrophobicity of the implant surface on the fibrinogen infiltration to the interfacial zone.
The results of the analysis performed in this study showed that as the CAIS decreases, the mass of fibrinogen flowing into the zone nearest to the implant surface (fibrinogen infiltration) increases. The average fibrinogen infiltration during the 3 s period was 1.5 times higher when the CAIS was 5 • (hydrophilic) than when it was 70 • (hydrophobic). The value was 3.0 times higher when the CAIS was 5 • than when it was 100 • (hydrorepellent). Furthermore, the change in the time integral of fibrinogen infiltration during each stage plotted in Figure 3 shows that the value increases over time for all values of the CAIS. However, the difference in this value increased with time when the CAIS values were 50 • , 70 • , and 100 • compared to when the CAIS was 5 • . When considering the ratio of the mass of fibrinogen to the value when the CAIS was 5 • (Table 1), the difference increased over time from 0.17 to 0.27 when the CAIS was 50 • , from 0.17 to 0.35 when it was 70 • , and from 0.50 to 0.73 when it was 100 • . When the CAIS was 30 • , the difference in the value compared to the case in which it was 5 • changed from 0.33 to 0.19. The value decreased with time. Therefore, as long as the CAIS is under 30 • , the adsorbed amount of fibrinogen is expected to increase over time in comparison to when the CAIS is more than 50 • .
Similar to the fibrinogen, the mass of blood plasma flowing into the interfacial zone (blood plasma infiltration) increased as the CAIS decreased. The average value during the 3 s period was 1.3 times higher when the CAIS was 5 • (hydrophilic) than when it was 70 • (hydrophobic). The value was 3.2 times higher when the CAIS was 5 • than when it was 100 • (hydrorepellent). However, the behavior of the ratio of the blood plasma infiltration to the value when the CAIS was 5 • was different from that of fibrinogen. The amplitude of the variation changed from 0.00 to 0.07 when the CAIS was 30 • , from 0.03 to 0.11 when it was 50 • , from 0.18 to 0.23 when it was 70 • , and from 0.67 to 0.70 when it was 100 • ( Table 2). The change was smaller than the value for fibrinogen (Table 1) for all cases. The change in the time integral of fibrinogen infiltration deviated from the value for blood plasma. These results show that the increase in the fibrinogen infiltration over time was not proportional to the increase in the blood plasma infiltration. Furthermore, the correlation coefficients between the fibrinogen infiltration rate and blood plasma infiltration rate calculated from the plots (Figure 8) show that there was no linear correlation between the two quantities for all time steps during the analysis. Therefore, the results show that the fibrinogen infiltration surrounding the dental implant behaves independently of the blood plasma infiltration. In other words, this result implies that an implant surface with a small CAIS selectively gathers the fibrinogen within the blood plasma towards the interfacial zone.
As shown in Section 4, the flow field was laminar since the Reynolds number was smaller than the boundary value for a turbulent flow (2800). The mass fraction (Yi in Equation (1)) of fibrinogen is governed by advection and diffusion only. Furthermore, the diffusion coefficient of fibrinogen to blood plasma (0.23 × 10 −10 m 2 /s) was extremely small compared to the time scale (3 s) and spatial scale (1.5 × 10 −3 m) of the analysis. Therefore, it is clear that the mass fraction is governed more strongly by the advection rather than the diffusion. Accordingly, we surmise that the fibrinogen mass flowing into the interfacial zone increased as the CAIS decreased. This is because a fluid field in which the fibrinogen flowed into the interfacial zone was formed, and the mass was transported due to the advection. UV treatment of titanium dental implants encourages serum protein adsorption due to the change in the electric charge [29]. Researchers have shown that changing a titanium surface from electronegative to electropositive by applying a UV treatment causes adsorption of negatively charged proteins [24,30,31]. Since the effect of electric charge was not considered in this analysis, our study shows that the UV treatment of titanium dental implants encourages protein adsorption. This may be attributed to changes in the fluid field of the blood around the implant that cause protein adsorption as well as the effect of electric charge.
In order to improve our understanding of the phenomenon, it is necessary to perform the analysis for a longer time period, such as several tens of seconds or even minutes. This is because in an actual phenomenon, the whole blood increases its viscosity over time and changes into a blood clot in a few minutes after the placement of implants. However, in that case, it would be necessary to model the changes in viscosity and density of the fluid due to blood coagulation. Furthermore, although we considered only fibrinogen amongst the serum proteins in this study, it is also necessary to analyze the distributions of other major serum proteins, including albumin and fibronectin, which are involved in osseointegration. We anticipate the analysis to be complicated, since the relationships between these proteins such as impact on the diffusion coefficient of each protein, and impact on density and viscosity are unknown.
Geometrical Model
A two-dimensional geometrical model was generated using ANSYS Design Modeler (2019 R1, ANSYS Inc., Canonsburg, PA, USA) to mimic a bone-implant interface (Figure 1). The height and width of the model were 10.0 and 1.5 mm, respectively. The model was comprised of four boundaries and one fluid zone. The blood inlet, alveolar bone, implant surface, and blood outlet were demarcated as the boundaries. Whole blood flowed from the blood inlet and alveolar bone to blood outlet in the analysis. The implant surface had 10 threads. The height and width of each thread were 1.0 and 0.5 mm, respectively.
Mesh Generation
The computational mesh was generated from the geometrical model using ANSYS Meshing (2019 R1, Ansys Inc., Canonsburg, PA, USA; Figure 1). The mesh consisted of 14,610 quadrilateral cells.
Numerical Methods for Blood Flow Simulation
The volumes of fraction (VOF) for blood plasma, red blood cells (RBCs), and the mass fractions for fibrinogen in blood plasma were calculated with the VOF model and species transportation model in ANSYS Fluent (2019 R1, ANSYS Inc., Canonsburg, PA, USA), respectively. Using the VOF model in ANSYS Fluent, the distribution of the VOF of each fluid in the fluid zone can be analyzed by solving the transport equation of the VOF, the equation for momentum conservation, and the equation for mass conservation. The contact angle between the boundary of the computational mesh and fluids can be defined. In this study, whole blood was assumed to consist of blood plasma and RBCs. Blood plasma and RBCs were set as the primary phase and secondary phase, respectively. They were treated as continuum fluids. We focused on the behavior of fibrinogen and RBCs in the order of millimeters rather than microns. Moreover, the width of the fluid zone (1.5 mm) was significantly larger than the diameter of an RBC, which is approximately 7 µm. The density and viscosity of the RBCs were set to 1125 kg/m 3 [32] and 0.0050 Pa·S [33], respectively. The density and viscosity of blood plasma are defined in Section 4.4. The continuum surface force model in ANSYS Fluent was used to define the interaction between the blood plasma and RBCs. The value of interfacial tension was set to 0.021 N/m [34].
Numerical Methods for Fibrinogen Flow Simulation
The concentration (mass fraction) of fibrinogen in blood plasma was calculated by the advection-diffusion equation (Equation (1)), as follows: where Y i is the concentration (mass fraction) of the species. The subscript i is the species number, and the numbers 0 and 1 indicate fibrinogen and blood plasma, respectively. The mass fraction of the fibrinogen (Y 0 ) was calculated by Equation (1). While, the mass fraction of the blood plasma (Y 1 ) was calculated as the difference between 1 and Y 0 . As the sum of the Y 0 and Y 1 is always 1. ρ m (kg/m 3 ) is the density of the mixture of blood plasma and fibrinogen and is described below.
→ F i is the diffusion flux of the species. Fick's law (dilute approximation) was used to express the mass diffusion caused by the mass fraction gradient, and the diffusion flux is expressed using Equation (2) under the law: where D i,m is the diffusion coefficient (m 2 /s) of species i in the mixture. Currently, no measurements exist for the diffusion coefficient of fibrinogen to blood plasma. In this study, the diffusion coefficient of fibrinogen to water (0.23 × 10 −10 m 2 /s [35]) was used as the diffusion coefficient of fibrinogen to blood plasma. The density for whole blood plasma (including fibrinogen, ρ m ) was defined as a function of Y i according to the volume-weighted mixing law.
where the density of fibrinogen (ρ 0 ) and blood plasma (not including fibrinogen, ρ 1 ) were set as 1400 [36] and 1025 kg/m 3 [32], respectively. The viscosity of the whole blood plasma, including fibrinogen (µ m Pa·S), was defined as a function of fibrinogen concentration, and expressed as Equation (4), which was derived from a previous study [37]. Figure 9 shows the relationship between µ m and the concentration of fibrinogen.
where C is the fibrinogen concentration (g/100 mL) in blood plasma.
where is the fibrinogen concentration (g/100 mL) in blood plasma. Figure 9. The relationship between viscosity of blood plasma and fibrinogen concentration. Figure 9. The relationship between viscosity of blood plasma and fibrinogen concentration.
Numerical Conditions
We analyzed the distribution of blood plasma and fibrinogen during the 3-s period following the implantation of the dental implant into the jaw bone using a non-steady analysis. We used a double precision solver and a coupled scheme for the connection between the velocity and pressure. Discretization was performed to a second-order accuracy. The time step size was set to 0.0001 s. Convergence was determined by monitoring the mass (kg) of fibrinogen in the fluid zone. The analysis was determined to have converged when the change in the value for each time step was below 1 × 10 −9 kg.
The analysis model included two velocity inlets (blood inlet and alveolar bone) and one pressure outlet (blood outlet). A velocity of 0.01 m/s was applied to the velocity inlets. A free stream boundary condition was applied to the outlet. The volume fraction of the RBCs at the velocity inlets was set to 45% (the hematocrit level of a healthy adult), and the volume fraction of the blood plasma was set to 55%. The mass fraction of the fibrinogen at the velocity inlets was set to 0.0029. The value was gained by dividing the reference value of human serum concentration of fibrinogen (300 mg/dL = 3 kg/m 3 ) by the density of blood serum (1024 kg/m 3 [38]). The contact angle between the implant surface and the blood plasma (CAIS) in the VOF model was varied to 5 • (Superhydrophilic) or 30 • (hydrophilic), 50 • or 70 • (hydrophobic), and 100 • (hydrorepellent). The analysis was performed for each of these conditions. The Reynolds number at the inlet was 6. Since the Reynolds number was smaller than the value at which the flow field transitions to a turbulent flow (namely 2800), the flow field within the fluid zone could be considered to be laminar.
The analysis was performed on a single computer running the Microsoft Windows operating system (Microsoft Windows 10 Professional, Microsoft Corp., Redmond, WA, USA).
Conclusions
We analyzed the fibrinogen and blood plasma infiltration around a dental implant using CFD. The results show that as the contact angle between the implant surface and the blood plasma decreases, the fibrinogen infiltration increases. Furthermore, the results show that there is no linear correlation between the fibrinogen infiltration rate and blood plasma infiltration rate. This implies that there is a possibility that the hydrophilic implant surface selectively draws fibrinogen from the blood plasma towards the zone nearest to the interface. This study demonstrated the usefulness of CFD for investigating the interaction between blood and dental material surfaces. In order to improve our understanding of this phenomenon, it is necessary to model the blood coagulation and perform the analysis for a longer period of time. | 7,386.4 | 2020-01-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Biology"
] |
Comparison of cavitation bubbles evolution in viscous media
There have been tried many types of liquids with different ranges of viscosity values that have been tested to form a single cavitation bubble. The purpose of these experiments was to observe the behaviour of cavitation bubbles in media with different ranges of absorbance. The most of the method was based on spark to induced superheat limit of liquid. Here we used arrangement of the laser-induced breakdown (LIB) method. There were described the set cavitation setting that affects the size bubble in media with different absorbance. We visualized the cavitation bubble with a 60 kHz high speed camera. We used here shadowgraphy setup for the bubble visualization. There were observed time development and bubble extinction in various media, where the size of the bubble in the silicone oil was extremely small, due to the absorbance size of silicon oil.
Introduction
The cavitation phenomena is mainly known for its undesirable effects in turbo machinery, however there is a great potential for its utilization in the industry, medicine, biology, pharmacy, or tissue engineering.
The research in the field of cavitation was previously focused mainly on the investigation of the bubble behavior in the vicinity of rigid or flexible boundaries; [1][2][3][4] however the current investigations require including the description of response of the impacted material itself.[1][2] The definition of the material response can help in development of new, more resistant structures or layers which better withstand the action of cavitation bubbles and prolong the service life of products.The key in the understanding of the cavitation interaction with various materials is the investigation of the impact of individual bubbles and its structures [1].
Cavitation can be defined as a collection of effects connected to the origin, activities, and collapse of macroscopic bubbles in liquid.Cavitation bubbles are usually not separated in real applications.The bubbles create structures, which acts collectively, however essential elements of these structures are individual bubbles [2].
Although the current state of technique is at very high level, we are still not able to produce one controllable individual bubble by pressure decrease in the liquid volume, following the cavitation definition.All other methods as the spark or laser generated bubbles are closer to the boiling as these are based on the evaporation of small volume of liquid.The bubbles generated by the ultrasonic field satisfy the cavitation definition, however to produce one single bubble is almost impossible.Anyway, we dealing with great technique LIB, there still exists a lack of information in experimental part of cavitation bubble investigation [1].
The one of the method for bubble generation is spark in liquid, or a heated top of a wire.Once the bubble is stable, and of certain volume, it is either over heated, or expose to force impact.The convenient method for generating single cavitation bubble that can be very precisely geometrically placed in the volume of the liquid, and close to the sample, is Laser Induced Breakdown (LIB).[5][6][7][8][9] The LIB method enables to generate cavitation bubble using ultrashort pulses of milijoules energies.This method is thermal breakdown based on natural plasma generation.This plasma is in form of optical breakdown, if the pulse exposure is from microseconds to femtoseconds' time.There occurs direct, multiphoton, and cascade ionization during the LIB.The significant role plays impurities of the medium, spot size, light wavelength, and pulse width during the breakdown.The whole mechanism of the ionization is very well explained by Kennedy [6].
The research of bubble behavior focuses primarily on the movement of bubbles in viscous liquids.The motion of bubbles in fluids is of great importance in various gas-liquid reactors and processes, as well as numerous natural phenomena.As a result, extensive studies have been conducted in the past (see reviews by Clift et al. (1978), Magnaudet and Eames (2000), and Kulkarni and Joshi (2005)), although various aspects still remain indeterminate, particularly in relation to the dynamic behaviour of the bubbles in liquids [10].
Bubble generated in a higher viscosity liquid is being formation in two differs stages.For the evolution of the volume of the bubble in the breakup stage is always greater than that in the expansion stage, due to the different role of the liquid phase in the evolution of the bubble for the two stages [11].
Furthermore, the effect of absorbance and absorption of light on bubble generation in the liquid is important, where absorption of light in the sample is described in the Lambert-Beer Law.The measured values are expressed as absorbance or transmittance.Absorption is usually measure in a linear arrangement.Absorption spectrometers: single-beam or two-beam.The intensity of the beam passing through the sample also weakens scattering.The measurement gives reasonable results for the absorbance in the interval (0.005 -1) [12].
Absorbance takes values from 0 to infinity.If the absorbance has a zero value, the substance does not absorb any radiation and the permeability is equal to one.If the absorbance is infinite, the substance under investigation absorbs all the radiation and the throughput is zero.It should also be noted that less than 10% of the light passes through the absorbance of more than 1, and a small error in passing radiation can cause a large error in the determination of the absorbance.Most determinations are therefore performed so that the maximum absorbance value lies around the value of 1 [13].
Experimental
The bubbles can be generated by several physically different mechanisms.The most obvious in nature is the hydrodynamics cavitation, where bubbles are produced due to local pressure decrease caused by the flow acceleration in vicinity of obstacles.Acoustic cavitation is produced by imposing an intensive acoustic field into the bulk of liquid [5].
Here, we used laser induced breakdown for cavitation bubble generating in liquid.Experiments run in optical glass cuvette.
Laser Induced Breakdown
Optical breakdown in liquid is usually produced by focusing of the laser light trough suitably designed optics.Laser induced breakdown in aqueous media and its collateral effects are described in detail by Kennedy in [6].The energy distribution during the growth and collapse of laser induced bubble was described e.g. by Vogel in [14] and [15].Authors investigated the influence of laser pulse duration and input laser energy on the bubble dynamics and shock waves emission.
Here we chosen green light as exciting light pulse.The absorption coefficient of the 532 nm laser light is only 0.02 m -1 in distillated pure water.This wave length requires higher energy input to the system to induce cavitation process.For the single bubble generation we used here the setup for the LIB.The 10 ns short laser pulse was generated using Q-switched Nd: YAG New Wave Gemini pulse laser.This laser worked with one cavity for single shot generation on the wavelength 532 nm.The Q-switch signal synchronized the high speed camera running in triggering mode.
LIB was set as an optical direct way.The outlet diameter of laser beam was 5mm with Gaussian characteristics of the intensity.This setup was followed with concave lens f s = 200 mm, and convex lens f s =25 mm of 1inch diameter.The focused laser beam created the laser point -probe (diameter < 0.1 mm).Due the losses in the optical path on each of optical elements, comparable to this, we had to increase the energy level that enters the whole system.The set output energy of the laser is taken in account in the relation to the bubble diameter.The energies that are required for the bubble generation are seen in Fig. 1.For the relation between input laser energy and the bubble size that was generated with our optical setup, we performed the set of calibration and size measurement.The energy of the laser beam was adjusted and measured with Ophir pyroelectric energy sensor.We measured the laser energy in three positions.There is seen the characteristic of laser light energy calibrated to the position of attenuator.NewWave Gemini laser enables setting of light intensity adjusting flash lamp power and fine adjustment using attenuator.The attenuator enables controllable backward loop, as an actual value is seen on the display.We set the laser on full flash lamp power and adjust the output energy using attenuator.It is seen the energy disorder in Fig. 2, when the attenuator is 0, the output laser energy is higher than for attenuator 50.We neglect this effect as we are working on the attenuator position starting on 120 to generate cavitation bubble, anyway.We suppose this is laser attenuator characteristic.
Laser beam energy [mJ]
Attenuator position, Full Flash lamp According to Kennedy [16], plasma temperature shows asymptotic dependence on laser pulse energy as well.With higher laser energy we recognized negative influence of impurities and presence of segmentations on the bubble surface.
Optical setup
Here we used two kind of setup for LIB: the direct way with 50 mm Plano-Convex Lens and 30 mm Plano-Convex Lens.The direct optical setup enables easy handling and replacement of single optical component.
Here we work with the lowest possible number of optical components, so we expecting lowest light intensity lost in system.The setup and used optical components is seen in Fig. 3.
Visualization setup
We used here shadowgraphy setup for the bubble visualization.This setup consists of continuous LED matrix daylight lamp Veritas, MiniConstellation 120 -5000 K of illuminance 92 klux in 0.5 m, set with optical diffuse filter.Opposite to the light source was placed high speed CMOS camera SpeedSense.This camera is working on frequency 60 kHz with resolution of (128 x 128) px or lower frequency with higher resolution up to (1280 x 800) px, and the dynamic range 12 bit.The camera exposure time was 1µs.The sub-pixel resolution was 20µm.The camera was mounted with optical lens system INFINIPROBE™ TS-160 universal macro/micro imaging system that enables 4x, and 16x magnification.The camera was mounted with edge pass and long pass filter cut-on wavelength 550 nm low pass optical filter to reduce the backward laser flashes to the camera and also to eliminate the flash generated while plasmatic breakdown.
We also used a light reflecting setup for visualization of the outer bubble structures.We placed the light under 45° from the camera direction.
Absorbance
The Lambert-Beer law expresses the relationship between the absorbance and the concentration of the investigated substance.
where A is the absorbance, c molar concentration of the test sample (mol/l), 1 cell thickness (cm) and ε molar absorption coefficient (1.mol -1 .cm - ).The molecular absorption coefficient is a constant for a given substance at a given wavelength [13].
Results
There is generated the plasma in the spot due the concentration of the laser energy.The plasma is visible due the emission.The temperature in liquid increase to 10 3 K, and pressure to 10 3 bar in the spot volume.This leads to plasma expansion at supersonic velocities, producing an acoustic shock wave following with cavitation bubble effect.It is important to mark for the following figures 5 -7 that the laser light comes from the up side on the pictures.
There have been discovered that at the same energy of the laser the bubble has a different lifetime depending on the type of liquid.Here we expected the bubble generated in liquid with the lower absorption coefficient to have the longest lifetime.Figure 5 shows the development of the bubble in Glycerine and Ethyl Alcohol, where the size of the bubble is identical but its lifetime is different.Figure 6 shows the development of the bubble in Ethylene Glycol and water, where the size of the bubble and its lifetime is different.
It can be seen that the input laser beam differs in Figures 5 and 6 at time t 0 at the same laser energy.Figure 7 shows the bubble generated in Ethylene Glycol at double the energy of the laser and an interesting chain effect of bubbles orientation is seen.The plasma is avalanched and formed to almost line geometry with increasing energy from the source.This corresponds to temporal and spatial plasma evolution.[6] The energy in the spot is not distributed continuously that is also shown in bubble formation -from the small to the bigger one.
The parts of bubbles are joined together.The part of small bubbles perishes independently on the rest.The multiple bubbles develop also in reverse focusing setup, but more compact.
Time
Ethylene Glycol Water They are further joined into big bubble that behaves as single one.As the bubbles were distributed at the beginning of cavitation process it is seen separated during the phase of implosion.Anyway the bubbles joined into single one, they are segregated again.In Figure 7, a primary show micro-jet creation through collapsing cavitation bubbles.It was well caught a specifically at time t 8 -t 14 .
The bubble has been generated with the same energy of the laser 5,6mJ, where depending on the type of liquid was measured different gas volumes of bubbles.The largest bubble was formed in Ethylene Glycol as seen in Figure 9.It is further shown in Figure 9 that the bubble formed in Ethyl Alcohol had the collapse of the bubble most rapidly to 90μs.The higher the viscosity of the liquid, the slower the bubble collapsed.Furthermore, in a liquid with higher absorbance was formed a bubble with a smaller volume of gas.
Here we managed to acquire a technique and procedure for evaluating bubble behavior over time that can be quantified and compared.This technique and procedure is based on image analysis.This first step is important for further investigation, but requires further optimization, such as approximating the bubble in a suitable shape.
Fig. 1 .
Fig. 1.The graph of maximal bubble diameter on laser energy.
Fig. 2 .
Fig. 2. The calibration curve of output energy on laser attenuator; Position 1 was at the outcome of the laser head,
Fig. 3 .
Fig. 3.The optical setup of LIB technique for generating cavitation bubble.
Fig. 4 .
Fig. 4. The experimental setup of the laser, and the visualization system.
Fig. 5 .
Fig. 5. Comparison of LIB generated bubble in Glycerine and Ethyl Alcohol for laser input energy 5,6mJ.
Fig. 6 .
Fig. 6.Comparison of LIB generated bubble in Ethylene Glycol and water for laser input energy 5,6mJ. | 3,229.8 | 2018-06-01T00:00:00.000 | [
"Physics"
] |
THE EFFECT OF HEAT TREATMENT AND PRESSING AT 400 °C WITH COCONUT SHELL CHARCOAL MEDIA ON THE HARDNESS, MICROSTRUCTURE, AND DENSITY OF AL-SI ALLOYS
This study aims to determine the effect of the heat treatment process and pressing on hardness, density, and morphological changes in the microstructure of Al-Si alloys. In this study, the medium used was coconut shell charcoal with a mesh size of 80 at a heat treatment process of 400 °C and a holding time of 75 minutes. The pressing was carried out with a load of 150 N. The result of this research is an increase in the hardness of the Al-Si alloy with an average value of 133 VHN after the Heat Treatment and pressing process. In the microstructure, there is a morphological change in the Al-Si alloy with the reduction of Si elements and also an increase in the density value after the heat treatment process.
INTRODUCTION
Currently, aluminum is a non-ferrous metal that is most widely used in many branches of industry because of its superior and superior properties, especially Aluminum-silicon (Al-Si) alloys. In its development, Aluminum-silicon is widely used as a material for making a product because it has excellent castability, good weldability, good thermal conductivity, high strength at high temperatures, and excellent corrosion resistance [1][2][3][4][5][6].
The strength of aluminum alloys can be increased through a process, one method that can be taken to increase the strength of a metal is through a heat treatment process. Therefore, it is necessary to conduct research on aluminum as a result of the sand casting method with the addition of the carburizing process and the pressing process before the artificial aging process [7][8][9][10][11][12]. So that the results of this research can be used by the industry as a consideration in the selection of casting methods and with the aim of increasing the economic value of the product.
Supriyono [13] conducted a study on the effect of pack carburizing using charcoal on the properties of mild steel. These properties are represented by the results of the microstructure, hardness test, and tensile test. The carburizing process is carried out at 930 °C which is the austenitic temperature of mild steel. The carbon source is charcoal. The specimens were held for 2, 3, and 4 hours at carburizing temperature. The carbon content of the raw material is 0.17%. The raw material is hypo eutectoid steel with a microstructure of ferrite and pearlite phases. After the carburizing process, the microstructure can be divided into two zones. Case zone and core zone. The case zone consists of hypereutectoid, eutectoid, and hypoeutectoid sub-zones. The core zone is the same as the raw material. The longer the holding time, the deeper the case zone and the stronger the material.
Gubicza et al. [14] processed the ultrafine-grained (UFG) Al-4.8%Zn-1.2%Mg-0.14%Zr alloy by high-pressure torsion (HPT) technique and then aged at 120 and 170 °C for 2 hours. These microstructural changes due to artificial aging were studied by X-ray diffraction and transmission electron microscopy. It was found that the HPT-processed alloy had a small grain size of about 200 nm and a high dislocation density of about 891014 m -2 . The majority of the deposits after HPT are in the Guinier-Preston (GP) zone with a size of 2 nm, and only a few large particles are formed at the grain boundaries. Annealing at temperatures of 120 and 170 C for 2 hours resulted in the formation of stable MgZn2 deposits from some of the GP zones. It was found that for higher temperatures the MgZn2 phase fraction was larger and the dislocation density in the Al matrix was lower. Changes in precipitates (precipitation reactions) and density changes in shape due to aging correlate with the evolution of hardness. It was found that most of the reduction in hardness during aging was due to crushing deformation and some grain growth at 170 °C. The effects of aging on the microstructure and hardness of the HPT-processed specimens were compared with those observed for UFG samples processed with the same channel angle pressure. It was revealed that in the HPT samples, fewer secondary phase particles were formed at the grain boundaries, and a higher amount of precipitate in the interior of the grains resulted in higher hardness even after aging.
Tensile tests on smooth and notched cylindrical samples were used by Westermann et al. [15] to investigate the work-hardening and ductility of an artificially aged AA6060 aluminum alloy. The alloy was tested following three processing steps, each of which was followed by artificial aging. Casting and homogenization, extrusion, cold rolling, and heat treatment were used to achieve a recrystallized grain structure. Following each of these processing steps, the material was tested in underaged, peak aged, and overaged conditions. A laser-based measurement system was used to determine the true stress-strain curve to failure. To estimate the equivalent stress-strain curves, the Bridgman correction was used, and the work-hardening behavior was examined using an extended Voce approach. Fractography was used to investigate the failure mechanisms of materials exposed to various processing steps and temper treatments. Finite element simulations using the Gurson model were used to evaluate the use of the Bridgman correction and to investigate the notch strengthening effect observed experimentally. The experimental study shows the effects of thermomechanical processing and artificial aging on the alloy's stress-strain behavior and tensile failure strain.
Friction stir welding (FSW) of heat treatable Al alloy causes thermal cycle degradation of strength properties in the as-weld condition. As a result, post-weld heat treatment is used to restore the lost joint properties. The effect of artificial aging on microstructure characteristics was scientifically investigated by Joseph et al. [16] in this study. A microscope was used to examine the microstructural features. The grain size was found to be related to the strength and hardness properties. Due to the formation of equiaxed grains and fine precipitates, the aged joint exhibited higher lap shear strength than the weld joint.
Hardness is one of the important mechanical properties [17][18][19][20]. Hardness can be increased by artificial aging and carburizing heat treatment processes. Therefore, this research was conducted with the aim of increasing the hardness of Al-Si alloys.
MATERIALS AND METHODS
This research used materials: Al-Si alloy, coconut shell charcoal powder, and sodium carbonate (NaCO3). Then raw material was cast. After that, Al-Si alloy with coconut shell charcoal powder was heated to a temperature of 400 °C and held for 75 minutes. Furthermore, the furnace is turned off, and pressing with a load of 150 N is given until the temperature drops to 170 °C. After that, artificial aging is carried out at a temperature of 200 °C (Figure 1).
RESULTS AND DISCUSSIONS
From the photo of the microstructure in Figure 2, it can be explained that the element formed was an Al-Si alloy with a silicon content of 13.14% which was hypereutectic. In the picture, the Al element is light gray while the Si element is dark gray. The hypereutectic Si matrix forms small, thin, short, and dense flakes. In Figure 3, after the casting process, micro-photos are obtained that were different from the raw material. In the photo of the microstructure after casting, the shape of the microstructure is not homogeneous, this is because the density level decreases differently from the raw material. The specimen also contains hyperuetectic elements which are slightly elongated, uneven, and slightly thickened. In Figure 4, the photo analysis of the microstructure test on the surface of the Al-Si specimen after the heat treatment process obtained photos with the bright part which is the phase, namely the Al element, while the dark part is the phase, namely the Si element. It can also be noted that after the heat treatment and pressing process, there was a grouping of elements and some elements experienced a decrease in value, one of which was Si which after casting had a content value of 13.08% to 12.83%. This can be caused by the heat treatment process and the emphasis changes the density of the specimen and there is grouping of elements and the distance between elements is not tight. For specimens after casting, the solidification process took a long time, causing the formed phase to be not very clear. From Figure 6. It can be explained that the raw material specimen has a density value of 2.80 gr/cm 3 with a mass of 84 grams and a raw material volume of 30 cm 3 . After casting, there was a decrease in the density value of 2,598 gr/cm 3 with a mass of 42.35 gr and a specimen volume of 16.30 cm 3 . After the carburizing process and pressing, the density value increased slightly by 2,599 gr/cm 3 with a mass of 42.36 gr and a specimen volume of 16.30 cm 3 .
Then, it can also be explained that the highest value is when the specimen is still a raw material with a value of 2.80 gr/cm 3 . This is because the piston forming process uses the metal forming method, which through this method can increase the density value. After the casting process, the density value decreases until it reaches a value of 2,598 gr/cm 3 . This is caused by the porosity of the specimen which causes a decrease in the density value. After the carburizing and pressing process, the density value increased, although not significantly, with a value of 2,599 gr/cm 3 . This can occur due to the process of adding carbon and also due to the process of pressing the specimen during heat treatment.
Figure 6. Density test results.
Based on metallographic testing with Scanning Electron Microscope (SEM) and Energy X-Ray Spectroscopy (EDS), photos were obtained which showed that there was an element of carbon attached to the surface of the specimen (Figure 7.). At spectrum 1 of the EDS scan, it was found that there was element C with a concentration of 24.7 wt% and Al with a concentration of 67.1 wt%. Then, at spectrum 2 of the EDS scan, it was found element C with a concentration of 51.3 wt%, Al with a concentration of 12.4 wt%, and Si with a concentration of 1.7 wt%. The presence of carbon elements on the surface of the Al-Si specimen after the heat treatment and pressing process indicates that the heat treatment and suppression processes affect the addition of elements contained in the specimen.
CONCLUSION
From the analysis of research that has been carried out and the test results have been obtained from each material, it can be concluded that the results of testing the hardness of the specimen from the raw material, after casting, and after the Heat Treatment process the values obtained are as follows, the raw material of Al-Si castings has a hardness value. an average of 121.1 VHN, after casting the average hardness value is 76.32 VHN, and after the Heat Treatment and pressing process has an average hardness value of 133 VHN, which was the highest hardness value compared to raw material specimens and specimens after casting. In density testing, Heat Treatment has an influence on the density value. The density value of the raw material is 2.80 gr/cm 3 , which is the highest value. After casting, the density value decreased to 2,598 gr/cm 3 and after going through the Heat Treatment process it increased although not significantly, which was 2,599 gr/cm 3 . | 2,654.4 | 2022-07-31T00:00:00.000 | [
"Materials Science"
] |
Selected Scattering on Quasi-Ordered Hexagonal Close-Packed Al Nanodents for Tunable Output of White LEDs
: Quasi-ordered hexagonal close-packed Al nanodents, with depths of 30 nm and top-diameters of 300 nm prepared by electrochemical anodizing, are used to manage the output spectrum of white Light Emitting Diodes (LEDs). Significant short wavelength light, with a peak of 450 nm, displays significant scattering enhancements on these Al nanodents with the increment of the angle of the incidence, while long wavelength light, with a peak of 550 nm, shows weaker scattering on Al nanodents with the increment of theincidence angle. Near-field and far-field simulations reveal the e ff ect of light coupling in the holes of Al nanodents on the selected scattering. This work could provide a striking new way to make use of cheap white LEDs.
Introduction
The invention of high efficiency white Light Emitting Diodes (LEDs) has created a new chapter for the use of energy and lighting [1][2][3]. As the white LEDs have been widely used in interior lighting, the fact that short wavelength light causes damage to eyes due to the wave crest in the blue band of the LED spectrum, especially the cheap ones, has drawn great attention [4][5][6][7]. The common resolution of the wave crest in the blue band of the LED spectrum is through the control of the red, green and blue (RGB) or the use of fluorescent substances to absorb the excessive blue light [8][9][10][11]. However, the former way needs expensive electronic devices to achieve the matching of each component of the RGB; the latter method has the output of the white LED fixed, which limits their application ranges.
Recently, the enhanced selected light transmission in the visible spectrum of metal nanostructures such as Ag, Au, Al, and Cu has been designed and widely used in the antireflection layer of solar cells, the transmission-increasing layer in LEDs, and even in the highest possible resolution color filters [12][13][14][15]. Furthermore, a cheap way of fabricating large-scale quasi-ordered hexagonal close-packed Al nanodents on Al foil using electrochemical etching has been reported [16,17]. These Al nanodents show optical characteristics similar to those of sub-wavelength grating [18], exhibiting strong near-field enhancement and selected light scattering properties. As a result, the application of Al nanodents has become a new choice for light filters in LED sources.
Herein, quasi-ordered hexagonal close-packed Al nanodents were prepared and used to manage the output spectrum of white LEDs. The selected scattering enhancement of short wavelength light with a peak of 450 nm was found on Al nanodents, with the increment of the incidence angle ranging from 30 • to 70 • . However, long wavelength light with a peak of 550 nm was found to show weaker scattering under the same conditions. Meanwhile, slight peak shifts in the output of long wavelength light were observed. The reason why Al nanodents exhibit selected scattering characteristics was clarified by finite difference time domain (FDTD) simulations.
Materials and Methods
Following the anodic oxidation process reported [19,20], Al nanodents were prepared by electrochemically anodizing Al foil in a mixture of 0.3 M aqueous solution of oxalic acid and ethanol with a ratio of 1:3 for 3 h at 0 • C. Anodic aluminum oxides were etched out in a mixture of phosphoric acid (6 wt%) and chromic acid (1.8 wt%) at 60 • C for 12 h. Quasi-ordered hexagonal close-packed nanodents were obtained on the Al foils. The thickness of the Al foil was about 0.5 mm, making it completely optically opaque. To show the universality of the selected scattering of Al nanodents, a cheap white LED composed of a blue LED @ with yellow phosphor was used as the lighting source.
Simulations based on FDTD of the near-field light enhancement and the far-field scattering on Al nanodents were performed [21]. In our model, the electromagnetic pulse fixed at 450 and 550 nm for the incidence light (from 0 • to 60 • ) was launched into a box containing the target Al nanodents to simulate a propagating plane wave interacting with the nanostructure. The overlap region of the gold tip was divided into 5 nm meshes. The refractive index of the surrounding medium was taken to be 1.0. The Al nanodents were modeled at 30 nm depth with 300 nm top diameters according to the atomic force microscope (AFM) measurements, the far-field scattering of Al nanodents was prepared, and the scattering light was collected on a hemispherical surface with a diameter of 1 m.
The morphology of Al nanodents was characterized with a field emission scanning electron microscope (SEM, Hitachi UHR FE-SEM SU8010). The depth of the Al nanodents were characterized with an AFM (Agilent 5500) and the far-field scattering light was characterized with an optical fiber spectrometer (Ocean USB 4000).
Results and Discussion
As shown in Figure 1a, a white LED source was used to illuminate Al nanodents from different angles in the range of 0 • to 70 • . The divergence angles of LED sources ranged from 13 • to 15 • . Through the lens collimation, the divergence angle was reduced to less than 5 • in our case. The far-field light output of the LED was collected and characterized by an optical fiber spectrometer, keeping a distance of 2 cm from the sample, as shown in Figure 1a. Similar operations were used to measure the scattering spectra in the opposite direction. The source spectrum is presented in the inset in Figure 1a and shows two typical peaks around 450 and 550 nm. As shown in Figure 1b, Al nanodents exhibit blue scattering across the whole foil under vertical illumination of the white LED source. The morphology of Al nanodents was obtained with SEM and AFM, as shown in Figure 1c,d. The average size of these ordered hexagonal close-packed nanodents was about 300 nm, whereas the depth of the nanodents was relatively shallow, only about 30 nm.
As shown in Figure 2b, strong blue light scattering was found on the surface of the Al nanodents illuminated with white LED. The blue light scattering became stronger with the increment of the incidence angle from 0 • to 60 • . A simple schematic diagram was used to illustrate the position of the photos taken, as shown in Figure 2a. Obviously, light within the short wavelength range (blue) was first separated when the viewpoint ranged from 0 • to 60 • . At large angles (>70 • ), weak light with longer wavelengths (green and yellow) were separated. However, the scattering intensity of light with longer wavelengths was very weak, therefore the scattering of Al nanodents was mainly localized in the short wavelength range. As shown in Figure 2b, strong blue light scattering was found on the surface of the Al nanodents illuminated with white LED. The blue light scattering became stronger with the increment of the incidence angle from 0° to 60°. A simple schematic diagram was used to illustrate the position of the photos taken, as shown in Figure 2a. Obviously, light within the short wavelength range (blue) was first separated when the viewpoint ranged from 0° to 60°. At large angles (>70°), weak light with longer wavelengths (green and yellow) were separated. However, the scattering intensity of light with longer wavelengths was very weak, therefore the scattering of Al nanodents was mainly localized in the short wavelength range. To further understand the selected scattering of Al nanodents, the light scattering spectra and the output spectrum of white LED were characterized simultaneously. The light peak around 450 As shown in Figure 2b, strong blue light scattering was found on the surface of the Al nanodents illuminated with white LED. The blue light scattering became stronger with the increment of the incidence angle from 0° to 60°. A simple schematic diagram was used to illustrate the position of the photos taken, as shown in Figure 2a. Obviously, light within the short wavelength range (blue) was first separated when the viewpoint ranged from 0° to 60°. At large angles (>70°), weak light with longer wavelengths (green and yellow) were separated. However, the scattering intensity of light with longer wavelengths was very weak, therefore the scattering of Al nanodents was mainly localized in the short wavelength range. To further understand the selected scattering of Al nanodents, the light scattering spectra and the output spectrum of white LED were characterized simultaneously. The light peak around 450 To further understand the selected scattering of Al nanodents, the light scattering spectra and the output spectrum of white LED were characterized simultaneously. The light peak around 450 nm in Figure 3a was found to grow higher as the incidence angle increased from 30 • to 70 • . In contrast to the light peak around 450 nm, the light peak around 550 nm decreased with the increment of the incidence angle, which confirms the selected scattering in short wavelength ranges on Al nanodents. The spectrum of outputs also proves such selected scattering, as shown in Figure 3b. Significant reduction of the light intensity in the short wavelength range was clearly observed as the incidence angle increased from 30 • to 70 • . In order to distinguish the peak in 550 nm, we added 0% to 20% (interval 5%) to the value of the intensity of light, with the incidence angle ranging from 30 • to 70 • in Figure 3b. With the increment of the incidence angle, a few intensity changes were observed at Appl. Sci. 2019, 9, 3626 4 of 6 550 nm and a slight red shifts was observed to occur at the peak of 550 nm. The reason for the red shifts was that at very large incidence angles, the scattering of light with longer wavelengths is enhanced, as shown in Figure 2b. Meanwhile, selective light scattering did result in a reduction of light intensity by about 5-10% in the whole spectrum, which was mainly caused by the back scattering of light within short wavelengths. Through further optical design, combined with the selective coating of specific areas of down-conversion dyes, it is believed that the scattered light with short wavelengths will also be fully utilized. As a result, the ratio of the light component can be simply controlled by changing the angle of incidence. Considering their selected light scattering properties, Al nanodents can be used as simple light filters for cheap white LEDs to reduce the damage caused by blue light. increment of the incidence angle, which confirms the selected scattering in short wavelength ranges on Al nanodents. The spectrum of outputs also proves such selected scattering, as shown in Figure 3b. Significant reduction of the light intensity in the short wavelength range was clearly observed as the incidence angle increased from 30° to 70°. In order to distinguish the peak in 550 nm, we added 0% to 20% (interval 5%) to the value of the intensity of light, with the incidence angle ranging from 30° to 70° in Figure 3b. With the increment of the incidence angle, a few intensity changes were observed at 550 nm and a slight red shifts was observed to occur at the peak of 550 nm. The reason for the red shifts was that at very large incidence angles, the scattering of light with longer wavelengths is enhanced, as shown in Figure 2b. Meanwhile, selective light scattering did result in a reduction of light intensity by about 5-10% in the whole spectrum, which was mainly caused by the back scattering of light within short wavelengths. Through further optical design, combined with the selective coating of specific areas of down-conversion dyes, it is believed that the scattered light with short wavelengths will also be fully utilized. As a result, the ratio of the light component can be simply controlled by changing the angle of incidence. Considering their selected light scattering properties, Al nanodents can be used as simple light filters for cheap white LEDs to reduce the damage caused by blue light. A far-field model based on FDTD simulation was used on quasi-ordered hexagonal close-packed Al nanodents to prove such angle-dependent selected light scattering, as shown in Figure 4. In this model, the scattering light was collected on a hemispherical surface (diameter of 1 m) with periodic boundary conditions. To better understand the light scattering direction, the direction of the incidence was defined along the Y axis, as shown in Figure 4. The scattered light distribution of the incidence was near the center of the hemispherical surface, presented by polar coordinates. As shown in Figure 4a-c, the scattering of light at a 450 nm wavelength greatly increased with the increment of the incidence angle. The scattering of light was mainly localized along the incident direction, which was along the Y axis. In contrast to light at a 450 nm wavelength, the scattering of light at 550 nm was relatively low, although it grew slightly with the increment of the incidence angle, as shown in Figure 4d-f. Obviously, the far-field simulations coincide well with our experiment observations. A far-field model based on FDTD simulation was used on quasi-ordered hexagonal close-packed Al nanodents to prove such angle-dependent selected light scattering, as shown in Figure 4. In this model, the scattering light was collected on a hemispherical surface (diameter of 1 m) with periodic boundary conditions. To better understand the light scattering direction, the direction of the incidence was defined along the Y axis, as shown in Figure 4. The scattered light distribution of the incidence was near the center of the hemispherical surface, presented by polar coordinates. As shown in Figure 4a-c, the scattering of light at a 450 nm wavelength greatly increased with the increment of the incidence angle. The scattering of light was mainly localized along the incident direction, which was along the Y axis. In contrast to light at a 450 nm wavelength, the scattering of light at 550 nm was relatively low, although it grew slightly with the increment of the incidence angle, as shown in Figure 4d Near-field simulation was also carried out to better understand the selected scattering on Al nanodents. As shown in Figure 5, two light sources were chosen according to the two peaks of the white LED with 450 and 550 nm wavelengths. As shown in Figure 5a-c, no obvious near-field enhancement was found in Al nanodents under vertical incidence (angle of 0°) at both wavelengths of 450 and 550 nm and light was mainly localized in the holes of Al nanodents. Overall, the main Near-field simulation was also carried out to better understand the selected scattering on Al nanodents. As shown in Figure 5, two light sources were chosen according to the two peaks of the white LED with 450 and 550 nm wavelengths. As shown in Figure 5a-c, no obvious near-field enhancement was found in Al nanodents under vertical incidence (angle of 0 • ) at both wavelengths of 450 and 550 nm and light was mainly localized in the holes of Al nanodents. Overall, the main difference of the two kinds of light lies in their penetration depth. As shown in Figure 5a-f, the penetration depth of light at the same wavelength increased with the increment of its incidence angle, and the penetration depth of light at a 450 nm wavelength was greater than that at a 550 nm wavelength with the same incidence angle. The selected light scattering of light can be attributed to the interaction between light and holes on the Al nanodents, and only provides efficient light coupling for lights with short wavelengths. Figure 4. (a-c) Far-field simulations on Al nanodents with 0°, 15°, and 30° incidence angles at a 450 nm wavelength. (d-f) Far-field simulations on Al nanodents with 0°, 15°, and 30° incidence angles at a 550 nm wavelength.
Near-field simulation was also carried out to better understand the selected scattering on Al nanodents. As shown in Figure 5, two light sources were chosen according to the two peaks of the white LED with 450 and 550 nm wavelengths. As shown in Figure 5a-c, no obvious near-field enhancement was found in Al nanodents under vertical incidence (angle of 0°) at both wavelengths of 450 and 550 nm and light was mainly localized in the holes of Al nanodents. Overall, the main difference of the two kinds of light lies in their penetration depth. As shown in Figure 5a-f, the penetration depth of light at the same wavelength increased with the increment of its incidence angle, and the penetration depth of light at a 450 nm wavelength was greater than that at a 550 nm wavelength with the same incidence angle. The selected light scattering of light can be attributed to the interaction between light and holes on the Al nanodents, and only provides efficient light coupling for lights with short wavelengths.
Conclusions
In summary, quasi-ordered hexagonal close-packed Al nanodents have been used to manage the output spectrum of white LEDs. The light scattering on Al nanodents showed high angle-and wavelength-dependence. Through the control of the incidence angle, the intensity of light in white LEDs can be easily tuned. Far-field simulations confirmed the experimental observations. The near-field simulation revealed the relationship between the scattering and the penetration depths of
Conclusions
In summary, quasi-ordered hexagonal close-packed Al nanodents have been used to manage the output spectrum of white LEDs. The light scattering on Al nanodents showed high angle-and wavelength-dependence. Through the control of the incidence angle, the intensity of light in white LEDs can be easily tuned. Far-field simulations confirmed the experimental observations. The near-field simulation revealed the relationship between the scattering and the penetration depths of light at different wavelengths on Al nanodents. This work could benefit the application of cheap white LEDs in indoor lighting in less-developed regions. | 4,130 | 2019-09-03T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Design and Implementation of Laser Rangefinderfor Obstacle Height Monitoring on Line of Sight of Microwave Communication Link
A laser rangefinder is a device that uses laser light to determine the distance of an object. The working principle of this laser rangefinder is that the laser beam that is emitted to the object will be reflected back to the rangefinder, and the required propagation time of laser beam will be calculated to get the distance value. In this study, a laser rangefinder system was designed to be used as monitoring of obstacle height between the line of sight of a microwave communication link using a quadcopter, which is used as a top-viewer picture taker. The quadcopter was used as a device to carry the laser rangefinder to measure the obstacle height between the location points of near-end and farend. The obstacle height reading results were transmitted using a 5.8 GHz wireless transceiver to the monitoring location in real time. The data received were then processed to be displayed in the form of obstacle height graph as a function of the line of sight communication distance. The test results show that the implementation of laser rangefinder technology has an accuracy of more than 90%.
INTRODUCTION (
The standard operating procedure for a microwave radio transmission survey is by manually determining the location point of nearend and far-end coordinates, then proceed with scanning the communication path to measure the height of the object, which could be an obstacle to the communication system.The obstacle height measurement data are analyzed using the Pathloss software to determine the antenna height position at both near-end and far-end locations so that the communication pathline of sight is free of obstacles [1].
The survey of the microwave transmission system cannot be carried out if the communication trajectory traverses an area that cannot be reached by the transportation system so that the determination of the antenna height can only be done based on the estimated height of the object.This estimation method is susceptible to errors so that a recommendation for the incorrect device installation is often generated.
Thisstudy aimed to design and implement a laser rangefinder device for high-object monitoring systems on a microwave line of sight communication using a quadcopter that Correspondence address: Hudiono Email<EMAIL_ADDRESS>: Jl.Soekarno Hatta No.09, Jatimulyo, Kec.
Lowokwaru, Kota Malang, Jawa Timur 65141 movedbetween two candidate points of the antenna location (between the communication lines).The quadcopter was equipped with a Laser Rangefinder as an object height sensor below the communication path.The height measurement results of objects which were suspected as candidate obstacle were sent through a 5.8 GHz wireless transceiver to the monitoring location using a computer.The data was then processed and displayed in real time in the form of a graph.
The automatically received obstacle height data were also stored on the computer.With the help of the Pathloss software, the data were analyzed directly as a basis for determining the antenna height at both near-end and far-end locations so that the line of sight communication path was free from obstacles.
METHOD
The steps in designing the obstacle height monitoring system in microwave communication path were: 1. Collecting data on microwave radio transmission survey requirements related to determining the antenna height of the transmitterand the receiver and the height of an obstacle that was expected to interfere the line of sight communication system as a reference for the system design.2. Collecting data on required specification and features related to measuring the height of the object on the communication path, the quadcopterfor carrying theobject height sensor, and the remote transmission system of the measured object height data to the monitoring location.used for data transmission between BTS in a distance ranges from 1 to 1.5 km as the long distance communication system has used the optical fiber system nowadays.
The secondary data affected the selection of components/devices used in the obstacle height monitoring system as follows.
Height sensors
The height sensor must be light, small, and can reach the object size in the 100 m high range (AGL).This study used the Rangefinder type sensors with the specifications [4][5],as described in Table 1.
Data Transmission Devices
The resulting obstacle height data were transmitted through a wireless transceiver, which were then received at the monitoring location to be processed and displayed in real time.This study used 5.8 GHz wireless transmission in the type of AV Transceiver TS 832 and RC 832, with a minimum power of 600 mWatt and a distance range of 5 km [6].
FSKModem
The digital data generated by the height sensor were transmitted through a wireless transceiver that requires analog input.Therefore, a digital to analog data converter was used, which was an FSK modem with the type of TCM 3105 NE [7].
Height Monitoring System Design
After selecting the main components and devices forming the object monitoring system as a candidate obstacle, then the system was designedas the block diagram in Figure 3.
Obstacle Height Measuring System
The laser rangefinder height sensor was controlled by a microcontroller (Arduino Uno R3), then the generated height data were fed to the FSK modem to be transmitted to the monitoring location via the wireless AV Transceiver.
Obstacle Height Data Receiver System
The system in the monitoring location consists of an RC-832 wireless receiver, FSK modem, and computer to display height in real time in the form of graphs.
RESULTS AND DISCUSSION
The result of this study was an artificial model of an obstacleheight monitoring system on the line of sight of a microwave communication link using a quadcopter[8], as shown in Figure 6.
Figure 6. Obstacle Height Monitoring System
Obstacle height measurement is done using a laser rangefinder, was an electronic distance meter measuring instrument that is very well known for distance/ height measurement [6].A laser rangefinder can also be used for robot motion sensors.This sensor is very precise [10].So that this sensor was chosen and was very suitable to be used for obstacle height measurements in this study, although it has some disadvantages [9], were: a.The laser rangefinder was Planar, which means that objects that were parallel to this sensor cannot be detected.b.The Laser rangefinder cannot detect objects of a transparent type of material (for example; from glass materials).c.Measurements on dark and very far objects have lower accuracy.
Apart from laserfinder deficiencies, the laser rangefinder was a high measurement sensor that was very efficient and accurate.
Calibration of Height Sensors
To ensure the implementation of the rangefinder as planned, the sensors were calibrated.The laser rangefinder calibration results for a constant distance of 220 meters, as shown in Figure 7.
Figure7.Laser Rangefinder Calibration results
The calibration result shows that in 260 measurements of an object located in a distance of 220 m, the average difference between the measured and the real distancewas 3.5 m.This means that the error of the laser rangefinder device was 1.59% or it had an accuracy of 98%.
Testing of Obstacle Height Monitoring System
To ensure the performance of the height obstacle monitoring system on the communication path between the near and far end, the system was tested with a planned track pathon the google map, as displayed in Figure 8.When the laser rangefinder sensor hit leaves or a tree, the measurement results were not stable because some laser beamshit the object underneath or through the leaves.The highest obstacle between communication links was 10.5 meters in the far-end location.
System testing data results were then used as a basis for determining the antenna height so that the communication link was free of obstacles.Using the Pathloss-4 software, the antenna height was determined [1], as shown in Figure 9.
Figure 1 .
Figure 1.The obstacle height monitoring system
Figure 2 .
Figure 2. Results of determination of antenna height analysis
3 .
Designing and developing the obstacle height monitoring system.4. Calibrating the instrument of object/obstacle height measurement by Laser Rangefinder.5. Testing the developed system by measuring the height of an object/obstacle in the planned communication path and displayed the result in graphs in real time.The study result was a tool for surveying the line of sight microwave radio transmission, which is mainly used to determine the height of an obstacle in the communication path.The obstacle height then used as a reference to determine the antenna height.Therefore, the secondary data were based on the survey requirement of the obstacle height monitoring system reference as follows [2][3].1.The transmitter and receiver antenna in the radio communication system should be in a line of sight, with the height higher than the obstacle but lower than 40 m (AGL) in a suburban, urban, and dense urban areas or 70 m (AGL) in a rural area.2. The conducted radio transmission survey is
Figure 3 .
Figure 3. Block diagram of obstacle height monitoring system
Figure 4 .
Figure 4. Obstacle height sensor control circuit
Figure 5 .
Figure 5. Obstacle height data receiver system
Figure8.
Figure8.The testing path of the rangefinderThe results of the obstacle height monitoring system are displayed in Figure9.
Figure 9 .
Figure 9. Graph of obstacle height measurement results
Table 1 .
Specification of rangefinder object height sensor | 2,112.6 | 2018-07-23T00:00:00.000 | [
"Computer Science"
] |
Ultrastructure of primary pacemaking cells in rabbit sino‐atrial node cells indicates limited sarcoplasmic reticulum content
Abstract The main mammalian heart pacemakers are spindle‐shaped cells compressed into tangles within protective layers of collagen in the sino‐atrial node (SAN). Two cell types, “dark” and “light,” differ on their high or low content of intermediate filaments, but share scarcity of myofibrils and a high content of glycogen. Sarcoplasmic reticulum (SR) is scarce. The free SR (fSR) occupies 0.04% of the cell volume within ~0.4 µm wide peripheral band. The junctional SR (jSR), constituting peripheral couplings (PCs), occupies 0.03% of the cell volume. Total fSR + jSR volume is 0.07% of cell volume, lower than the SR content of ventricular myocytes. The average distance between PCs is 7.6 µm along the periphery. On the average, 30% of the SAN cells surfaces is in close proximity to others. Identifiable gap junctions are extremely rare, but small sites of close membrane‐to‐membrane contacts are observed. Possibly communication occurs via these very small sites of contact if conducting channels (connexons) are located within them. There is no obvious anatomical detail that might support ephaptic coupling. These observations have implications for understanding of SAN cell physiology, and require incorporation into biophysically detailed models of SAN cell behavior that currently do not include such features.
| INTRODUCTION
The pacemaking rhythm that controls the overall beating rate of the heart in health originates in the sino-atrial node (SAN). The SAN is comprised of an anatomically and functionally heterogeneous collection of cells all capable of spontaneously and rhythmically generating action potentials, demonstrating the key property of automaticity. 1,2 Among these, the most highly specialized SAN cells are the leading or dominant pacemakers, those with the fastest rate of diastolic depolarization under a given set of physiological parameters. They are located in the central region of the SAN and are the least anatomically developed cells, with the lowest density of organelles, particularly myofibrils. 3 As one moves away from this central region, there is a transition in the properties of spontaneous action potentials produced by the cells and in | 107 IYER Et al.
their structure, with the addition of myofibrils, an increase in SR, and the presence of internal corbular SR. The significance of the structural transition in functional terms and the question of whether true atrial cells are infiltrated in the pacemaking core of the node have been variably interpreted. On the one hand, the structural transition is mostly described as a gradation of myofibrillar content from center to periphery of node, which has been correlated with variations in electrophysiological parameters, the so-called "gradient model". 3,4 Other investigators 5 find that cells with typical characteristics of atrial myocytes are found interspersed within the inner core of the SAN, and propose that a gradual increase in the density of infiltrating atrial-type cells is at the basis of the transition from nodal to atrial electrical properties, the so-called "mosaic model." Regardless of whether the node contains a gradual local variation of cells or is constituted of a mosaic of mixed cells, the heterogeneity has a meaning in terms of dependable function of the SA node as a pacemaker. 1 Two main (not necessarily mutually exclusive) schools of thought have dominated debate on the origin of the spontaneous pacemaker potential in SAN cells. 6 In the first, functional parameters of plasmalemmal ionic channels are considered fully responsible for the slow depolarization and the derived action potential when threshold is reached. The discovery of the funny current specific to SAN cells 7 and the further characterization of HCN4 (hyperpolarization-activated, cyclic-nucleotide gated four) as the major carrier of the funny current 8 laid a strong foundation for the ionic basis of the intrinsic rhythmicity. An alternate proposal is that rhythmicity is regulated by calcium transients via voltage-gated sarcolemmal Ca 2+ channels, SR calcium stores, and the Na + /Ca 2+ exchanger. 9 This proposes that an exponential increase in NCX current at end-diastole, due to spontaneously propagated local SR calcium release, affects SAN pacemaking frequency. 10 Since the discovery that internal calcium delivery in these cells of small size could drive depolarization (11 see 6 for a review), the magnitude of this effect in driving physiological pacemaking has been hotly debated. 12 The current paradigm suggests that the two mechanisms function in concert, as a coupled clock system that is mutually entrainable, robust, and reliable. 10 The question of how SAN cells communicate with each other and with the atrial myocytes that surround them to ensure regular, reliable conduction of the impulse within the SAN and out of it provides an interesting puzzle. On the one hand, the cells of the major pacemaking core must communicate between themselves and either with the surrounding cells that, in turn, mediate access to the atrial cells or with atrial cells that may have infiltrated the node. 5 On the other hand, the primary pacemaking cells must be protected from retrograde transmission that would overcome their rhythmic signal. How this is achieved is not clear. Immunolabeling experiments (summarized in 13) have been hard to interpret. Labeling for the most abundant connexon in heart (CX43) is mostly negative, 14 but different isoforms may be involved. Verheijck et al 15 show very clear punctate anti-Cx45-positive sites in nodal area of the mouse, and antibodies against CX40 are positive for some cells, but can also be totally negative for relatively large groups of them. Masson-Pevet, using electron microscopy, showed the images of small "classical" gap junctions with a number of connexons forming tight clusters (quoted in Ref. 13, see Ref. 3,16,17), but did not indicate whether these were found in the SAN cells of the inner core. Other researchers have also found such small gap junctions, although quite rarely. 18 Finally, the suggestion was made that very small punctate connections may be the preferred site of intercellular communication by providing for the location of small clusters of conductive connexons. 19 The more recently proposed mechanism of ephaptic coupling has not been explored in the case of the SA node. It will be dealt with in the discussion section. The aim of this investigation is to provide an in-depth ultrastructural description of SAN cells from the central region of the rabbit SAN. The study is restricted to the cells constituting the main pacemaking region and it provides a quantitation of the SR elements that should be taken into consideration in establishing the relative importance of the calcium-driven internal oscillator in driving pacemaker activity. It turns out that the cells have much smaller SR components than previously assumed, certainly when compared to ventricular myocytes, so initial modeling based on data from ventricle may need to be reconsidered for these SAN cells.
| MATERIALS AND METHODS
Sinus nodes were isolated from adult male New Zealand White rabbits in accordance with the National Institutes of Health Guidelines for the Care and Use of Animals (Protocol No. 034-LCS-2019). New Zealand White rabbits (Charles River Laboratories) weighing 1.8-2.5 kg were deeply anesthetized with pentobarbital sodium (50-90 mg/kg). The heart was removed quickly and placed in solution containing the following (in mM): 130 NaCl, 24 NaHCO 3 , 1.2 NaH 2 PO 4 , 1.0 MgCl 2 , 1.8 CaCl 2 , 4.0 KCl, and 5.6 glucose equilibrated with 95% O 2 -5% CO 2 (pH 7.4 at 35.5°C). Excised hearts were initially retrogradely perfused by gravity with heparinized Tyrode solution, followed by 75 mL of 3% glutaraldehyde 0.1M cacodylate buffer pH 7.2. After a short period of time, the right atrium and associated sinus node were dissected out and kept in the fixative for a variable period of time at 4°C, up to several days. The node region was pinned ( Figure 1) and the central partially translucent areas (arrows) where the leading pacemaker site exists under baseline conditions were identified and further dissected out. The tissue was rinsed in cacodylate buffer, and either postfixed in 2% OsO 4 in the same buffer containing 0.6% K 3 Fe(CN) 6 , or postfixed in 2% OsO4 in the same buffer for 1 hour at room temperature, rinsed in H 2 O and en-bloc stained in aqueous saturated uranyl acetate for 1 hour. 20 The tissue was dehydrated in ethanol and acetone and embedded in Epon. Thin (50-60 nm) sections were cut at right angles to the thin layer of the SA node on a Leica Sitte microtome and stained with "Sato" lead citrate. 21 Sections were imaged at 60-80KV either in a Phillips 410 (Mahwah. NJ) or in a JEOL 1010 (JEOL USA) electron microscopes, both equipped with a Hamamatsu camera (Advanced Microscopy Techniques, Chazy, NY). Average quantitative information was obtained through an appropriate morphometric analysis of a number of thin section images as described by Weibel et al 21 Measurements were taken on digitized images using the freely available NIH Image J program.
| SAN architecture and cell identification
The translucent region of the node is composed of layers of dense collagen bands that separate strands of pacemaking cells ( Figure 2). The epicardial and endocardial surfaces of the thin node are easily identified based on details of the epithelium and connective tissue covering them. The cells located in the central portion of the node have been previously identified as the primary pacemakers. 1 Anatomically, these are the least developed cells, containing the lowest density of organelles, particularly myofibrils. The cells are folded up and closely spaced, so each cell has extensive proximity to several other cells. Figure 3A,B shows the outline of a cell that was followed in its entirety within the section. The overall shape is quite similar to that described for isolated cellsthe cell is long and thin, and in this case, it has a bifurcation at one end, as shown by Verheijck et al. 5 In this image, the cell ends in a junction that connects it to the adjacent cell via small actin-based adhering junctions, such as those found in the intercalated discs of the working myocardium (between blue arrows). Close contacts with two different cells are made along the lateral borders (green arrows). Green marks indicate the presence and approximate size of jSR peripheral couplings. Figures S1 and S2 show the closely apposed outlines of two cells reconstructed from serial sections imaged in SEM (see methods). Note that both cells have a quite convoluted shape and face each other over most of their surfaces.
In most thin section images, individual cells appear as short profiles that vary widely in appearance and size because the cells are cut at odd angles relative to their long axis (Figures 4-6). For brevity of description, we use the term "cell" in reference to the randomly sectioned cell profiles, although usually they represent only a small portion of the F I G U R E 1 The SA Node. Dissection of a SAN from rabbit heart previously fixed by perfusion. The sample is pinned, the ligation at the upper left was used for help in the dissection. The magenta dotted line follows the outline of the SAN upper (top) and lower regions; the cyan dots follow the crista terminalis; the yellow arrows point to two of the almost transparent regions that were embedded and sectioned for EM. Atrial tissue is at the left of the SAN F I G U R E 2 Low-power image of a section across the center of a node area such as indicated by arrows in Figure 1. Dense collagen bands (arrows) separate cell-rich bands which are also infiltrated by collagen bundles. Connective tissue plus endothelium (right) and mesothelium (left) cover the two surfaces. No obvious ultrastructural differences were observed between cells on the two sides actual cell. In the literature, "dark" and "pale" cells have been described, based on their density in light microscope images. Higher magnification electron micrographs reveal that the difference is due to the content of intermediate filaments, also known as neurofilaments, in the cytoplasm. Figure 4 illustrates two typical "dark "cells, with cytoplasm completely filled by a dense network of filaments sectioned at varying angles ( Figure 4A,B). Noteworthy details of the cell in Figure 4 are as follows: scarce myofibrils, few mitochondria, several peripheral couplings (between arrows), but extremely few (or none) membrane-limited profiles of free SR. A typical "pale" cell ( Figure 5) is characterized by apparently empty areas of various sizes, interspersed with a scarce content of cytoskeleton, including scanty neurofilaments. Other details are similar to those already described for the dark cell: few myofibrils and mitochondria, peripheral couplings (between arrows), and some adhering Both dark and pale cells are extremely rich in glycogen, as demonstrated after cells are treated with potassium ferrocyanide rather than uranyl acetate to increase the contrast ( Figure 6, see methods). Glycogen-protein "granules" of uniform size 23 accumulate in large clumps, filling the previously apparently empty areas of the pale cells ( Figure 6A) and are dispersed in small groups between the intermediate filaments of the dark cells Figure 6B). So, the apparently empty appearance of the light cells is due to the clustering of glycogen granules into large lumps.
| Quantitative data on SR content and distribution
All cells have a relatively high frequency of peripheral couplings, formed by associations of small flat junctional SR cisternae with the plasmalemma via visible arrays of feet (RyRs) (Figure 7). A count of the frequency of PCs along the sectioned profiles of plasmalemma shows an average of 5.3 PCs over an average perimeter length of 40.4 µm for the same cell profiles (from 30 profiles), indicating an average frequency of 0.13 PC/ µm of perimeter or a calculated average inter PC distance along the perimeter of 7.6 µm. Note that the average measurements take into account domains with a higher PC frequency as well as areas that have far fewer PCs. The overall shape of an entire cell in Figure 3 clearly shows that PC positioning varies along the cells. Additionally, due to surface membrane convolutions, the distances along the plasmalemma are larger than the spacings along straight lines. It is not clear why a considerably lower frequency was estimated by Masson-Pevet et al, 17 who quoted sub-micrometer distances between RyR clusters. There are no T tubules; therefore, no dyads and also corbular SR is absent.
The amount of free SR (fSR), often seen associated with PCs, is quite limited in the cells that we have studied. fSR outlines are only seen in some cell images, and, where visible, SR tubules are mostly limited to the cell periphery ( Figure 8). The measured distance between the plasmalemma and the furthest SR element varies between 0.2 and 0.9 µm and on the average free SR profiles lie in a band which is within 0.4 ± 0.2 µm from the plasmalemma (from 30 measurements).
To obtain a value for the volumes of junctional SR (in PCs) and of free SR in the cells, we measured the surface areas of the sectioned outline of the two elements and compared it to the surface areas of the sectioned outline of the cells. The ratio between areas of PCs and fSR area and the cell area is the same as the volume ratios of the two organelles. The average F I G U R E 5 A "pale" cell profile shows large apparently empty areas and few intermediate filaments. It has the same content of peripheral couplings (between arrows) and mitochondria (M) as dark cells, very little internal free SR (SR, small arrows) and varied relationship with other cells along its border. In this image, the entire lower region of the cell closely faces a neighboring one. Infrequent views of cells that are included in their entirety within in the section plane (eg, Figure 3) show that the "light" appearance with many apparently empty spaces is maintained over the whole visible region of the cell. We conclude that "light" and "dark" cell profiles do not belong to the same cell area of sectioned PCs, calculated from the measured average length and width of 29 PCs is 0.0043 ± 0.0012 µm 2 . The average number of PCs/cell was 5.3 ± 1.2 and the average area of sectioned cell profile was 73.11 ± 17.82 µm 2 . From this, we calculate the percentage of sectioned cell area occupied by jSR area to be 0.03 (see Table 1, column 2). The average free SR area in the same cell profiles was 0.03 ± 0.04 µm 2 , and using the above data for average area of cell profile, the calculated percentage of sectioned cell area occupied by fSR is 0.04 (Table 1, column3). Outlines of the total SR (jSR + fSR) occupy 0.07% of the cell outline (Table 1, column 4).
| Plasmalemma details: intercellular communication and caveolae
Cells within the sinus node are tightly packed and constrained in close proximity with each other (Figure 2), so they have multiple interactions (Figures 3-5 and Figure S1). Pale and dark cells are randomly mixed and their relationships to each other involve three configurations. Some part of the cell, for example, the upper surface and part of the left side in Figure 4, is separated from the neighboring cells by a double layer of basal lamina; in other regions, for example, the lower part of Figure 4, at left the basal laminae of the two adjacent cells are fused into one; the rest of the cell surface is involved in a prolonged region of close contacts with one of its neighbors. Several densities on the cell surface are hemi-adhering junctions that allow anchorage to the extracellular network, via the basal lamina. At some sites, a direct mechanical We surveyed extensive areas of cell contacts within the sinus node and, with very few exceptions, we found no evidence for small but identifiable gap junctions. In the search for possible cell contacts, we encountered only a single small recognizable, classical gap junction with closely apposed membranes ( Figure 9F), similar to a larger junction between cells of either intermediate type of invading atrial cells ( Figure 9E). However, a further close look at images from contact regions in very thin sections of cells from the inner core that had been treated to enhance contrast revealed small punctate junctions ( Figure 9A-D) of the type described by Masson-Pevet et al and by Irisawa. 17,19 Unfortunately, a realistic estimate of their frequency is not possible due to the difficulty in visualizing them.
The plasmalemma of primary pacemaker cells is richly endowed in caveolae ( Figure 10A,B). However, the distribution of caveolae is uneven, since many cell outlines are practically devoid of them (eg, see Figures 4 and 5).
| DISCUSSION
One major question in the functioning of the primary pacemaking cells is whether or by how much a "calcium clock" may be involved in determining their periodic action-potential activity. 23 Previous model projections on the magnitude of the calcium clock events 9,10 were based on quantitative data for SR content of the ventricular myocardium. On that basis, a well-defined calcium wave could be suggested to be at the basis of the observed calcium signals in isolated cells. We find, however, that both free and junctional SR volumes of cells strictly identified from their location in the pacemaking center of the node are a small fraction of that found in ventricular myocardium, in the rabbit (Table 1). Additionally, the free SR is restricted to a peripheral band within the pacemaking cells, and in agreement with Musa et al, 25 there are neither T tubules nor corbular SR. Keeping in mind the well-characterized identity of the cells described here, it will be of primary importance to determine how these data, particularly the scarcity of calcium pumping SR, affect calculations of calcium wave activity. 26 This work originated from the specific requirement for quantitative data (mostly extent and distribution of SR components) necessary for answering the above questions. Therefore, our methods were limited to electron microscopy.
The primary pacemaking cells in the core of the rabbit SAN connect to each other at their ends via adhering junctions, of the type present at intercalated discs of the working myocardium and face each other at their lateral borders across extensive narrow gaps that occupy ~30% on the average of their total surface. In the past, close examinations of the cell surfaces by electron microscopy and following the use of antibodies failed to reveal either the structural signature of gap junctions or aggregates of CX43 and CX45 (two cardiac specific connexins), see introduction. Our close examination of the core peacemaker cells confirms that classical aggregates of connexons are extremely rare in the inner core of the rabbit SAN, but that "mini" T A B L E 1 Morphometric parameters of rabbit SAN cells compared with other cardiac myocytes 22,24,35,36 F I G U R E 1 0 Images from two different cells. Caveolae, each appearing as a small-membrane-limited balloon are found in extensive clusters, as illustrated here, that are unevenly distributed over some parts of the cell surfaces. We found no clue to the reason for this uneven distribution. Caveolae are not usually associated with active endocytic processes, but a coated endocytic vesicle is rarely associated with a multiple caveolar invagination (arrow in B). Small arrows indicate peripheral couplings junctions of the type described by Irisawa 19 are present. A few connexons located at such small contact sites would probably be sufficient for electrotonic transmission coordinating the pacemaking events and would protect the cells from unwanted backfiring. 27 An alternative hypothesis that has gained ground in recent years is the concept that electric fields and/or extracellular accumulation of ions generated by action in one cell may modulate current flowing through channels in a neighboring cell, constituting communication by ephaptic transmission. 28 However, such transmission requires specific anatomical basis, such as the creation of restricted spaces 29 and there is no evidence that such spaces are present at the extensive lateral appositions of pacemaking cells.
| LIMITATIONS
The limitations of this study include small sample size and the fact that only male rabbits were included for analysis. Gender-based morphological differences in rabbit sinus node have not been described in the literature and to keep the sample homogenous only male rabbits were included for the study. | 5,184.2 | 2020-01-07T00:00:00.000 | [
"Biology",
"Medicine"
] |
Improving energy sustainability for public buildings in Italian mountain communities
The objective of this work is to analyze and then optimize thermal energy consumptions of public buildings located within the mountain community of Lanzo, Ceronda and Casternone Valleys. Some measures have been proposed to reduce energy consumption and consequently the economic cost for energy production, as well as the harmful GHG emissions in the atmosphere. Initially, a study of the mountain territory has been carried out, because of its vast extension and climatic differences. Defined the communities and the buildings under investigation, energy dependant data were collected for the analysis of energy consumption monitoring: consumption data of three heating seasons, geometric buildings characteristics, type of opaque and transparent envelope, heating systems information with boiler performance and climatic data. Afterward, five buildings with critical energy performances were selected; for each of these buildings, different retrofit interventions have been hypothesized to reduce the energy consumption, with thermal insulation of vertical or horizontal structures, new windows or boiler substitution. The cost-optimal technique was used to choose the interventions that offered higher energy performance at lower costs; then a retrofit scenario has been planned with a specific financial investment. Finally, results showed possible future developments and scenarios related to buildings energy efficiency with regard to the topic of biomass exploitation and its local availability in this area. In this context, the biomass energy resource could to create a virtuous environmental, economic and social process, favouring also local development.
Introduction
During the last half of the century, worldwide matters as climatic change, global warming and limited resource depletion became more current and urgent. These issues are strictly related to the emission of greenhouse gases (GHG) coming from anthropic activities. In order to reduce the negative effect of climate change, global warming and limited resource depletion over economic, social and natural systems, a rapid decrease in GHG emissions is needed, together with policies and strategies for a more sustainable and resilient society. In particular, energy resilience implies a functioning and stable energy system, providing continuity and minimizing service interruptions; then, energy security and sustainability are among the most important aspects in urban energy resilience [1].
The energy sector is still responsible for about 60% of global GHG emissions, and therefore it is indisputable that the sustainability of the environment is indissolubly influenced by the energy sector [2]. Therefore, energy sector should be focused on the opportunities and challenges to ensure accessible, reliable, affordable and clean energy sources, also stimulating the economic growth, social welfare and job creation.
In Italy, to improve energy security and sustainability, prior actions are energy efficiency and the use of renewable energy technologies, reducing also energy imports from abroad; these actions are encouraged also by public subsides. The climate and energy targets of EU are indeed attainable through an energy transition of human activities from non-renewable to renewable energy sources associated with more energy efficiency.
The civil sector plays a central role accounting for 37.1% of final energy consumption in the EU with a steady growth of þ33.8% over the 1994e2014 period [3]. It is therefore urgent to intervene on private and public buildings energy consumptions in order to improve their energy performance, reducing their needs. Also, the EU 2030 Strategy encourages its members to take actions on energy efficiency especially in the existing building heritage and in Italy, policies of financial incentives have been provided to encourage investments in civil sector with retrofit interventions and producing energy with renewable sources.
The aim of this work is to define a methodology through a particular case study: improving energy sustainability of public buildings in a mountain community. After the description of the case study, in the norther part of Italy, energy consumptions and the available renewable energy sources have been mapped with the use of a Geographic Information System (GIS). In particular, the proposed GIS-based approach provides a localized optimization of energy demand and supply, exploiting the available wooden biomass resources near the area for a sustainable, secure and replicable energy planning.
Background
For this research work, the scientific literature has been investigated, in particular articles related to energy consumption monitoring and energy retrofit of buildings. Semprini et al. [4] used the energy signature to analyse energy consumptions, proposing three not invasive and low cost scenarios. Ma et al. [5] proposed a methodology for a wide building heritage, dividing the buildings by type of user, thermal vector, system efficiencies, period of construction, geometry and type of envelope. Yaquin et al. [6] focused their work on the reduction of energy consumption with energy saving measures, highlighting the importance of the real energy needs of users. Mutani et al. [7,8] implemented energy consumption data and building characteristics with a GIS tool for 50 municipalities near Turin, comparing the results of bottom-up and top-down models from building scale to territorial scale. Rospi et al. [9] compared the measured and estimated energy consumptions of a building with energy requirements. Finally, Aelenei et al. [10] and Mutani et al. [11] evaluated the nZEB for European countries considering the application of the European Directives, for a reduction of energy consumption in the building sector considering any political or economic obstacles that could reduce the strategic impact.
In this work, two GIS-based methodologies are associated: the first to identify the critical buildings that consume more energy for space heating, where priority of intervention should be applied [12]; the second one to evaluate, with a costbenefit analysis, a priority order of the interventions to be implemented from an economic and environmental point of view [13]. In addition, these methodologies have been adapted to a very various building heritage on a very vast territorial area with different climatic characteristics; finally, the cost-optimal analysis was also modified to favour renewable energy sources. These methodologies have the advantage to be replicable in unions of municipalities, as the analysed mountain community, and in energy communities for which an approval process of a law is under way in the Piedmont Region.
Study area of Lanzo, Ceronda and Casternone Valleys Union
The geographical area of this case study is the mountainous territory at North-West of Turin and in the North-West part of Italy and it is composed by 21 municipalities administered by a public entity named Mountain Union of Lanzo, Ceronda and Casternone Valleys (in Italian: Unione Montana Valli di Lanzo, Ceronda e Casternone U.M.V.L.C.C.). This territory is crossed by Stura di Lanzo river and it can be subdivided in 4 valleys: two main ones (Val Grande and Val d'Ala) and two minor ones (Ceronda and Casternone pre-Alpine valleys).
This mountain community is located in an area of approximately 478 km 2 at 50 km from the city of Turin. Each municipality has an average extension of 22 km 2 with maximum values of 62 and 46 km 2 , respectively for Balme and Ala di Stura, and minimum values of 5 and 6 km 2 for Pessinetto and Vallo Torinese. From a demographic point of view, the average population is of 1742 inhabitants per municipality: the most populated is Lanzo Torinese with 5133 inhabitants, while the less populated is Balme with only 112 inhabitants. In this context, public buildings are sprawled on the territory with very different characteristics both by typology and size and then, it is not possible to find recurring buildings archetypes in terms of geometric shape, structural characteristics and materials used.
In order to obtain a representative sample, seven municipalities have been chosen with various climatic and topographic characteristics, different number of inhabitants and dimensions, as shown in Fig. 1: three little municipalities in the high valley (Balme, Ala di Stura and Ceres), three bigger municipalities in the medium valley (Lanzo Torinese, Balangero and Cafasse) and one in low valley (La Cassa). Topographic differences, especially altitude and valley orientation, influence climate characteristics and then different databases on daily air temperatures and Heating Degree Days at 20 C (HDD) have been collected by the Regional Environment Agency (ARPA Piemonte) weather stations (in Fig. 1). For each municipality a weather station has been identified, geographically close to the municipality and with similar altitude; climate data have been collected for three years: 2013, 2014 and 2015. The area considered in this study is characterized by a climatic heterogeneity, presenting average temperatures that vary of about 4.5 C in coldest month and 7.6 C in the hottest month between Balme (at 1410 m a.s.l.) and Venaria Reale (at 337 m a.s.l.), as shown in Table 1. This heterogeneity of climate is also evident in the HDD values recorded by the different weather stations, with a 2013 colder than the other years and with high differences between the weather stations as in 2014 with 4768 HDD registered by Balme and 2453 HDD registered by Venaria Reale in the same year.
Within the selected municipalities, 48 buildings were chosen to represent the public building heritage. The chosen buildings were characterized by different period of construction, type of user and energy system. About 60% of the buildings were built before 1976 (before the first Italian Law on building energy performance L.373/ 1976; 27% are schools, 24% are used for entertainment activities, and 23% are offices). About the heating systems, 67% are heated with natural gas, while 20% still uses gas oil. The selection of the public buildings for this analysis included also lacks in information on the type of use and data about energy consumptions; in the mountains, some buildings can be used with discontinuity and with various types of heating systems. This analysis excluded 12 public buildings for lack of data and then the analysed buildings were 36. Also, the different energy sources used in the mountain community to produce thermal energy for residential and tertiary sectors were analysed in Fig. 2: most of the supply energy comes from natural gas (42%), and biomass (39%); LPG (9%) and the renewable sources, such as the solar one; biofuels are less used. In Piedmont Region natural gas is the principal energy source but in the mountain territories wooden biomass is very used especially for space heating with small boilers (i.e. < 200 kW) in single buildings but the potential expansion of this energy source is very high. In fact, biomass is closely linked to the territory that produces and uses it, thus with more efficient biomass boilers is possible to create a virtuous environmental, economic and social process, favouring also local development.
Then, in order to reduce the greenhouse gas (GHG) emissions for future low-carbon scenarios, the local availability of woody biomass was also taken into account. In Fig. 3, the different forest biomass resources are represented considering also their accessibility with roads and the territory slope. The analysis conducted in this work starts from previous researches on the evaluation of energy produced by the biomass energy source [14,15].
In Italy there are 66 unions of mountain municipalities and other 86 unions of hilly municipalities to which this proposed methodology could be applied.
Materials and methods
In Figs. 4 and 5, the adopted methodology is presented though the case study of the Mountain Union of Lanzo, Ceronda and Casternone Valleys. As mentioned above, after the analysis of the vast and not homogeneous territory with 21 municipalities and, the selection of 7 representative municipalities, data about 36 public buildings have been analysed. The information about the sample of public buildings were collected in an energetic cadastre with two types of information: the geometric and typological properties of buildings envelope (with real data or using Italian Standard UNI/TR 11552:2014 [16] knowing the period of construction of buildings), the characteristics of the space heating and hot water production systems, and the data about thermal energy consumptions for the years 2013, 2014 and 2015.
Geometric features of the buildings were used to evaluate surfaces, heated volumes and the surface to volume ratio (S/V) of each building. To obtain these data, existing documents related to renovations or enlargements of public buildings have been analyzed. In the absence of these data, thematic and technical maps with ISTAT census data were examined through the use of a GIS tool. This procedure consents also to compare the energy performance of the different types of buildings to create simplified bottom-up models characterizing the building heritage at territorial scale [7].
In this work, the buildings annual thermal consumptions were normalized through the gross heated volume (kWh/m 3 /y), in order to compare energy efficiency level of public buildings excluding the dimensional component. Referring to the consumptions related to the three years 2013, 2014 and 2015, a graphical comparison between annual consumption and the specific annual consumption has been used to identify the most "energivorous" buildings. This graphical representation, called "quadrant method", allowed the identification of buildings with a higher priority for intervention considering the average value of consumptions for a group of buildings. Space heating energy consumptions were also normalize with the heating degree days, conventionally at 20 C, to disregard climatic differences. In Italy, the HDD at 20 C are also used to define 6 climatic zones (from the warmest A to the coldest F) and, in the analysed mountain community, only La Cassa, Givoletto and Fiano are in the climatic zone E (2101-3000 HDD), while all the other municipalities belong to climatic zone F (with more than 3000 HDD).
To evaluate the critical buildings, in term of energy consumptions and then give a priority of intervention for energy efficiency measures, the annual energy consumption (kWh/y) was compared with the specific energy consumption (kWh/ m 3 /y) characterizing respectively the energy costs and the building energy efficiency level. The graphical method used to represent the annual consumption and the specific annual consumption highlights 5 critical buildings with higher energy consumptions.
For each of the 5 critical buildings, a thermal model based on current standard regulations [17,18,19] has been implemented with the aim of defining a energy efficiency strategy with its specific financial plan. The results of the energy models were compared with the real energy consumptions according to the methodology shown in Fig space heating, hot water production and artificial lighting systems characteristics.
Thermal models were validated by a comparison of the model results with the real energy consumption data, according to Standards on energy audit [18,19] considering acceptable a relative difference of AE10%. Afterwards, these models were also used to evaluate retrofit interventions in a cost/benefit analysis.
The low-carbon scenarios have been evaluated with the cost-optimal analysis (Italian Standard UNI EN 15459:2008 [20]), defining a comparative framework to identify the optimal cost-based retrofit measures [21]: best scenarios reach low energy consumptions (low EP, kWh/m 2 /y) with low costs (V/m 2 ) and these scenarios are localized around the minimum of the curve of cost-optimal graph.
For this evaluation, the following main aspects have been considered: Reference prices for retrofit interventions in the Piedmont Region 2016 [22], using also real data from public building retrofit interventions at 2015; Financial incentives in Italy for energy efficiency retrofit measures of public buildings (Italian "Conto Termico 2.0:2016"); Available annual quantity of wooden biomass, in order to enhance interventions using this local and renewable energy source.
In order to compare the different scenarios, the following parameters for each intervention were calculated: Cost of intervention with and without financial incentive (V); Energy performance achieved (kWh/m 3 /y); Annual energy and economic savings (V/y);
Payback time of investments (y).
For the cost-optimal analysis, in order to highlight and to exploit scenarios based on the use of renewable sources (particularly biomass), the non-renewable energy performance index EP gl,nren was used to better visualize the energy performance improvements from adoption of renewable energy sources. Then, three cost-optimal analyses have been implemented: the first, considering the total price of the retrofit interventions; the second using current Italian financial incentives and the third excluding the biomass costs for profit and transport, since it was considered a public resource.
The greater use of the local wooden biomass to produce energy in mountain communities in also one of the objective of the Environmental Report of the Regional Air Quality Plan [23]. In fact, in the Italian mountain areas, the use of polluting and inefficient fireplaces especially for space heating of residential buildings, represents a widely widespread reality and their substitution with new efficient biomass boilers could improve air quality with also a reduction of heat dispersions.
Not all the wooden biomass present in the municipal territory is accessible and available; these characteristics depend mainly by: the presence of streets and the slope of the territory influencing the accessibility of a wooden area. To evaluate the quantity of wooden biomass that can be harvest every year to produce energy, a GIS tool with the following databases have been used: At the end of the analysis it was found that in Ceres 900 hectares of wooded biomass can be exploit for the production of energy (Fig. 3), while in Lanzo Torinese only about half. In Table 2 the availability of wooden biomass to produce energy for the two municipalities is reported. This evaluation has been made by a comparison between the accessible wood areas and the database on biomass to produce energy in Regione Piemonte at 2013. Even if Lanzo Torinese is the principal municipality, since it is located in the hillside, has lower biomass availability if compared to Ceres in the mountain area. For the municipalities of Lanzo Torinese and Ceres, the results reported in Table 2 indicate also that the thermal energy produced by biomass could be sufficient to meet the heat energy-use of public buildings.
As it is possible to observe in Fig. 6 the total energy consumption (thermal and electric components) and thermal energy consumptions depend mainly by the number of inhabitants and the heated volume of buildings besides the climate. The database about the annual per capita thermal energy consumption at the municipal level, provided by the Metropolitan City of Turin for the years 2000e2013 [24], shows that the mountain areas (as Ala di Stura, Cantoira and Ceres) registered higher consumptions if compared with the municipalities located in the plain area. As an example, the annual per capita energy consumption in the municipality of Balme is about 60 MWh/cap/y, more than twice the average of the 21 municipalities belonging to the mountain community analysed (25 MWh/cap/y); as a reference, the City of Turin registered, in the plain area, an energy consumption of about 13 MWh/cap/y. These results are also due to the high presence of holiday houses in the mountain areas and then a lower number of inhabitants.
Results
In this paragraph the main results of the application of the quadrant graphical method and the cost-optimal analysis on public buildings are presented.
In Fig. 7, the quadrant method has been applied to the public buildings of the mountain union; each building is represented by a point, identified by the value of its annual average consumption (x-axis) and its annual average specific consumption (y-axis). A twofold analysis has been implemented: in the first case, the traditional graph with the average absolute (kWh/y) and specific (kWh/m 3 /y) energy consumptions of the buildings was represented; in the second case, the average absolute consumption (kWh/y) and the specific consumption normalized on 3197 HDD at 20 C of Lanzo Torinese (kWh N / m 3 /y) were represented to evaluate the energy efficiency level as independent by different climatic conditions.
From the quadrant graph illustrated in Fig. 7 it is possible to note the differences between the two analyses and the effects of the HDD normalization on the specific consumption data. With the normalization on HDD, minor values on the specific energy consumption are represented especially for cold climates. The horizontal and the vertical lines represent the average and median values of absolute and specific thermal consumptions of the public buildings sample; in this work, the median value, instead of the average value, was chosen as the reference limit to prevent the presence of anomalous consumption data.
Buildings in the first quadrant have an annual and specific annual consumption higher than the median value, therefore they are buildings with highest energy costs (high kWh/y) and lowest energy performance (high kWh/m 3 /y). The buildings on the fourth quadrant, however, show lowest consumptions both in absolute and in specific terms, than improvements in energy efficiency could not be a priority.
With this representation, 5 critical buildings have been identified and on these buildings an energy model have been performed to evaluate the effect of energy efficiency measures (Fig. 8). The selected buildings for the cost/benefit analysis of retrofit interventions were: Lanzo Torinese kindergarten, Lanzo Torinese nursery, Balangero primary school, Cafasse nursery and Ceres City Hall. The choice of the buildings has also been agreed with the mountain community taking into account also ordinary maintenance requirements as for Ceres City Hall. For this building, the energy consumption seems low but this value is due to the partial period of operation of the City Hall in a very small municipality like Ceres.
The results of thermal models have been compared to the real energy consumption data for all the buildings and they were considered validated as the relative difference was lower than AE5% (in Table 3). All the buildings were powered by natural gas while, in Ceres, the City Hall has a gas oil boiler. A total of 58 retrofit scenarios have been implemented, some with individual interventions and others aggregating individual ones, as it is illustrated in Table 4.
Retrofit interventions are different for each buildings because not all interventions are technically and economically feasible; as an example, for the school in Balangero, no biomass and PV systems have been provided since there were new boilers with relatively high efficiencies.
In order to define the global cost of retrofit interventions, some parameters have been calculated: the life period of retrofit measures, the discount rate and the investment costs: for the design, purchase, installation, annual maintenance and energy costs.
In this study, the global costs were composed by the sum of all costs projected over a 30 years of life period using a 0.3% of discount rate; the cost was divided by the net heated area of each building. The cost of annual energy for space heating was calculated as a product between the annual thermal energy-use of the building and the cost of the fuel used (in Table 5), while the cost of the interventions was calculated with and without the current financial incentives for building energy retrofit.
In this work, the specific energy performance achieved was defined with the global primary energy performance index EP (kWh/m 2 /y) given by renewable and not renewable energy sources: EP gl ¼ EP gl,ren þ EP gl,nren .
To obtain the primary energy, energy consumptions were multiplied by the coefficients of conversion into primary energy, specified in the Italian Decree D.M.
In Figs. 9, 10, and 11, the three different cost-optimal analyses are represented.
These graphical representations are useful for highlighting interventions characterized by a good compromise between global costs and final primary energy performances. These better interventions are located in the minimum of the curves with low global costs and low primary energy consumption EP. Each intervention is identified by an alphanumeric code were the letters represent the building under analysis and the numbers the type of retrofit measure (as in Table 4).
In the cost-optimal analysis, represented in Figs. 9 and 10, it is possible to notice how the scenarios based on biomass are characterized by optimal low values of EP, but higher costs (with a cost of biomass of 0.27 V/kWh). In fact, these scenarios take into account the higher cost associated to a biomass boiler on the market, with respect to a traditional one. The optimum scenarios, highlighted in these two analyses, are mostly based on the insulation of opaque envelope accomplished by the substitution of obsolete boiler (i.e. CR6, LA6, B9 and CF5).
The third cost-optimal analysis, represented in Fig. 11, excluded the costs associated to profit and transport of biomass, since this resource is directly managed by the mountain community. In this case, the global cost of biomass was reduced by the 30% and the optimal cost curve drastically changes, highlighting the scenarios based on biomass fuel.
For each of the scenarios proposed by the cost-optimal analysis, a financial plan was defined taking into account the specific interventions, in order to reinvest the economic savings obtained from previous energy saving interventions for each building.
In Fig. 12, the financial plan referred to the better retrofit scenarios is represented considering financial incentives and local availability of biomass. In this case, a starting budget of 250,000 V was considered to allow the adoption of the more convenient scenario (derived by cost-optimal analysis). The focus is to invest the economic savings implemented in the previous year in further retrofit interventions for all the select buildings. After 9 years, it is possible to invest 50,000 V in other buildings retrofit; it should be considered also that this economic saving will be forever.
Discussion
This work provides an accurate methodology that can be replicated in other territorial contexts to evaluate the effects of different policies for a more sustainable development. The methodology adopted, in fact, has allowed to effectively utilize existing methods, adapting them to a specific case study as the Lanzo, Ceronda and Casternone Valleys Union.
The presented approach has taken into account the complexity and specificity of this mountain territory, considering the more effective energy efficiency interventions, with an analysis on energy consumptions on a sample of representative critical buildings. The results can also be extendable to the other public buildings in a similar area.
The applied methodologies, in particular the quadrants method and the cost-optimal analysis, have been modified in order to obtain more effective results on the analysed mountain community.
In the quadrant method the annual energy consumption was compared with the specific energy consumption normalized on the HDD. With this approach, even for a vast territory, the level of energy efficiency, measured with the specific energy consumption, does not vary with the climate and depends only by the buildings characteristics. This normalized value of specific consumption can help identifying an order of priority of retrofit interventions to improve energy efficiency, while the annual consumption is not normalized because it represents the annual energy costs. From the graphs represented in Figs. 9, 10, and 11, it can be noticed that, over the life period of thirty years, applying a market price to biomass, the scenarios using biomass boilers are uncompetitive and unfavourable; while, taking into account the local availability of biomass, these interventions are also economically convenient. This analysis tries to highlight the retrofit scenarios that include the installation of a biomass boiler in order to exploit a local and renewable resource as an alternative to traditional fuels. For the economic feasibility, this was possible by considering a lower price of biomass as its local availability and the municipal property of the forest areas; thus, the biomass price was reduced by 30% to deduct the costs of biomass profit and transport. Indeed, Fig. 10 shows that currently, the existing economic incentives are not adequate to support the purchase of single biomass boilers but the acquisition of numerous systems for all the community could be investigated, as the installation of cogeneration biomass plants to supply more buildings.
The cost-optimal analysis evidenced that retrofit interventions only on thermal insulation of buildings envelope are not economically convenient, while good performances can be observed if they are combined with the substitution of the boiler. Furthermore, retrofit interventions in the mountain areas are more convenient than in the hilly ones because the investment costs are constant, while the economic savings after retrofits are higher.
Conclusions
This methodology is answering to the European Cohesion Policy 2014e2020, providing effective actions based and the real building heritage to support the transition to a low-carbon economy and the Urban Agenda Habitat III [25]. Energy sustainability and security is one of the milestones of the new Urban agenda and, with regard to Italy, the objectives are clear: reducing energy and primary energy consumptions in buildings and implementing tools to support the transition to a lowcarbon economy by promoting a gradual renovation of buildings and the adoption of renewable energy technologies.
The diffusion of efficient biomass systems using the availability of this local resource could lead to a twofold advantage: on the one hand, it consent to contain economic costs for public administrations, while on the other hand, a virtuous circle of good forest management would be created with a probable economic and environmental benefits for the local communities. Especially for high-income countries like Italy, it is important not only the development and application of low-carbon production technologies but also the promotion of the progress in low-carbon production technologies to reduce the probable scale effect on emissions due to economic growth [26].
The novelty of the methodology is on the application of a GIS-based approach to reach the energy security in a vast and various territory guarantying energy supply where there is energy demand with the available renewable energy sources. This methodology was also supported by effective tools as the quadrant method and the cost-optimal graph to find the more efficient actions. In this work, these tools have been modified to operate at territory scale with different climate characteristics and various building heritages as a mountain community. In particular the energy efficiency level of buildings was normalized on HDD and the energy performance was calculated with the not renewable component of EP, considering the costs with financial incentives and further discounts for the local resources.
Finally, the Piedmont Region is the first region in Italy studying a law to facilitate the creation of energy communities that can self-generate the energy they need by exploiting technologies that produce electricity and heat from renewable sources available locally. This methodology will be tested on the first case studies in future researches.
Declarations
Author contribution statement Guglielmina Mutani, Mauro Cornaglia, Massimo Berto: Conceived and designed the analysis; Analyzed and interpreted the data; Contributed analysis tools or data; Wrote the paper.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. | 7,033.6 | 2018-05-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
An Enhanced Technique of COVID-19 Detection and Classification Using Deep Convolutional Neural Network from Chest X-Ray and CT Images
. Background . Coronavirus disease (COVID-19) is an infectious illness that spreads widely over a short period of time and fi nally causes a pandemic. Unfortunately, the lack of radiologists, improper COVID-19 diagnosing procedures, and insu ffi cient medical supplies have all played roles in these devastating losses of life. Deep learning (DL) could be used to detect and classify COVID-19 for potential image-based diagnosis. Materials and Methods . This paper proposes an improved deep convolutional neural network (IDConv-Net) to detect and classify COVID-19 using X-ray and computed tomography (CT) images. Before the training phase, preprocessing methods such as fi ltering, data normalization, classi fi cation variable encoding, and data augmentation were used in conjunction with the proposed IDConv-Net to increase the e ff ectiveness of the detection and classi fi cation processes. To extract essential features, deep CNN is then employed. As a result, the suggested model can identify patterns and relationships crucial to the image classi fi cation task, resulting in more precise and useful diagnoses. Python and Keras (with TensorFlow as a backend) were used to carry out the experiment. Results . The proposed IDConv-Net was tested using chest X-rays and CT images collected from hospitals in Sao Paulo, Brazil, and online databases. After evaluating the model, the proposed IDConv-Net achieved an accuracy of 99.53% and 98.41% in training and testing for CT images and 97.49% and 96.99% in training and testing for X-ray images, respectively. Further, the area under the curve (AUC) value is 0.954 and 0.996 for X-ray and CT images, respectively, indicating the excellent performance of the proposed model. Conclusion . The fi ndings of our proposed IDConv-Net model con fi rm that the model outperformed compared to existing COVID-19 detection and classi fi cation models. The IDConv-Net outperforms current state-of-the-art models by 2.25% for X-rays and 2.81% for CT images. Additionally, the IDConv-Net training approach is signi fi cantly quicker than the current transfer learning models.
Introduction
The worldwide outbreak of the coronavirus disease (COVID- 19) is still wreaking havoc on people's lives and health [1].COVID-19 is a highly infectious disease with limited and less effective treatment options [2].The transmission of COVID-19 occurs through respiratory droplets released when an infected individual talks, coughs, or sneezes due to infection with the SARS-CoV-2 virus.The virus can also be spread by contacting the mouth, nose, or eyes after touching a surface or object that has been exposed to the virus [3].Numerous COVID-19 patients frequently overburden the healthcare systems in many countries.About 347.49million/5.60 million patients have been diagnosed/ died with COVID-19 infection since December 2019.The incidence of illnesses and deaths due to COVID-19 is increasing each day.According to a report [4] on 12 September 2022, a total of instances of COVID-19 have been reported at 613,958,298 in the world.Among them, 6,516,913 have died, and 592,777,665 have recovered.Furthermore, over 1,075,668 out of 97,095,092 patients have died in America; 684,914 out of 34,574,765 patients have died in Brazil; 528,165 out of 44,500,580 patients have died in India; 29,334 out of 2,014,887 patients have died in Bangladesh, etc. [4].
Generally, late detection of COVID-19 can assault the lungs and harm the tissues of the disease-ridden patient [5].The lungs and human respiratory system are still particularly susceptible organs where the COVID-19 virus can easily proliferate.Damage results and the air sac is filled with liquid and expelled as an outcome [6,7].As a consequence, the patient has trouble breathing with oxygen.So, we want to rapidly and precisely determine the degree of lung injury to survive the patients and reduce fatality [8].Moreover, early COVID-19 detection can save the patient's life and stop spreading.A significant level of protection should be offered by a parenteral COVID-19 vaccine approach capable of inducing a potent, long-lasting immune response involving neutralizing antibodies and T cells [9,10].Different vaccine platforms and strategies have advantages and disadvantages from an immunological perspective.As a result, the COVID-19 vaccine has significantly changed the pandemic's trajectory and reduced the rate of mortality [9,11].
One of the diagnostic methods used for detecting COVID-19 is real-time reverse transcription polymerase chain reaction (RT-PCR), which is a recommended technique by the WHO for identifying the presence of the virus causing COVID-19 [12].However, the RT-PCR method takes a few hours to two days to produce test results.Additionally, this technique is difficult, expensive, manual, and unavailable everywhere.The expense and lack of RT-PCR affect many developing and underdeveloped nations [13].Further, RT-PCR testing needs a laboratory kit; many nations find it difficult to produce or gather during the outbreak [14].Moreover, the COVID-19 RT-PCR test's reduced sensitivity was noted in several investigations.Many researchers have reported this test's sensitivity to 71% to 98%, which reduces the detection accuracy of COVID-19 cases [15].
Another approach is medical imaging which plays a critical role in COVID-19 detection and management.Specifically, chest X-rays and computed tomography (CT) scans have been used to detect and monitor COVID-19 patients.Medical imaging, such as chest X-rays and CT scans, can be helpful in detecting COVID-19 for several reasons, including the visualization of lung abnormalities, the confirmation of the diagnosis, the severity assessment, and the monitoring of disease progression [16].Specifically, medical imaging is used by radiologists to verify the COVID-19 diagnosis manually.However, as radiologists must manually diagnose a significant number of COVID-19 patients, it is a laborious, error-prone, and exhausting process that necessitates competent radiologists [17].
Over the years, artificial intelligence (AI) has shown potential in the field of medical imaging.Deep learning (DL) is an effective tool for analyzing medical imaging data because it can automatically identify patterns and features from large datasets without requiring manual feature engineering [18].Furthermore, DL has the potential to improve the speed, accuracy, and accessibility of COVID-19 diagnosis, which can help to control the spread of the virus better and improve patient outcomes.There has been a significant amount of research on the use of medical imaging for COVID-19 detection.Several researchers have applied machine learning (ML) and DL methods, such as convolutional neural networks (CNNs), transfer learning (TL), autoencoders, and ensemble, to medical imaging for COVID-19 detection.Further, these ML and DL methods have shown promise in COVID-19 detection using medical imaging [19].Moreover, deep CNNs have demonstrated potential in COVID-19 detection and classification using medical imaging due to their ability to automatically learn hierarchical features from the input images.Furthermore, CNNs are designed to minimize noise and variation in the input images.Additionally, it allows the network to leverage knowledge learned from a large and diverse dataset, which can improve performance on the target task.However, in some research on COVID-19, lung cancer, monkeypox, brain stroke, etc., detection and classification were performed using CNN with insufficient accuracy [6,20,21].Additionally, in some cases, CNN and transfer learning require a longer training time in the detection and classification.
After considering these issues, we require a compatible deep learning framework that will be able to help consultants and healthcare staff quickly and correctly identify COVID-19 disease from X-ray and CT images [22].This research is aimed at demonstrating an enhanced deep convolutional neural network-based solution for automatic COVID-19 detection from chest X-rays and CT images.The COVID-19 radiography dataset that is publicly available is limited, and a large dataset is preferred to train deep CNN models.Even after using overfitting mitigation techniques, training DL models on a small dataset can result in overfitting.One of the most critical issues when designing the architecture is limiting the number of trainable parameters to avoid overfitting.An early call-back function can be employed to avoid overfitting.Further, data augmentation can also be used to address problems with small datasets.While developing deep learning models, overcoming the vanishing gradient problem is crucial.Additionally, the problem of accuracy degradation during deeper network training needs to be addressed.This paper's primary contributions include the following: The subsequent sections of this paper are organized as follows: Section 2 provides a literature review of the study, while Section 3 describes the method used and the required materials in detail.Our proposed IDConv-Net model is elaborated on in Section 4, while Section 5 presents the potential training parameters employed in our model.Section 6 of this paper presents the study's findings, while Section 7 provides a discussion of these results.The paper concludes with a summary in Section 8.
Related Works
To stop the COVID-19 pandemic from spreading, it is essential to identify the virus quickly and precisely.Chest X-ray and CT images are available in almost all hospitals worldwide and are the most widely used and economically advantageous medical imaging technology for evaluating lung problems [22,23].Chest X-ray and CT scans can reliably identify lung injury in COVID-19 patients earlier [9,24].It identifies the virus's stage while indicating its presence [16].However, the lack of distinctive characteristics and the resemblances between lung lesions and other viral diseases make COVID-19 susceptible to misdiagnosis [25].Considering these, potential AI appliances such as ML and DL can overcome the COVID-19 disease detection errors caused by people from X-ray and CT imaging techniques [26][27][28].AI has proven its efficiency and performance in detecting diseases like cancer, tumor, pneumonia, and COVID-19.DL-based approach, such as CNN, plays a key role in processing medical images, particularly in features extracting and classifying [29].In [30], Bassi and Attux developed a dense CNN to classify COVID-19, pneumonia, and normal from chest X-rays.They proposed a novel approach of output neurons that modifies the twicetransfer learning techniques.They achieved a good performance from their model regarding the classification of COVID-19.However, larger datasets and clinical investigations were required to guarantee accurate generalization.Agrawal and Choudhary [20] suggested a deep CNN for detecting COVID-19, utilizing two datasets of chest X-rays.For image segmentation, they used an encoder-decoder architecture.The CNN structure encoder extracts features and transfers them to the decoder as part of the segmentation procedure for this experiment.The findings demonstrated that, for two datasets used in COVID-19 detection, the suggested model achieved high accuracy of 94.4% and 95.2%, respectively.
The authors [31] introduced a novel COVID-CXNet utilizing a familiar transfer learning-based CheXNet model.They utilized relevant and meaningful features in the detection of novel coronavirus.A CNN-long short-time memory (CNN-LSTM) model was designed by Purohit et al. [32] to extract features from raw data hierarchically.They employed several COVID-19 chest X-ray datasets to test and investigate the model accomplishment of COVID-19 detection.For the larger dataset, however, the model needs to be trained for longer, which needs to be cut down.
Ayalew et al. [17] presented the DCCNet model for the diagnosis of COVID-19 patients.The authors employed two methods, namely, histograms of oriented gradients (HOG) and CNN, to extract features from input images, and used a support vector machine (SVM) classifier to classify COVID-19.The SVM classifier yielded 99.97% accuracy during training and 99.67% accuracy during testing when combined with CNN and HOG-based features.
Indumathi et al. [33] mentioned an ML algorithm to classify and predict COVID-19-affected zone.From March to July 2020, they used the Virudhunagar district's COVID-19 dataset.They achieved a 98.06% accuracy rate, which was higher than the 95.22% accuracy rate of the C5.0 algorithm.
Salau [34] used an SVM technique to classify and identify COVID-19 using chest CT data.After extracting features from CT scans using a discrete wavelet transform (DWT) technique, the study built a classification model.The findings demonstrated that the suggested model has a high accuracy of 98.2% in COVID-19 detection.
Chaunzwa et al. [35] used a DL framework to detect lung cancer from CT images.In [36], the identification of COVID-19 on CT scans is accomplished using ML methods.However, their investigation only used images that cost 150 for CT scans.Khan et al. [37] highlight promising DL research for understanding radiography pictures and progressing the investigation of constructing specific DL-based assessment methods for unique COVID-19 variations, delta, omicron, and challenges ahead.In [6,21], to identify COVID-19, the authors implemented an SVM technique.Using 208 test data, they achieved a lower recognition rate.In [38], the authors applied ML techniques to detect COVID-19 automatically using X-ray images to enhance accuracy.Rahimzadeh and Attar [39] considered Xception and ResNet50V2 approaches for COVID-19 identification from X-ray images.In [40], the researchers employed pretrained transfer learning models, such as ResNetV2, Incep-tionV3, and ResNet50, for detecting lung disease and COVID-19 using X-ray images.In [40], COVID-19 was identified using just X-ray data by CNN models like Inception-ResNetV2, ResNet50, and InceptionV3, where the models had 98%, 97%, and 87% classification accuracy, respectively.These experiments used a few X-ray data.They may have tested their models' performance on other modalities, such as CT scans.
However, some problems with past research include insufficient detection accuracy for different modalities of images, small datasets, overfitting issues, and using CNN 3 BioMed Research International without first preprocessing images.Further, some works require prolonged training time, which is another drawback.This study used a number of image preprocessing approaches to address these drawbacks.Furthermore, the proposed enhanced model improves detection and classification performance and reduces the training approach time.
Materials and Methods
This section elaborates on the methodology provided for identifying COVID-19.Figure 1 depicts the process of the proposed methodology.
Image Data Acquisition.
A dataset is the backbone of research.We used two types of images, 2D CT and X-ray images.For CT images, the settings for the 64-slice scanner were calibrated with the following parameters: collimation of either 128 × 0 6 mm or 64 × 0 6 mm, tube voltage of 120 kilovolts (kV), section thickness of five millimeters (mm), slice interval of five millimeters (mm), the pitch of 1.375 mm, and field of view of three hundred fifty by three hundred fifty (mm).In addition, the patient's position was supine; both arms were elevated, and the patient was instructed to hold their breath.The datasets were reconstructed with a wall thickness and increment ranging between 1.5 and 2 mm [41].
We collected 1,252 COVID-19 positive and 1,230 normal images from the SARS-CoV-2 CT-scan dataset which images were gathered from real patients in hospitals from Sao Paulo, Brazil.
Our COVID-19 CT scan image dataset consisted of 7,593 COVID-19 images obtained from 466 patients, as well as 6,893 normal images obtained from 604 patients.Then, merge CT images of both datasets to create a new 2D-CT dataset containing 8,845 COVID-19 and 6,893 normal images, totaling 17,168 images.Similarly, we collected 576 COVID-19 positive and 1,583 normal X-ray images from COVID-19 X-ray dataset and 4273 COVID-19 positive images and 10192 normal X-ray images from the COVID-19 Radiography Database and then merged them to create an enlarged size of the new X-ray dataset.We used a new merged X-ray and 2D-CT image dataset to perform our model better.Overall, the merged X-ray dataset contains 4,192 positive COVID-19 and 11,775 normal images, and the merged 2D-CT dataset consists of 8,845 COVID-19 positive and 8,123 normal CT images.We represent some sample CT and X-ray images in Figures 2 and 3.In Table 1, we highlighted the number of images extracted from the sources dataset.The datasets were partitioned into training, validation, and testing sets, as shown in Table 2.For each dataset, 80% of the images were allocated for training, 10% for validation, and 10% for testing purposes.
Data Preprocessing.
Preprocessing is crucial in transforming raw data into a format appropriate for the ML or DL approach.It primarily enhances the source images by controlling normalization, multicollinearity, scaling, shuffle, and data division [42].Furthermore, preprocessing methods enhance the image quality, making an experiment more suc-cessful.Moreover, it is very difficult to handle high dimensions of input data.Sometimes, it may cause overfitting and poor results.For this reason, we downsized the images to 224 × 224.We applied the dimensionality reduction technique for less computational time and quick visualization.Before training the model, it is essential to convert string or nonnumeric features into numeric ones.So, we utilized data transformation for data compatibility, which means converting string or non-numeric features into numeric [43][44][45].We also use feature engineering, which entails selecting the features that would be helpful in training a model.The normalizing technique was utilized to compare various features on a comparable scale.As a result, we can use higher learning rates or models to converge more quickly for a given learning rate.It also helps to stabilize the gradient descent step.
Normalization of Data.
The significance of data normalization for developing exact prescient models has been analyzed for the different ML algorithms that have recreated a crucial position [46].The fundamental objective of data normalization is data quality assurance before its application to predictive analytics.Various data normalization techniques can be utilized, including min-max normalization, Z-score normalization, decimal scaling, and median standardization, among others [47,48].The prime aims of data normalization are given below: (i) This data group makes all entries and attributes appear identical (ii) It provides the dataset with relevant information that is more obvious and natural, reducing its size and simplifying its structure so that it is easier to identify, contrast, and retrieve (iii) It enhances and simplifies the numerical data without losing the critical characteristics with reduced complexity, leading to easy segmentation The dataset can be normalized by dividing an image's gray-scale value by 255.However, this study uses Z-score normalization as a normalized technique [49], which is stated as follows: where Z k indicates the normalized weights of Z-score, Z k is the weight of S th row and k th column, S represents the mean, and std S represents the standard deviation, which can be expressed as When manipulating data, the values are typically scaled into the [0-1] range, ensuring that the data is stored.5 BioMed Research International values.For this reason, we had to maintain some procedures to get numerical values for each extracted feature.Furthermore, we apply normalized and standardized operations to get better processing to train models and support various DL networks.In our experiment, we converted the two levels of COVID-19 and non-COVID-19 to 0 and 1 using the LabelEncoder function from the Python standard module [50].
3.5.Data Augmentation.Data augmentation is a powerful and useful technique for improving machine learning models' accuracy and predictive ability by increasing the number of images in a dataset through modified versions of existing training images.Moreover, it reduces the complexity of collecting more images to enlarge the dataset.Data augmentation utilizes techniques such as data wrapping and oversampling to increase the number of images in a dataset.Nevertheless, it may appear to be an overfitting problem in the results [51].To mitigate this problem, we have applied flipping, rotation shearing, mirroring, zooming, fill mode, and channel shifting using principal component analysis to augment the data [52].The augmentation parameters that we used to increase the number of images are given below: Flipping: the image is horizontally and vertically flipped.The flipping operation reconfigures the pixels while preserving the image's attributes.An image's vertical and horizontal position is randomly adjusted by 0.2 degrees.
Rotation: the image is flipped by a number of degrees between 0 and 360.In the model, every flipped image will be different.The rotation range is from -360 to 360 degrees.
Shear: to produce or correct perception angles, the image can be twisted along a particular axis using a shear range of approximately 0.4 degrees.
Zoom: the zoom range of an image in the data augmentation method can be zoomed in or out.This method enlarges the image by zooming in or out randomly and adding pixels around the image.The extent of zoom is around 0.5 degrees.
Fill mode: to fill the empty pixel's values, the default value "nearest" is applied, which replaces the nearest image pixels.
Channel shifting: it randomly shifts channel values to vary the hue by 10.
Our Proposed IDConv-Net Model
This section represents the proposed model and working outline.The proposed IDConv-Net model has five convolutional layers and four pooling layers (max pooling), batch normalization, rectified linear unit (ReLU), dense and dropout layers, and sigmoid.The architecture of the IDConv-Net model is shown in Figure 4, which consists of input images, feature extraction, and classification layers.Firstly, the feature extraction layer extracts the critical features from the input images; then, the last part of the model, such as the fully connected layer, performs classification.As a result, the model functions as a feature extractor before acting as a classifier.
Feature Extraction.
Feature extraction is an important step in ML and Dl applications, as it can improve the efficiency, accuracy, and interpretability of the subsequent learning algorithms [53].The feature extraction part of our model consists of five convolutional layers followed by four max-pooling layers through the ReLU activation layer (see Figure 5), while subsequent ReLU layers follow through the batch normalization layer and maximum pooling layer, and finally, the last convolution layer, followed by the flattening and dropout layer, as shown in Figure 4.
Apart from this, the input layer initially receives input images with the size of 224 × 224 × 3 CT or X-ray chest image, where 224 × 224 is the image's dimension, and 3 is the RGB channel.The convolutional layer is responsible for featuring maps, i.e., feature representation, of the input images [54].The input image x is convoluted with a set of trainable weights, sometimes referred to as multidimensional filters f k , and the result is coupled with biases b k .Assume there are K filters; this layer's kth output can be represented as given in the following equation: and M, N, C are the height, width, and channel of the input, respectively.Further, x c r p ′′,q ′ ′ represents the local region of p th row and q th column of input image, where P is the zero padding number and S is the pixels of stride.Further, the batch normalization (BN) layer improves network training and lessens sensitivity to network initialization between the convolutional and activation unit (ReLU) layers [55].In this paper, the ReLU activation function is applied, mathematically shown in Equation (6), and it only keeps the positive part of the activation.a p,q,n = f z p,q,n = MAX z p,q,n , 0 6 Furthermore, the polling layer takes maximum values with a pool size of (2, 2).Consequently, the max-pooling layer pooled the feature maps with the dimension 111 × 111 × 64, followed by the second convolutional layer.Similarly, the second convolutional layer convolved feature maps, followed by a second pooling layer with a similar filter size of 2 × 2 and a stride of 2. Consequently, the image's dimension will be reduced to 54 × 54 × 256.Table 3 BioMed Research International step in classification, which converts the data's form into a one-dimensional data vector.In the classification function, a dropout layer is followed by a thick layer with 1024 neurons.A dense layer produces the final output with two neurons and a sigmoid activation function, which identifies the image as belonging to one of the chest diseases: COVID-19 or the normal.
The classification layer in a CNN using the sigmoid activation function can be represented using the following equation: where x is the output of the previous layer, W is the weight matrix of the classification layer, b is the bias vector, and σ is the sigmoid activation function defined as The output of the final layer of the proposed model is passed through the sigmoid function to obtain a value between 0 and 1, which can be interpreted as the probability that the input image belongs to the positive class.The decision boundary can be set to 0.5, so if the output of the sigmoid function is greater than 0.5, the input image is classified as belonging to the positive class.If it is less than or equal to 0.5, the input image is classified as belonging to the negative class.Moreover, a dropout layer is utilized with a value of 0.3 for the last convolution layer to avoid overfitting between the training and testing performance.
Training and Performance
The performance of the training and testing set depends on the experiment setup and performance matrices such as precision, Recall, F-score accuracy, sensitivity, and specificity.Experiment setup and performance matrices were described in this section.
Experiment Setup.
Hyperparameter tuning is an important step in building machine learning models, as it involves selecting the optimal hyperparameters that result in the best performance of the model.To get excellent performance from the model, we repeatedly fine-tuned the model.We used three optimized hyperparameters during our study's training phase: batch size, epochs, and learning rate.Manually tuning that parameter is time-consuming; therefore, we have applied the grid search method to select the best value of the hyperparameter.Table 4 summarizes the initial and optimal parameters found during the experiment.From Table 4, we can infer that the best-optimized batch size is 32, epochs are 50, and the learning rate is 0.001 for both datasets.We performed a grid search method using ML frameworks and libraries, such as scikit-learn in Python to obtain these values.
Adam, also known as adaptive momentum, is used to enhance the performance of our suggested IDConv-Net model because it performs consistently while categorizing binary images [56].The experiment was conducted using an organization laptop with Windows 10, a Core i7 processor, and 16 GB of RAM.Furthermore, we run the model on a Jupiter laptop and the Google Colab GPU environment with 12 GB of RAM.
The proposed model was developed and fine-tuned using chest X-ray and CT image dataset to get insight into the COVID-19 identification issues.We split our dataset into three sections: training, validation, and testing to evaluate the performance of the IDConv-Net model.To assess the performance, we have used 80% data for the model training up, 10% data for model validation, and the rest 10% for model testing.Table 2 shows the data distribution for the training, validation, and testing sets, respectively, for a better understanding of both datasets.
Performance Metrics.
Performance measures are crucial to assessing the proposed approach.In this study, we measured precision, recall, F1-score, and accuracy using four metrics: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN).Each of these performance metrics is used to assess the performance described in the equation below: Precision (P): it is comprehended as a positive predictive value.It measures the proportion of positive and projected instances out of the total number of cases that are expected to be positive.
F1 = 2 × Precision × Recall Precision + Recall 11
Accuracy (A): the ratio is the number of right prediction cases separated by the total number of cases.
A = TP + TN TP + TN + FP + FN 12
The value of all performance metrics ranges from 0 to 1.
Results
The study used secondary datasets of two modalities: X-rays and CT images.The experiment utilized 15967 X-ray images, where 14370 images were used for model training and validation, and the remaining 1597 images were used to evaluate the model.Similar to the first experiment, 15471 CT images were used for model training and validation, and the remaining 1697 images were used to evaluate the model.In this experiment, 17168 CT images were used.Firstly, we ran the experiment five times to optimize the hyperparameters, including node size, batch size, learning rate, and drop rate.The optimized learning rate was 0.0001 and 0.001 for the X-ray and CT studies, respectively, and 0.99 momentum while training the model with the Adam optimizer using the binary cross-entropy loss.In our research, we have used 50 epochs.However, it was completed in 48 epochs for the CT images and 47 epochs for the X-ray images due to the early stopping function, which is accountable for terminating the execution when reaching an optimum result.Moreover, the complete trainable parameters in model train-ing were 8,004,481 out of 8,088,129.Furthermore, the sigmoid activation function is used in the final layer since our model works as a binary classifier.
We utilized X-ray and CT images separately to evaluate the performance of the proposed IDConv-Net model.To evaluate the performance of IDConv-Net, firstly, we train the model with X-ray images.We used 80% of the data for model training and 10% for validation.The remaining 10% was used to evaluate the model's performance.After evaluating the IDConv-Net model with X-ray images, we achieved an accuracy of 97.49% and 96.99% for training and testing, BioMed Research International respectively (see Table 5).Furthermore, the model achieved a precision of 97.14%, recall of 91.87%, and F1-score of 94.43% from the X-ray image dataset.From the confusion matrix of the X-ray image dataset (see Table 6), only 12 out of 419 COVID-19 images are misidentified.Furthermore, only 36 out of 1178 normal images are miss identified.
In a different study, we used a CT scan image dataset to train the IDConv-Net model.In this study, 80% of the data were utilized for model training, and 10% were used for model validation.The performance of the model was assessed using the final 10%.We achieved an accuracy of 99.53% and 98.41% for training and testing, respectively, after evaluating the IDConv-Net model on CT images.The findings of IDConv-Net are compared with other state-ofthe-art methods shown in Table 7, where the suggested model achieved a precision of 98.64%, recall of 96.31%, F1 -score 98.48%, training accuracy of 99.53%, and testing accuracy 98.41%.From the confusion matrix of the CT image dataset (see Table 8), only 12 out of 885 COVID-19 images are miss detected, whereas 15 out of 812 normal images are miss detected.The accuracy and confusion matrix proves the model classification reliability even with an entirely new data set.
Finally, we can infer that our proposed model can accurately classify COVID-19 and normal patients from X-ray and CT image datasets.The proposed model obtained a better accuracy with a bit of loss, which is shown in Figures 6 and 7.In another study, the model outperformed the existing models with a little loss on CT images, as shown in Figures 8 and 9.
Moreover, the area under the curve (AUC) summarizes the receiver operating characteristics (ROC) curve demonstrating the classifier's ability to distinguish between classes.The horizontal axis (X-axis) represents the false positive rate (FTR), and the vertical axis (Y-axis) represents the true positive rate (TPR).The AUC-ROC value is an indicator of the detection performance of the model, with a higher value indicating better performance.The AUC-ROC 0.954 and 0.966 have been achieved simultaneously from our proposed model using X-ray and CT image datasets shown in Figures 10 and 11, respectively.The results of our study indicate that training time for a deep learning model is an important consideration for detecting and classifying COVID-19.Based on the data presented in Table 9, it can be observed that our proposed IDConv-Net model exhibited significantly reduced training times compared to other transfer learning models.Specifically, the training time for the X-ray image dataset was only 31 ± 1 minutes, while the training time for the CT image dataset was 34 ± 1 minutes.These training times were substantially lower than those observed in existing models, which took double as long.Therefore, our IDConv-Net model can be considered a highly efficient and effective approach for image classification tasks.We have also demonstrated the random prediction outcomes of test images using 11 BioMed Research International our suggested IDConv-Net in Figures 12 and 13.In this direction, we evaluated the identification accuracy by comparing the actual and predicted test images with a confidence level.
Discussion
In this study, 2D-CT and X-ray images were used.The 2D approach is slice-based, using a single slice image as input to produce a score for each individual.As opposed to this, 3D is a volume-based technique that uses the entire volume (a sequence of slices) as its input to produce a single patient score.However, 2D is still trustworthy for inspecting impor-tant areas of images and for complex geometry.Moreover, we applied preliminary filtration to all chest images on the train set to control quality and remove incomprehensible slices for processing the chest CT images.Before being approved to train in the IDConv-Net model, two expert physicians graded the diagnosis for the images.A third expert evaluated the evaluation set to ensure no grading errors.Furthermore, we applied the generalization technique to enhance the model performance.The grid-search approach was utilized to identify the optimum hyperparameters.We have chosen the local minima by defining a set of discrete values.The objective function is then evaluated at its grid point by inputting the appropriate parameter values following that.Subsequently, the local minimum can be identified as the lowest objective function value grid point.The minimum and maximum values for each prior were also employed; these parameters were determined empirically according to the characteristics of the images and the number of instances.The parameter settings used in this study are shown in Table 4 with the initial and optimized parameters.
After training, the model can process new data and estimate accurate predictions.In addition, we also used some techniques like data feature extraction preprocessing to get an accurate classification.
We can see the results of the comparison of the proposed model with the state-of-the-art model in Table 5, where we obtained such excellent results due to applying some image preprocessing techniques like noise removal, filtering, data transformation, and feature extraction.Furthermore, our model is quicker because we use fewer layers than the state-of-the-art model.Figure 6 represents the training and validation accuracy of the proposed model during the training.Figure 7 shows the training and validation loss during the model training.From these curves, we can infer that there is no overfitting and exhibits a good model performance.Although we have used 50 epochs, the model terminates execution after 47 epochs for X-ray and 48 epochs for CT images due to the early stopping function.
Similarly, for the CT scan study, Table 7 highlights that our proposed model achieved an excellent performance compared to state-of-the-art models.Moreover, Figure 8 shows the accuracy for the training and validation sets during the model build-up.Similarly, Figure 9 indicates the loss of train and validation sets during the model train.After evaluating the model, we got excellent accuracy and loss curves which indicates the model's good performance.
The study's most significant part was increasing the accuracy level of detection and classification.It is also possible that the goal is to obtain accuracy as close to 100% as possible because, even in a few cases, misdiagnosis is not worth it.
Although similar models of CNN (e.g., AlexNet, nCOVnet, MobileNetV2, and ResNetV2) could detect COVID-19 with insufficient accuracy, moreover, more hidden layers of these models consume more time to yield results.In addition, these models increase the complexity of providing the detection results.Our proposed IDConv-Net model has great significance as a binary classifier.Firstly, it works as a feature extractor, then as a classifier.Moreover, the model Overall, the proposed IDConv-Net provides effective results individually on the X-ray and CT images.Finally, according to Tables 5 and 7, our suggested IDConv-Net model achieved the best accuracy for the X-ray and CT image datasets, respectively.Moreover, to avoid overfitting, we used dropout with a value of 0.3 in the last convolution layer of the proposed model.Furthermore, we used an early stopping function during the training of our proposed model to ensure that the model is not overfitted.Thus, the model is good and reliable for detecting COVID-19 in an unknown dataset.We also performed a qualitative analysis where the proposed IDConv-Net achieved a high prediction outcomes rate with a confidence level ranging from 95 to 99+ on the testing set, indicating that it can accurately classify COVID-19 using both X-ray and CT images.By using a more streamlined model architecture, we were able to reduce the computational demands of the training process while still achieving high levels of performance.Therefore, we can infer that the proposed model works appropriately for both datasets and acquires better accuracy than state-of-the-art detection and classification models.Additionally, DL models' predictions could have been understood and interpreted with the use of a collection of tools and frameworks called explainable AI (XAI).Furthermore, XAI develops a set of ML techniques that produce more understandable models while preserving high performance (prediction accuracy) and enabling human users to comprehend, properly trust, and manage the new breed of AI partners.Another solution to prevent COVID-19 is wearing a face mask and practicing regular hand washing.These are two important measures that effectively reduce the spread of COVID-19.Low-cost sensor-based hand washing techniques can contribute to reducing the spread of COVID-19.However, these measures are most effective when combined with other prevention strategies, such as social distancing and avoiding large gatherings [71,72].
The advantage of the study is the proposed model consists of fewer layers than other detection models.As a result, it reduces complexity and training time due to a lower layer than other models.Another advantage of the model is that it can detect and classify both data types with higher accuracy.The most vital advantage of the model is that it does not contain overfitting in both datasets' training and testing results.In addition, the following advantages of the model can increase its accuracy if we use balanced datasets.In contrast, the drawbacks of the study are that the model yields less accuracy for X-ray images than CT images due to poor resolution and bony structure of chest scan.However, it can be overcome using high-resolution X-ray images.Another drawback of the model might reduce accuracy if we use imbalanced datasets.The other drawback is that some slices among hundreds of pieces do not contain disease features.These slices are taken from the chest scan's superior/upper, middle, or inferior/lower part.As a result, the model sometimes provides a minor misclassification for COVID-19.
Conclusion and Recommendation
COVID-19 poses a severe threat to all living things in the world.A new variant of COVID-19 (e.g., Omicron) will be dangerous and deadly if it mutates with delta or another lethal variant and then spreads quickly worldwide.As a result, early detection of COVID-19 can protect against its spread by isolating affected people.For this purpose, our proposed IDConv-Net can compensate by detecting and classifying COVID-19 at an early stage.Our proposed IDConv-Net model achieves a training accuracy of 99.53% and a testing accuracy of 98.41% for CT images.On the other hand, the IDConv-Net model also achieves a training accuracy of 97.49% and a testing accuracy of 96.99% for X-ray images.Furthermore, our suggested IDConv-Net model outperforms previous COVID-19 detection and classification models that are currently available.Additionally, our proposed model requires less training time than existing models to detect and classify COVID-19.
Overall, while the proposed model has shown great promise in medical imaging applications, several challenges still need to be addressed to make them more effective and practical for use in real-world settings.The model is considered black-box, meaning it can be difficult to understand how they make their predictions.In the future, we plan to use Grad-CAM and XAI to make the model more comprehensive and user-friendly for disease diagnosis.
displays the proposed IDConv-Net model, which outlines its constituent layers, including their corresponding output sizes.The IDConv-Net model is comprised of five convolution layers, four activation layers, and four max-pooling layers.The resulting output features are then passed through a flatten layer, a dense layer, a dropout layer, and a sigmoid activation layer.4.2.Classification.The classification layer is the final layer of the proposed model that produces the network's output in the form of predefined categories or classes.The classification layer follows the feature extraction layer that extracts the high-level features from the image.The output of the feature extraction layer is sent to a flattened layer as the first
Figure 4 :Figure 5 :
Figure 4: The architecture of our recommended IDConv-Net model.
7
(i) TP.The experimental result for the COVID-19 patients is accurate.That means the model detects positive results for COVID-19-affected patients (ii) TN.The experimental result for the Non-COVID-19 patients is accurate.That means the model detects negative results for Non-COVID-19 affected patients
Figure 6 :
Figure 6: Accuracy curve of the proposed IDConv-Net model for X-ray images.
Figure 7 :Figure 8 :Figure 9 :
Figure 7: Loss curve of the proposed IDConv-Net model for X-ray images.Model accuracy
Figure 10 :Figure 11 :
Figure 10: AUC-ROC of the proposed IDConv-Net model for X-ray images.
Figure 12 :
Figure 12: Random prediction outcomes using proposed IDConv-Net model from X-ray test images.
Figures 12 and 13 illustrate the actual and predicted outcomes with a confidence level of identification for X-ray and CT images, respectively.These results suggest that a deep CNN model can be an effective tool for COVID-19 diagnosis and potentially assist healthcare professionals in detecting and treating the virus.Moreover, the results of our study demonstrate that our proposed model can detect and classify COVID-19 in a relatively short time frame.As shown in Table 9, our proposed model achieved comparable outcomes to a transfer learning model while requiring less training time across different image modalities.The reduced training time of our proposed model can be attributed to several factors, including the use of fewer layers in the model architecture and the implementation of enhanced preprocessing techniques.
Figure 13 :
Figure 13: Random prediction outcomes using proposed IDConv-Net model from CT test images.
Table 1 :
Details of X-ray and CT datasets before and after merging images.
Table 2 :
The datasets used in this study.
Table 3 :
The proposed IDConv-Net model with its layers and output size.
Table 4 :
Parameter settings used during this study.
Table 5 :
Performance comparisons between the IDConv-Net and state-of-the-art models on X-ray images.
Table 6 :
Confusion matrix of the proposed IDCNN model on X-ray images.It is significant to the experiment because of indicating the test accuracy.
Table 7 :
Performance comparisons between the IDConv-Net and state-of-the-art models on CT images.
Table 8 :
Confusion matrix of the proposed IDCNN model on CT images.
Table 9 :
Time comparison between our proposed IDConv-Net and state-of-the-art models (50 epochs). | 9,538.2 | 2023-12-11T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Energy Management Optimization of a Dual Motor Lithium Ion Capacitors-Based Hybrid Super Sport Car
: Nowadays, hybrid electric vehicles represent one of the main solutions for the reduction of greenhouse gases in the automotive sector. Alongside the reduction of CO 2 , hybrid electric vehicles serve as a strong alternative on drivability and performance to conventional internal combustion engine-based vehicles. Vehicles exist with various missions; super sport cars usually aim to reach peak performance and to guarantee a great driving experience to the driver, but great attention must also be paid to fuel consumption. According to the vehicle mission, hybrid electric vehicles can differ in the powertrain configuration and the choice of the energy storage system. Manufacturers have recently started to work on Lithium-Ion Capacitors (LiC) -based hybrid vehicles. This paper discusses the usage of a control-oriented vehicle and powertrain model to analyze the performance of a dual motor LiC-based hybrid V12 vehicle by Automobili Lamborghini. P3–P4 and P2–P4 parallel hybrid configurations have been selected and compared since they allow to fully exploit the potential of the LiC storage system characterized by high power. The validated model has been used to develop control strategies aimed at fuel economy and CO 2 reduction, and in particular, both Rule Based Strategies (RBS) and Equivalent Consumption Minimization Strategies (ECMS) are presented in the paper. A critical comparison between the various powertrain configurations is carried out, keeping into account the peculiarities of the LiC technology and evaluating the performance of the different control approaches.
Research Motivation
The current work aims to deepen the analysis of the longitudinal dynamics of a dual motor Lithium-Ion Capacitor (LiC)-based hybrid super sport car. The study of vehicle dynamics has acquired strong importance throughout the years since it allows us to understand and optimize the vehicle characteristics fully. Moreover, the possibility to analyze the vehicle behavior through simulation and modeling activities shifts the focus on virtual or software environments instead of experimental testing. This method allows saving money and time during the vehicle development process.
One of the great advantages of computer simulation techniques is in fact represented by the possibility to analyze various design proposals with easiness and without the need for a prototype as it would be required for experimental testing. It must be considered that computer simulations are useful only if the software is reliable, meaning that it is able to reproduce faithfully the behavior of the actual vehicle. In that case, considerable savings in time and costs are expected. The main experimental activities that are run on the entire vehicle are represented by chassis dynamometer testing through emission cycles or road experimental testing. Experimental data from the chassis dynamometer can be used to The vehicle performance directly depends on the hybrid powertrain. The Electric Motors' (EMs) parallel configurations that are analyzed are represented by the P3-P4 and P2-P4 positions, as shown in Figure 2. The P4 EM is placed at the front axle, while the rear EM (P2 or P3) is directly coupled with the gearbox. In both cases, the hybrid system can directly power the wheels if requested. Figure 2 shows the EMs that are mechanically connected to the shafts. The gear ratios will be dimensioned to keep the motors connected until certain target speeds.
The P4 front EM will be disengaged at 190 km/h, while the rear EMs will disengage at the vehicle's maximum speed, approximately equal to 350 km/h. Thus, they will be able to cover the complete speed range of the vehicle.
As shown in Figure 2, the resulting Lamborghini Aventador will be a 4WD vehicle where the front wheels are powered exclusively through the P4 EM that is mechanically connected to the front axle. The vehicle performance directly depends on the hybrid powertrain. The Electric Motors' (EMs) parallel configurations that are analyzed are represented by the P3-P4 and P2-P4 positions, as shown in Figure 2. The P4 EM is placed at the front axle, while the rear EM (P2 or P3) is directly coupled with the gearbox. In both cases, the hybrid system can directly power the wheels if requested. Figure 2 shows the EMs that are mechanically connected to the shafts. The gear ratios will be dimensioned to keep the motors connected until certain target speeds.
The P4 front EM will be disengaged at 190 km/h, while the rear EMs will disengage at the vehicle's maximum speed, approximately equal to 350 km/h. Thus, they will be able to cover the complete speed range of the vehicle.
As shown in Figure 2, the resulting Lamborghini Aventador will be a 4WD vehicle where the front wheels are powered exclusively through the P4 EM that is mechanically connected to the front axle. Appl. Sci. 2021, 11, x FOR PEER REVIEW 3 of 22
Literature Review
The energy storage system that is evaluated is uniquely based on LiCs. This kind of technology is characterized by high specific power, high cycle life, and low specific energy [2] and usually finds its application for operations like Start&Stop [3]. Since it is considered difficult to use capacitors alone as an energy storage reservoir [2,4], they are often used as auxiliaries in combination with other energy storage systems [5][6][7]. The hybrid energy storage system allows to decouple the specific energy and specific power requirements, and while the capacitors cover the power request, the main energy storage system can be optimized for the energy request and cycle life.
The decision to work with an energy storage system uniquely LiC-based is innovative, and Automobili Lamborghini has already started to investigate an application of this kind, as shown in [1]. This vehicle represents one of the first proposals of the company in the hybrid market. The choice of high-power energy storage based on LiCs allows covering the torque gap, adding a boost function, and reducing fuel consumption.
Research Contributions
Different from the previous study, the main contribution of this work is represented by fuel economy optimization specifically designed for a LiC-based energy storage system, as a new strategy is modeled to take the benefits of the system's characteristics. The application is innovative also for the kind of functions implemented that are usually satisfied through high energy systems [8][9][10], while in this study, they are destined to a lower energy content system.
Once the conventional vehicle model has been validated, the LiC-based configuration is analyzed in detail. Later, the control strategy is introduced, and emissions cycles are simulated.
At first, the control strategy model will be based on a Rule Based Strategy (RBS), which will target lower fuel consumption results through control rules. Afterward, an Equivalent Consumption Minimization Strategy (ECMS) is implemented [9,10].
Fuel economy is not easy to obtain with this kind of application. In fact, the vehicle is characterized by a V12 engine [11], with a large displacement and high CO2 emissions and fuel consumption values, as all super sport cars do.
At last, a simulation with a smaller engine displacement is run, and it is compared to hybrid vehicles commonly available on the market. This allows to evaluate the impact of the hybrid system properly.
Literature Review
The energy storage system that is evaluated is uniquely based on LiCs. This kind of technology is characterized by high specific power, high cycle life, and low specific energy [2] and usually finds its application for operations like Start&Stop [3]. Since it is considered difficult to use capacitors alone as an energy storage reservoir [2,4], they are often used as auxiliaries in combination with other energy storage systems [5][6][7]. The hybrid energy storage system allows to decouple the specific energy and specific power requirements, and while the capacitors cover the power request, the main energy storage system can be optimized for the energy request and cycle life.
The decision to work with an energy storage system uniquely LiC-based is innovative, and Automobili Lamborghini has already started to investigate an application of this kind, as shown in [1]. This vehicle represents one of the first proposals of the company in the hybrid market. The choice of high-power energy storage based on LiCs allows covering the torque gap, adding a boost function, and reducing fuel consumption.
Research Contributions
Different from the previous study, the main contribution of this work is represented by fuel economy optimization specifically designed for a LiC-based energy storage system, as a new strategy is modeled to take the benefits of the system's characteristics. The application is innovative also for the kind of functions implemented that are usually satisfied through high energy systems [8][9][10], while in this study, they are destined to a lower energy content system.
Once the conventional vehicle model has been validated, the LiC-based configuration is analyzed in detail. Later, the control strategy is introduced, and emissions cycles are simulated.
At first, the control strategy model will be based on a Rule Based Strategy (RBS), which will target lower fuel consumption results through control rules. Afterward, an Equivalent Consumption Minimization Strategy (ECMS) is implemented [9,10].
Fuel economy is not easy to obtain with this kind of application. In fact, the vehicle is characterized by a V12 engine [11], with a large displacement and high CO 2 emissions and fuel consumption values, as all super sport cars do.
At last, a simulation with a smaller engine displacement is run, and it is compared to hybrid vehicles commonly available on the market. This allows to evaluate the impact of the hybrid system properly.
Materials and Methods
This analysis was carried out in a MATLAB/Simulink(R2019a)-based simulation environment, working on the longitudinal dynamics vehicle model previously introduced. The various hybrid control strategies were analyzed, comparing their fuel economy impact and the feasibility of the proposals.
Longitudinal Dynamics Model
As explained in [1], the longitudinal dynamics model was based on the equilibrium along the X and Z direction, in addition to the momentum equilibrium, as it can be seen in Figure 3.
Materials and Methods
This analysis was carried out in a MATLAB/Simulink(R2019a)-based simulation environment, working on the longitudinal dynamics vehicle model previously introduced. The various hybrid control strategies were analyzed, comparing their fuel economy impact and the feasibility of the proposals.
Longitudinal Dynamics Model
As explained in [1], the longitudinal dynamics model was based on the equilibrium along the X and Z direction, in addition to the momentum equilibrium, as it can be seen in Figure 3. Reported below is the equation for the longitudinal dynamics: Solving (1), where + express the longitudinal traction force for the front and rear wheel, respectively, is the aerodynamic force, + are the rolling resistance produced on each axle, the longitudinal acceleration can be determined and, by integration, the longitudinal speed. The longitudinal dynamics model aims to accurately simulate emission cycles or real driving cycles. At first, the cycle will be chosen from an implemented pop-up menu that includes both homologation driving cycles and real experimental driving cycles obtained from road tests.
The driving cycle generates a target speed, a time-dependent quantity that is the input for the vehicle model. The target speed is compared with the actual speed, and their difference enters the PI controller, where a command corresponding to the driver accelerator and braking pedals was generated.
The command value will act as a torque request to the wheels that will be satisfied by the hybrid powertrain. When possible, the powertrain components were based on experimental data, guaranteeing low simulation time but relatively low accuracy (especially for the dynamic behavior of the model). Otherwise they were modeled on physics laws and validated.
PI Driver
The PI controller output was normalized between −1 and 1, representing 3 different cases: Reported below is the equation for the longitudinal dynamics: Solving (1), where F x f + F xr express the longitudinal traction force for the front and rear wheel, respectively, F aero is the aerodynamic force, R x f + R xr are the rolling resistance produced on each axle, the longitudinal acceleration can be determined and, by integration, the longitudinal speed. The longitudinal dynamics model aims to accurately simulate emission cycles or real driving cycles. At first, the cycle will be chosen from an implemented pop-up menu that includes both homologation driving cycles and real experimental driving cycles obtained from road tests.
The driving cycle generates a target speed, a time-dependent quantity that is the input for the vehicle model. The target speed is compared with the actual speed, and their difference enters the PI controller, where a command corresponding to the driver accelerator and braking pedals was generated.
The command value will act as a torque request to the wheels that will be satisfied by the hybrid powertrain. When possible, the powertrain components were based on experimental data, guaranteeing low simulation time but relatively low accuracy (especially for the dynamic behavior of the model). Otherwise they were modeled on physics laws and validated.
PI Driver
The PI controller output was normalized between −1 and 1, representing 3 different cases:
•
If it is positive, the car must accelerate to reach the target speed. This signal matches the accelerator driver pedal.
• If it is equal to zero, the target speed is reached.
•
If it is negative, the vehicle must brake.
Vehicle Resistances
Vehicle resistances have been based on experimental data. Vehicle tests have been run to determine the dependency of the resistant force with speed, according to the following coast-down equation: where are the vehicle coastdown coefficients.
According to this formula, the F res already models the aerodynamics, rolling and all the vehicle resistances. This approach was easily implementable but required experimental testing on the specific vehicle or on the most similar vehicle available.
On the other side, data of this kind can be easily shared due to the ease of use of the model. Since a prototype is not available at the moment, it has been established to work with this approach.
Powertrain
The V12 engine model was based on experimental data and on the torque map that was inserted in the control unit.
The powertrain model was completed through the gearbox and transmission model. The torque was transmitted to the gearbox shafts and multiplied by the gear ratio, to the wheels.
Electric powertrain components were not present in the conventional vehicle model and will be introduced in a second phase to reproduce the hybrid configuration.
Model Validation
For the model validation, the reader is referred to the following document [1]. There, the model validation for the conventional Lamborghini Aventador vehicle was carried out. The two works took that configuration as a starting point while they differed in the hybrid powertrain components.
Hybrid Powertrain
Once the conventional model was complete, the hybrid powertrain was designed and modeled.
As it has already been explained, the hybrid powertrain consisted of 2 identical EMs, one at the front axle in P4 position, and the other in P3 or P2 position at the rear axle. For every EM, a proper transmission ratio was designed that could fit the desired speed range.
The EMs models were based on experimental maps that kept into account the contribution of the single EM and the inverter associated with it. The energy storage system was based on LiCs and the reader is referred to [1] for the model detailed description.
The masses of the hybrid components will be added in the simulation, the LiC-based hybrid vehicle was expected to weigh 57 kg more than the conventional one.
Transmission Ratios
The EMs mechanical connections were dimensioned with reference to the maximum admissible speed established in the project design.
The transmission ratios were dimensioned to guarantee front electric traction and recuperation until 190 km/h, when the front EM in P4 position was detached. On the other hand, the P2-P3 EMs will be detached at maximum speed, and they will be able to power the rear wheels over the complete speed vehicle range. These target speed values will be compared to the maximum EM speed, equal to 24,000 RPM, and the transmission ratio was determined (Table 1). Since the P2 configuration was positioned at the primary gearbox shaft, the transmission ratio value will be determined keeping into account the gearbox gear ratios. The chosen value will guarantee an EM speed below 24,000 RPM for any inserted gear.
Electric Motors
Every EM was associated with an inverter, and their contribution was described by black-box models based on experimental testing results provided by the suppliers.
All the EMs were assumed to be identical. Every EM guaranteed a maximum torque of 68 Nm and a maximum power of 65 kW. The EMs can run up to 24,000 RPM.
Lithium-Ion Capacitors
The usage of LiCs as the main and only energy storage system was innovative. LiCs were derived from Electric Double Layer Capacitors (EDLC) [6], and they combined the activated Carbon cathode of an EDLC with the Li-doped Carbon anode of Lithium-Ion Batteries to ensure great power and good energy content.
On the other side, Li-Ion Batteries were more commonly installed in hybrid electric vehicles and electric vehicles due to the high energy content that they can provide [12]. Typically, common batteries guarantee high energy values (meaning a high electric range in automotive applications) but cannot guarantee high power performance [2,13].
In Figure 4, a first approach to the description of the LiC behavior is shown. The scheme was simple, representing only a series resistance R s and a capacitance C [2]. The leakage resistance R L allowed to describe a more detailed model but could be omitted without losing too much in accuracy. hand, the P2-P3 EMs will be detached at maximum speed, and they will be able to power the rear wheels over the complete speed vehicle range. These target speed values will be compared to the maximum EM speed, equal to 24,000 RPM, and the transmission ratio was determined (Table 1). Since the P2 configuration was positioned at the primary gearbox shaft, the transmission ratio value will be determined keeping into account the gearbox gear ratios. The chosen value will guarantee an EM speed below 24,000 RPM for any inserted gear.
Electric Motors
Every EM was associated with an inverter, and their contribution was described by black-box models based on experimental testing results provided by the suppliers.
All the EMs were assumed to be identical. Every EM guaranteed a maximum torque of 68 Nm and a maximum power of 65 kW. The EMs can run up to 24,000 RPM.
Lithium-Ion Capacitors
The usage of LiCs as the main and only energy storage system was innovative. LiCs were derived from Electric Double Layer Capacitors (EDLC) [6], and they combined the activated Carbon cathode of an EDLC with the Li-doped Carbon anode of Lithium-Ion Batteries to ensure great power and good energy content.
On the other side, Li-Ion Batteries were more commonly installed in hybrid electric vehicles and electric vehicles due to the high energy content that they can provide [12]. Typically, common batteries guarantee high energy values (meaning a high electric range in automotive applications) but cannot guarantee high power performance [2,13].
In Figure 4, a first approach to the description of the LiC behavior is shown. The scheme was simple, representing only a series resistance and a capacitance [2]. The leakage resistance allowed to describe a more detailed model but could be omitted without losing too much in accuracy. Defining Q SC the charge stored within the LiC, and C the capacitance, the voltage V c can be calculated as follows: According to Kirchhoff's rule, the terminal power is given by: Thus, the current i can be determined as [14]: Defining the charge stored within the LiC, and the capacitance, the voltage can be calculated as follows: According to Kirchhoff's rule, the terminal power is given by: Thus, the current can be determined as [14]: As explained in [1,15], the LiC model could be more complicated and detailed, reproducing a wider range of phenomena, but that level of detail was not strictly needed for automotive control-oriented applications. Moreover, according to previous works [1], the analysis of industrial capacitors from different suppliers showed that the choice to work with a single RC circuit branch was justified.
A validation based on experimental tests has been run, setting the required power as input ( Figure 6). The simulated terminal voltage was compared with the experimentally measured one in Figure 7. As explained in [1,15], the LiC model could be more complicated and detailed, reproducing a wider range of phenomena, but that level of detail was not strictly needed for automotive control-oriented applications. Moreover, according to previous works [1], the analysis of industrial capacitors from different suppliers showed that the choice to work with a single RC circuit branch was justified.
A validation based on experimental tests has been run, setting the required power as input ( Figure 6). The simulated terminal voltage was compared with the experimentally measured one in Figure 7. Defining the charge stored within the LiC, and the capacitance, the voltage can be calculated as follows: According to Kirchhoff's rule, the terminal power is given by: Thus, the current can be determined as [14]: As explained in [1,15], the LiC model could be more complicated and detailed, reproducing a wider range of phenomena, but that level of detail was not strictly needed for automotive control-oriented applications. Moreover, according to previous works [1], the analysis of industrial capacitors from different suppliers showed that the choice to work with a single RC circuit branch was justified.
A validation based on experimental tests has been run, setting the required power as input ( Figure 6). The simulated terminal voltage was compared with the experimentally measured one in Figure 7. The voltage root mean square error was equal to 0.2 V, and the model can thus be considered reliable.
Once the model was validated, the LiC configuration was chosen. The system will be a 60s1p (60 series cells and 1 parallel string) that stores 0.26 kWh and could reach over 130 kW of power, satisfying the power request of the EMs. The working voltage spans from 132 V to 228 V.
Hybrid Control Strategies Design
The hybrid control strategies were designed according to the vehicle mission. This work mainly wanted to investigate the possibility of guaranteeing fuel economy through the hybrid application.
As we were considering a super sport car application, the vehicle was characterized by a V12 engine, with a large displacement and high CO2 emissions and fuel consumption values, as all super sport cars do. The EMs account for a power and torque that are strongly inferior to the one expressed by the ICE (approximately 15%). Consequently, their impact on the performance and on fuel economy is expected to be small, and the percentage of improvements on fuel economy are expected to be quite low. Various hybrid powertrain configurations will be analyzed, and the simulated results will be reported as a comparison between the conventional vehicle and the hybrid configurations.
With the introduction of a control strategy, the electric energy can be managed in the best way to guarantee fuel economy. Moreover, a control strategy is required to respect the technology voltage limits. Good energy management can guarantee the proper use of technology, activating control functions as energy recuperation or boost.
MATLAB/Simulink allows the user to design various configurations and to activate only the desired one while keeping the others disabled. This was possible thanks to the Variant Subsystems [16], where the active choice was determined by the variant control, which can be a Boolean expression or also a string. The decision to work with a Variant Subsystem (Figure 8) allowed the user to choose the control strategy directly from the MATLAB environment, without necessarily entering Simulink. This means that the model was more easily accessible even to users who have not participated in the design of the model itself. Any unwanted changes in the Simulink environment could generate major problems, especially if the model was shared between users and departments. This is avoided when the model control is done via MATLAB. The voltage root mean square error was equal to 0.2 V, and the model can thus be considered reliable.
Once the model was validated, the LiC configuration was chosen. The system will be a 60s1p (60 series cells and 1 parallel string) that stores 0.26 kWh and could reach over 130 kW of power, satisfying the power request of the EMs. The working voltage spans from 132 V to 228 V.
Hybrid Control Strategies Design
The hybrid control strategies were designed according to the vehicle mission. This work mainly wanted to investigate the possibility of guaranteeing fuel economy through the hybrid application.
As we were considering a super sport car application, the vehicle was characterized by a V12 engine, with a large displacement and high CO 2 emissions and fuel consumption values, as all super sport cars do. The EMs account for a power and torque that are strongly inferior to the one expressed by the ICE (approximately 15%). Consequently, their impact on the performance and on fuel economy is expected to be small, and the percentage of improvements on fuel economy are expected to be quite low. Various hybrid powertrain configurations will be analyzed, and the simulated results will be reported as a comparison between the conventional vehicle and the hybrid configurations.
With the introduction of a control strategy, the electric energy can be managed in the best way to guarantee fuel economy. Moreover, a control strategy is required to respect the technology voltage limits. Good energy management can guarantee the proper use of technology, activating control functions as energy recuperation or boost.
MATLAB/Simulink allows the user to design various configurations and to activate only the desired one while keeping the others disabled. This was possible thanks to the Variant Subsystems [16], where the active choice was determined by the variant control, which can be a Boolean expression or also a string. The decision to work with a Variant Subsystem (Figure 8) allowed the user to choose the control strategy directly from the MATLAB environment, without necessarily entering Simulink. This means that the model was more easily accessible even to users who have not participated in the design of the model itself. Any unwanted changes in the Simulink environment could generate major problems, especially if the model was shared between users and departments. This is avoided when the model control is done via MATLAB. Appl. Sci. 2021, 11, x FOR PEER REVIEW 9 of 22 In this regard, a graphical user interface could be implemented to further facilitate operations for users who are not familiar with the software.
RBS Hybrid Control
The first strategy implemented was an RBS. An RBS is a control strategy based primarily on control rules that establish the behavior of the vehicle's individual components [2,9,10,17].
The rules considered were fixed mathematical rules. These rules controlled the vehicle behavior, and they aimed to maximize the powertrain capabilities, for example, aiming to use as much electrical energy as possible to reduce fuel consumption or shifting the ICE working point to recharge the battery.
This kind of control strategy compared the system variables with thresholds whose reference values were fixed thanks to hypothetical evaluations, the designer's experience, and calibration activities based on experimental tests.
The vehicle behavior can be modeled through simple, low computational effort subsystems or more complex subsystems that could require larger amounts of data and higher computational effort. As shown in Figure 9, the main target of the RBS was to make the vehicle work in electric drive mode as soon as the electrical energy storage was full, i.e., the battery State of Charge (SoC) reached a value of 90%. Every time the electric drive mode was activated, the fuel consumption was reduced to the minimum, as the ICE could be shut down or it could be working in idle conditions. In this regard, a graphical user interface could be implemented to further facilitate operations for users who are not familiar with the software.
RBS Hybrid Control
The first strategy implemented was an RBS. An RBS is a control strategy based primarily on control rules that establish the behavior of the vehicle's individual components [2,9,10,17].
The rules considered were fixed mathematical rules. These rules controlled the vehicle behavior, and they aimed to maximize the powertrain capabilities, for example, aiming to use as much electrical energy as possible to reduce fuel consumption or shifting the ICE working point to recharge the battery.
This kind of control strategy compared the system variables with thresholds whose reference values were fixed thanks to hypothetical evaluations, the designer's experience, and calibration activities based on experimental tests.
The vehicle behavior can be modeled through simple, low computational effort subsystems or more complex subsystems that could require larger amounts of data and higher computational effort. As shown in Figure 9, the main target of the RBS was to make the vehicle work in electric drive mode as soon as the electrical energy storage was full, i.e., the battery State of Charge (SoC) reached a value of 90%. Every time the electric drive mode was activated, the fuel consumption was reduced to the minimum, as the ICE could be shut down or it could be working in idle conditions. In this regard, a graphical user interface could be implemented to further facilitate operations for users who are not familiar with the software.
RBS Hybrid Control
The first strategy implemented was an RBS. An RBS is a control strategy based primarily on control rules that establish the behavior of the vehicle's individual components [2,9,10,17].
The rules considered were fixed mathematical rules. These rules controlled the vehicle behavior, and they aimed to maximize the powertrain capabilities, for example, aiming to use as much electrical energy as possible to reduce fuel consumption or shifting the ICE working point to recharge the battery.
This kind of control strategy compared the system variables with thresholds whose reference values were fixed thanks to hypothetical evaluations, the designer's experience, and calibration activities based on experimental tests.
The vehicle behavior can be modeled through simple, low computational effort subsystems or more complex subsystems that could require larger amounts of data and higher computational effort. As shown in Figure 9, the main target of the RBS was to make the vehicle work in electric drive mode as soon as the electrical energy storage was full, i.e., the battery State of Charge (SoC) reached a value of 90%. Every time the electric drive mode was activated, the fuel consumption was reduced to the minimum, as the ICE could be shut down or it could be working in idle conditions. The discharge speed of the electrical energy storage depended on the technology's characteristics and, as soon as the minimum SoC value was chosen from calibration (i.e., SoC = 30%) was reached, the electric driving mode was stopped, and only the regenerative braking function was kept active. In this implementation, energy recuperation was possible only through the regenerative braking function, as no load point shift function is momentarily implemented.
The SoC was not the only control variable of the strategy, the EMs speed and the torque request were also kept into account, as they will be compared with their corresponding limit values.
The feasibility of the pure electric mode activation has been analyzed, and the possibility to work with the ICE in shut down conditions has been discarded. If the ICE were to be turned off, we would have issues with the lubricant flow, on the frictions, and on the heating of the powertrain components. Other issues would have been generated by the speed difference between the clutch and the gearbox components when the ICE reconnection takes place.
Then, the ICE will be kept in idle conditions, guaranteeing low fuel consumption values and allowing the clutch and gearbox to work in conditions that do not heavily stress the mechanical components.
The strategy will be activated after the warm-up phase of the after-treatment system on the Worldwide Harmonized Light Vehicle Test Procedure (WLTP) 3b cycle to guarantee the usual heating strategy for the post-treatment components.
The warm-up phase of the WLTP 3b emission cycle represents one of the most controversial intervals for the fuel consumption analysis. The introduction of the hybrid control strategy would necessarily generate changes in vehicle behavior during this starting phase. These changes were unpredictable, and it was very difficult to simulate them accurately.
The decision to maintain the same behavior as the conventional vehicle guaranteed the reduction of possible errors to the minimum since the results were analyzed as a comparison between cycle simulations.
Equivalent Consumption Minimization Strategy
Further considerations were made for the control strategy. The decision to work with an RBS strategy that targeted fuel economy ensured the possibility to work with a simple system. The RBS can provide a high simulation speed since it can work with control rules, but at the same time, it does not guarantee the best working condition for every integration step.
In fact, even if the engineering assumptions made for the RBS design make total sense, the rules and calibration were not flexible and cannot be adapted to the driving conditions.
An ECMS was a sub-optimal strategy that targeted the minimization of the instantaneous equivalent fuel consumption value [10]. It was evaluated in this work to understand whether the development of a system of that kind was justified by the fuel economy benefits. The ECMS was designed ( Figure 10) to improve the power flow distribution to the wheels. During braking, the regenerative braking control function from the RBS will be maintained, and the strategy will also keep into account the delayed activation due to the catalyst heating. As it can be seen in various documents, ECMS represented a commonly adopted solution for the fuel economy target. Extended documentation exists regarding this topic [8][9][10][18][19][20], and the main elements of this strategy were further analyzed.
Split Factor (u)
The control variable for this kind of strategy is , the torque split factor between ICE and EM.
Once the torque working range was defined, both for the ICE and the EM, the factor that minimizes the equivalent fuel consumption was sought. The torque request must be fulfilled by combining the 2 power sources.
The torque split factor is defined as follows: The split factor working window (Table 2), and consequently the ICE and EM torque working window, was determined at each integration step through the EM, ICE, and battery limits. Table 2. Split factor u complete working range, it will be further limited due to the system's limitations.
Split factor u
Driving Mode u = 1 Electric Drive 0 < u < 1 Hybrid Boost Operation u = 0 ICE only −n < u < 0 Battery Recharge Once the working window was defined, it was discretized in evenly spaced intervals to guarantee a fixed-size simulation array. The variable discretization that was chosen guaranteed a real-time simulation.
This simulation environment was a concept tool that allowed to carry out preliminary evaluations on the vehicle's longitudinal dynamics and needs to maintain a balance between simulation speed and simulation accuracy.
Split Factor (hy)
Secondly, the torque was split between the 2 EMs. The electrical limitations imposed an operational range for the 2 EMs that must be considered. Hypothetically, we could decide to deliver all the electrical torque with the front EM instead of the rear one or vice versa. The ℎ factor (Table 3) will define the split.
Split Factor (u)
The control variable for this kind of strategy is u, the torque split factor between ICE and EM.
Once the torque working range was defined, both for the ICE and the EM, the u factor that minimizes the equivalent fuel consumption was sought. The torque request must be fulfilled by combining the 2 power sources.
The torque split factor u is defined as follows: T HY = u·T REQ , The split factor u working window (Table 2), and consequently the ICE and EM torque working window, was determined at each integration step through the EM, ICE, and battery limits. Table 2. Split factor u complete working range, it will be further limited due to the system's limitations.
Split Factor u
Driving Mode u = 1 Electric Drive 0 < u < 1 Hybrid Boost Operation u = 0 ICE only −n < u < 0 Battery Recharge Once the working window was defined, it was discretized in evenly spaced intervals to guarantee a fixed-size simulation array. The variable discretization that was chosen guaranteed a real-time simulation.
This simulation environment was a concept tool that allowed to carry out preliminary evaluations on the vehicle's longitudinal dynamics and needs to maintain a balance between simulation speed and simulation accuracy.
Split Factor (hy)
Secondly, the torque was split between the 2 EMs. The electrical limitations imposed an operational range for the 2 EMs that must be considered. Hypothetically, we could decide to deliver all the electrical torque with the front EM instead of the rear one or vice versa. The hy factor (Table 3) will define the split.
The torque split factor hy was defined as follows (apart from the transmission ratios): Once the working window was defined, it was discretized in evenly spaced intervals, to guarantee a fixed-size simulation array. The variable discretization that was chosen guaranteed a real-time simulation.
The combination of the u and hy vectors will define a matrix of possible torque values, and subsequently of equivalent fuel consumption, from which the minimum value that satisfies the optimality criterion will be extracted.
Equivalent Fuel Consumption
The ECMS aimed to identify the best power flow distribution between the energy converters at every integration step, such that the optimality criterion that has been chosen is achieved.
At first, a global cost function was defined, that considered the usage of both the ICE and the EMs to power the vehicle. Their contribution was evaluated through the calculation of the equivalent consumption value, at every integration step, defined as follows: m bat value will be calculated based on the electrical power request and to the cost function, which depends on the equivalent cost and on the system's working point.
The Lower Heating Value (LHV) [J/g] will divide the power request to the battery to convert, using the equivalent factor s, electrical energy into virtual fuel consumption. Consequently, where η EM is the efficiency of the EM, P EM is the power request to the EM and γ is the factor that allows to properly evaluate if the EM is working as a generator (P EM < 0) or as an electric motor (P EM > 0), and it is defined as follows: This formulation can be easily implemented into the longitudinal dynamics model.
The minimization of the equivalent fuel consumption . m eq brings along the definition of the power flow distribution between ICE and EM, as the T ICE and T HY pair is defined.
The general formulation of the minimization problem refers to any kind of energy storage system and can be found in [8,21]. Defining ξ as the SoC, u the control variable, Q bat the battery charge capacity, I bat the battery current: .
It is possible to define the Hamiltonian of the optimal control problem: .
Thus, the Hamiltonian is the total equivalent fuel consumption. Introducing s(t): At the end: The optimal control satisfies: The optimal control depends on s(t), but its value is unknown a priori, thus the strategy is sub-optimal.
Battery Energy Cost Function (s)
The s factor indicates the cost of the electrical energy; it is dependent on the system's working conditions and can be formulated as follows [8,9]: The cost function was calculated at every integration step, and when its value is high, it makes it preferable to use the engine and recharge the battery, while if it is low, it makes the electric traction preferable.
The curly brackets contain a penalty term that modifies the s value when the SoC is near to the maximum or minimum acceptable values, making the electrical energy cost, respectively, lower or higher. On the other hand, the second term is a proportional correction obtained considering the difference between the target value of the SoC, and the actual one.
The parameters k p and k a are calibrated. Their value choice is decisive for the simulation results, as it directly impacts the cost function and consequently, the hybrid performance and the fuel consumption.
For this application, the electrical energy is low-cost since the LiC energy storage is a high-power system, and it can be charged and discharged rapidly. The values of the calibrated parameters are chosen accordingly.
The SoC target can be chosen depending on the hybrid vehicle mission and the technology that is used. The choice to work with a constant value is the simplest one, and it makes the strategy a Charge Sustaining (CS) one.
Other possibilities that are commonly used for Li-ion Batteries-based vehicles are represented by the Charge-Depleting/Charge Sustaining (CD/CS), which firstly discharges the battery until a certain SoC value and then keeps the value around a target SoC, or the Charge-Blended (CB), which follows a SoC target that linearly decreases with the driven distance [8].
The present work is based on a LiC-based super sport car, and an alternative SoC target is formulated similarly as it has been done in [1]. According to the previous work, the SoC target is a speed-dependent quantity related to the detachment speed value of the EM (in this application it has been set equal to 190 km/h, the detachment speed of the front EM).
As explained in [1], the SoC target was high at low speeds, as the kinetic energy was low, and this solution guaranteed a high energy quantity stored in the electrical system. On the other side, the SoC target was low at high speeds since energy was already existing in the form of kinetic energy that could be rapidly recovered through the EM. Moreover, if we were to have high electrical energy at high speed, the energy content would end up unused once the detachment speed was reached.
Differently from [1], in this study, the dependency of the SoC on speed was modeled as linear to better fit the complete SoC range.
This alternative SoC target formulation, represented in Figure 11 for a WLTP 3b cycle, was thought to fit better the behavior of LiC that dispose of a high charge and discharge rate and which can perform a high number of cycles [22,23]. The present work is based on a LiC-based super sport car, and an alternative SoC target is formulated similarly as it has been done in [1]. According to the previous work, the SoC target is a speed-dependent quantity related to the detachment speed value of the EM (in this application it has been set equal to 190 km/h, the detachment speed of the front EM).
As explained in [1], the SoC target was high at low speeds, as the kinetic energy was low, and this solution guaranteed a high energy quantity stored in the electrical system. On the other side, the SoC target was low at high speeds since energy was already existing in the form of kinetic energy that could be rapidly recovered through the EM. Moreover, if we were to have high electrical energy at high speed, the energy content would end up unused once the detachment speed was reached.
Differently from [1], in this study, the dependency of the SoC on speed was modeled as linear to better fit the complete SoC range.
This alternative SoC target formulation, represented in Figure 11 for a WLTP 3b cycle, was thought to fit better the behavior of LiC that dispose of a high charge and discharge rate and which can perform a high number of cycles [22,23]. This kind of application aims to avoid unused energy and to maintain high energy content where it is mostly needed, ensuring the satisfaction of performance or drivability requests, important objectives for a super sport car. Different control functions could be implemented to uniquely guarantee improvements in performance and drivability, especially at low speeds.
Results
A conventional vehicle simulation on a WLTP 3b cycle was run as a reference. Then, the results for the simulations with the active hybrid control strategies were reported as a fuel consumption comparison.
The WLTP Class 3b cycle phases were defined as follows, as illustrated in [24] and shown in Figure 12: This kind of application aims to avoid unused energy and to maintain high energy content where it is mostly needed, ensuring the satisfaction of performance or drivability requests, important objectives for a super sport car. Different control functions could be implemented to uniquely guarantee improvements in performance and drivability, especially at low speeds.
Results
A conventional vehicle simulation on a WLTP 3b cycle was run as a reference. Then, the results for the simulations with the active hybrid control strategies were reported as a fuel consumption comparison.
The WLTP Class 3b cycle phases were defined as follows, as illustrated in [24] and shown in Figure The simulations were valid only if the final SoC was equal to the initial one or if it was greater. This allows to correctly evaluate the impact of the hybrid control strategy on the fuel consumption results.
The tables reporting the results show in the first line the conventional series vehicle simulation, while in the following lines, they show the simulated results for the hybrid configurations.
The fuel consumption simulated results for the conventional vehicle were normalized (%) with respect to the maximum fuel consumption value obtained during the various cycle phases. As will be shown, Phase 1 was usually associated with the maximum value (i.e., 100%) since the engine works in cold-start conditions and at high fuel consumption operating points.
The simulated results for the hybrid vehicle configurations were reported as a fuel consumption comparison with the series vehicle, showing the percentage reduction.
RBS
At first, the RBS simulations were run, and the results were reported following the indications previously described. Table 4 and Figure 13 show the simulation for the configuration P3-P4, while Table 5 and Figure 14 show the simulation for the configuration P2-P4. The powertrain system behaved as expected, allowing the activation of the electric mode. The potential fuel consumption results were reduced due to the deactivation of the ICE for certain time periods.
During the energy storage recharge, the vehicle will be powered exclusively by the ICE. The simulations were valid only if the final SoC was equal to the initial one or if it was greater. This allows to correctly evaluate the impact of the hybrid control strategy on the fuel consumption results.
The tables reporting the results show in the first line the conventional series vehicle simulation, while in the following lines, they show the simulated results for the hybrid configurations.
The fuel consumption simulated results for the conventional vehicle were normalized (%) with respect to the maximum fuel consumption value obtained during the various cycle phases. As will be shown, Phase 1 was usually associated with the maximum value (i.e., 100%) since the engine works in cold-start conditions and at high fuel consumption operating points.
The simulated results for the hybrid vehicle configurations were reported as a fuel consumption comparison with the series vehicle, showing the percentage reduction.
RBS
At first, the RBS simulations were run, and the results were reported following the indications previously described. Table 4 and Figure 13 show the simulation for the configuration P3-P4, while Table 5 and Figure 14 show the simulation for the configuration P2-P4. The powertrain system behaved as expected, allowing the activation of the electric mode. The potential fuel consumption results were reduced due to the deactivation of the ICE for certain time periods.
During the energy storage recharge, the vehicle will be powered exclusively by the ICE. Both simulations show how the storage system recharged quickly, guaranteeing multiple discharges related to electric driving during the cycle. Figure 15 reported the speed of both the P3 and P2 EMs during the WLTP 3b cycle. Due to the different transmission ratios, the speed profiles were different. Both simulations show how the storage system recharged quickly, guaranteeing multiple discharges related to electric driving during the cycle. Figure 15 reported the speed of both the P3 and P2 EMs during the WLTP 3b cycle. Due to the different transmission ratios, the speed profiles were different. Both simulations show how the storage system recharged quickly, guaranteeing multiple discharges related to electric driving during the cycle. Figure 15 reported the speed of both the P3 and P2 EMs during the WLTP 3b cycle. Due to the different transmission ratios, the speed profiles were different. Appl. Sci. 2021, 11, x FOR PEER REVIEW 17 of 22
ECMS
The following simulations allow evaluating the impact of the ECMS, always according to the previously introduced indications. Table 6, Figure 16, and Figure 17 show the results for the configuration P3-P4, while Table 7, Figure 18, and Figure 19 show the results for the configuration P2-P4. The ECMS results show an improvement in fuel economy, both for the CS and Spd Dependency hypothesis. The SoC profiles are reported.
ECMS
The following simulations allow evaluating the impact of the ECMS, always according to the previously introduced indications. Table 6, Figures 16 and 17 show the results for the configuration P3-P4, while Table 7, Figures 18 and 19 show the results for the configuration P2-P4.
ECMS
The following simulations allow evaluating the impact of the ECMS, always according to the previously introduced indications. Table 6, Figure 16, and Figure 17 show the results for the configuration P3-P4, while Table 7, Figure 18, and Figure 19 show the results for the configuration P2-P4. The ECMS results show an improvement in fuel economy, both for the CS and Spd Dependency hypothesis. The SoC profiles are reported. The same simulations were run on the P2-P4 configuration as it has been done for the RBS. The same simulations were run on the P2-P4 configuration as it has been done for the RBS. The same simulations were run on the P2-P4 configuration as it has been done for the RBS. The CS calibration was tuned to guarantee an SoC value close to the target one at every instant. On the other side, the speed dependency configuration will guarantee a greater variability of the SoC, keeping space for any evaluations on performance or drivability.
Half Displacement Results
Further evaluations were made through simulation, modifying the engine displacement and its performance, to evaluate the impact of the hybrid control strategy and the LiC hybrid architecture on a smaller engine.
In particular, the RBS P2-P4 and ECMS speed-dependent P3-P4 were simulated, as these represented the best solutions for the two different control strategies.
The results are reported in Table 8, Figure 20, and Figure 21. The CS calibration was tuned to guarantee an SoC value close to the target one at every instant. On the other side, the speed dependency configuration will guarantee a greater variability of the SoC, keeping space for any evaluations on performance or drivability.
Half Displacement Results
Further evaluations were made through simulation, modifying the engine displacement and its performance, to evaluate the impact of the hybrid control strategy and the LiC hybrid architecture on a smaller engine.
In particular, the RBS P2-P4 and ECMS speed-dependent P3-P4 were simulated, as these represented the best solutions for the two different control strategies.
The results are reported in Table 8, Figures 20 and 21. The CS calibration was tuned to guarantee an SoC value close to the target one at every instant. On the other side, the speed dependency configuration will guarantee a greater variability of the SoC, keeping space for any evaluations on performance or drivability.
Half Displacement Results
Further evaluations were made through simulation, modifying the engine displacement and its performance, to evaluate the impact of the hybrid control strategy and the LiC hybrid architecture on a smaller engine.
In particular, the RBS P2-P4 and ECMS speed-dependent P3-P4 were simulated, as these represented the best solutions for the two different control strategies.
The results are reported in Table 8, Figure 20, and Figure 21.
Energy Storage Size Variation
At last, some simulations were run to evaluate the behavior of the energy storage with respect to its main limit represented by the low specific energy [2,4].
The simulations were run for the best-case scenario (ECMS P3-P4), assuming that the capacity of the energy storage system doubled and quadrupled, for example, simulating a 60s2p and 60s4p. A system of that kind will guarantee greater capacitance, lower internal resistance, and a greater mass. The results are reported in Table 9.
Conclusions
The results show an improvement in fuel economy with the activation of the hybrid control strategies. The RBS can reduce fuel consumption by up to 2.3%. In particular, the choice to work with a P2 instead of a P3 is profitable.
As shown in Figure 15, the P2 and P3 EMs run at different speeds, since their gear ratio is different. In particular, the P2 EM can meet the torque demand at the wheels through a lower torque since its transmission ratio is higher overall.
These differences have the direct consequence of lower electrical energy consumption, meaning that more time can be spent in electric drive mode. Consequently, different results for fuel consumption are generated in the two configurations, rewarding the case of P2-P4.
The comparison between RBS and ECMS shows that it is possible to guarantee a greater fuel consumption reduction working on the hybrid control strategy.
The ECMS reduces fuel consumption by up to 4.8%. Both the P2-P4 and the P3-P4 configurations achieve better results, as the strategy will choose at every working point the best torque split solution between the front and rear EM.
Overall, the fuel consumption reduction is small, and that can be associated with the characteristics of the vehicle at our disposal (i.e., high displacement and high absolute
Energy Storage Size Variation
At last, some simulations were run to evaluate the behavior of the energy storage with respect to its main limit represented by the low specific energy [2,4].
The simulations were run for the best-case scenario (ECMS P3-P4), assuming that the capacity of the energy storage system doubled and quadrupled, for example, simulating a 60s2p and 60s4p. A system of that kind will guarantee greater capacitance, lower internal resistance, and a greater mass. The results are reported in Table 9.
Conclusions
The results show an improvement in fuel economy with the activation of the hybrid control strategies. The RBS can reduce fuel consumption by up to 2.3%. In particular, the choice to work with a P2 instead of a P3 is profitable.
As shown in Figure 15, the P2 and P3 EMs run at different speeds, since their gear ratio is different. In particular, the P2 EM can meet the torque demand at the wheels through a lower torque since its transmission ratio is higher overall.
These differences have the direct consequence of lower electrical energy consumption, meaning that more time can be spent in electric drive mode. Consequently, different results for fuel consumption are generated in the two configurations, rewarding the case of P2-P4.
The comparison between RBS and ECMS shows that it is possible to guarantee a greater fuel consumption reduction working on the hybrid control strategy.
The ECMS reduces fuel consumption by up to 4.8%. Both the P2-P4 and the P3-P4 configurations achieve better results, as the strategy will choose at every working point the best torque split solution between the front and rear EM.
Overall, the fuel consumption reduction is small, and that can be associated with the characteristics of the vehicle at our disposal (i.e., high displacement and high absolute values of fuel consumption). At the same time, the results show a positive trend on the fuel economy that is due to the hybrid control strategies chosen.
It must be noticed that the speed-dependent SoC strategy guarantees better results than the charge sustaining one. This kind of application shows that we can achieve fuel economy even if we do not maintain a fixed SoC target, but a speed-dependent one. This means that depending on the speed of the vehicle we could leave room for any functions more performance-related, which could be activated at the request of the driver.
This result represents an important element for the design of super sport cars, for which performance and drivability are notable elements. It should be noted, however, that these results are a consequence of the type of energy storage system chosen, which has as its main feature the high power and the consequent reduced charging and discharging times.
The hybrid control strategy comparison points out that the choice to invest in the ECMS has benefited from the fuel economy point of view as it doubles the improvement on FC results.
A simulation with half displacement is run, and the results are compared with the ones from hybrid vehicles commonly available on the market. According to these, the Lamborghini application would fall within the sphere of mild hybrid systems [9,10].
The analysis is concluded simulating an energy storage system that is, respectively, doubled and quadrupled. It is shown that a greater capacitance brings slightly better fuel economy results that tend to an asymptote. It is evident that although the results improve, the reward obtained is not sufficient to justify an investment in a system that becomes more complex, heavier, and larger. Above all, because the system is inserted in a supercar, which typically seeks maximum performance and has dimensional limits related to design and aerodynamics.
Such an outcome points out that for an application of this kind the low specific energy limit of the energy storage system does not compromise the results, indeed this hybrid powertrain can achieve fuel economy thanks to its high power, which results in a high charge and discharge rate.
Even if the LiCs are limited for energy performance at the moment, proper development of the control strategy and the growth of the technology [2,22,23] could lead to fuel-saving applications.
In conclusion, the high-power characteristic makes the LiC technology interesting for applications like super sport cars, which greatly evaluate features such as performance and drivability along with fuel economy.
Author Contributions: A.F. carried out validation, simulations, and wrote the manuscript. N.C. and R.P. supervised the project and assisted the review and editing of the manuscript. M.R. participated in the conceptualization and supervision of the work. E.C. assisted in the project supervision. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not Applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 13,704.6 | 2021-01-19T00:00:00.000 | [
"Engineering"
] |
Discrimination of Low-Frequency Tones Employs Temporal Fine Structure
An auditory neuron can preserve the temporal fine structure of a low-frequency tone by phase-locking its response to the stimulus. Apart from sound localization, however, much about the role of this temporal information for signal processing in the brain remains unknown. Through psychoacoustic studies we provide direct evidence that humans employ temporal fine structure to discriminate between frequencies. To this end we construct tones that are based on a single frequency but in which, through the concatenation of wavelets, the phase changes randomly every few cycles. We then test the frequency discrimination of these phase-changing tones, of control tones without phase changes, and of short tones that consist of a single wavelet. For carrier frequencies below a few kilohertz we find that phase changes systematically worsen frequency discrimination. No such effect appears for higher carrier frequencies at which temporal information is not available in the central auditory system.
Introduction
In response to a pure tone below 300 Hz, an auditory-nerve fiber fires action potentials at almost every cycle of stimulation and at a fixed phase [1,2]. Above 300 Hz the axon begins to skip cycles, but action potentials still occur at a preferred phase of the stimulus. The quality of this phase locking decays between 1 kHz and 4 kHz, however, and phase locking is lost for still higher frequencies. Phase locking below 4 kHz is sharpened in the auditory brainstem by specialized neurons such as spherical bushy cells that receive input from multiple auditory-nerve fibers [3,4]. These cells can fire action potentials at every cycle of stimulation up to 800 Hz. Temporal information about the stimulus frequency is therefore greatest for frequencies below 800 Hz, declines from 800 Hz to 4 kHz, and vanishes for still greater frequencies.
Phase locking is employed for sound localization in the horizontal plane [5,6]. A sound coming from a subject's left, for example, reaches the left ear first and hence produces a phase delay in the stimulus at the right ear compared to that at the left. Auditory-nerve fibers preserve this phase difference, which is subsequently read out by binaurally sensitive neurons through coincidence detection to determine the angle at which the sound source is located.
The temporal information owing to phase locking might be employed for additional processing of auditory signals in the brain. In particular, phase locking could provide information about the frequency of a pure tone, for the interval between two successive action potentials is on average the signal's period or a multiple thereof. In an accompanying theoretical study we show how neural networks might read out the frequency of a stimulus to high precision [7].
Phase locking has long been hypothesized to aid frequency discrimination [1,2]. For the high frequencies at which temporal fine structure is not preserved in neural responses, the mechanics of the mammalian inner ear spatially separates frequencies sharply enough to account for their discrimination [8,9]. At low frequencies, however, the spatial frequency separation within the cochlea is less pronounced; nevertheless, psychoacoustic experiments show that humans can resolve low frequencies considerably better than high frequencies [8][9][10][11]. It is possible that temporal information conveyed through phase locking adds to the spatial frequency information provided by cochlear mechanics. Psychoacoustic experiments on the perception of amplitude-versus frequency-modulated tones as well as on complex tones provide indirect evidence for this hypothesis [10,12].
Results and Discussion
To test directly the usage of temporal information in human frequency discrimination, we constructed tones that are based on a single frequency but in which the phase changes every few cycles. Specifically, we generated wavelets with a carrier frequency f and an amplitude that increases smoothly from zero to a maximal value, remains constant for a certain number of cycles, and eventually returns to zero ( Figure 1A). We denote each wavelet's duration, measured in cycles, by L. Concatenation of many successive wavelets, in each of which the carrier signal has a random phase, yielded a tone with a random phase change every L cycles ( Figure 1A,B). We also generated control tones that have the same amplitude variation as the phase-changing tones but do not exhibit phase changes ( Figure 1C).
In the phase-changing tones the information encoded through phase locking is randomly disturbed every L cycles, so the amount of available information corresponds to that in a single wavelet of duration L. If phase information alone were employed for frequency discrimination, then phase-changing tones should be no more differentiable than short tones consisting of only a single wavelet of duration L. Frequency discrimination of phasechanging tones should therefore worsen with smaller wavelet duration. To test this idea we have also generated short tones that consist of a single wavelet. Because temporal information is not disturbed in the control tones they should allow for much better frequency discrimination that is independent of L.
Through psychoacoustic experiments we measured the ability of five normally hearing subjects to discriminate between two close carrier frequencies. For each kind of tone a standard two-interval forced-choice adaptive procedure yielded a threshold value Df, the smallest frequency difference that the subject could reliably detect [10] (Figure 2). A lower threshold Df accordingly signifies better frequency discrimination. The dimensionless frequency-difference limen follows as Df/f, in which f denotes the average carrier frequency of the presented tones.
We first tested subjects with tones at an average carrier frequency of 500 Hz, a condition in which neuronal responses can be cycle-by-cycle and exhibit phase locking. In all subjects we found that frequency discrimination of both the phase-changing tones and the short tones worsened in a comparable manner when the duration of the wavelets was reduced ( Figure 3A). For each subject and for both types of tones we quantified the dependence of the frequency-difference limens on wavelet duration by computing the correlation coefficients. We found the correlations to be significant: p-values were at most 0.05 with the exception of the limens for the phase-changing tones of one subject (2), for which the p-value slightly exceeded 0.05. The correlations were negative: frequency discrimination worsened either when the phase changes in a phase-changing tone became more frequent or when the length of a short tone was reduced. This result shows that phase locking is employed for frequency discrimination. Discrimination of the control tones did not vary significantly with the wavelet's duration; the p-values for the correlation coefficients lay between 0.1 and 0.6. For small wavelet duration, frequency discrimination of the control tones was superior to that of the short and the phase-changing tones. In particular, for a wavelet duration of seven cycles all subjects showed a smaller frequency-discrimination limen for the control tone than for the phase-changing or the short tone; the differences were statistically significant (p-values between 4?10 24 and 0.02 by two-sample paired Student's t-tests).
We next performed tests with tones at an average carrier frequency of 5 kHz, a circumstance in which temporal fine structure is not preserved in neural responses. All subjects exhibited similar frequency-difference limens for the phasechanging and the control tones ( Figure 3B). The limens did not vary significantly with the duration of the wavelets; the p-values for the correlation coefficients varied between 0.1 and 0.8. Evidently no phase information is employed in distinguishing such highfrequency tones. Moreover the limens were typically considerably smaller than those for short tones. With the exception of one subject (2), and of durations L = 10 and L = 200 in subject (1) as well as L = 50 and L = 200 in subject (5), the frequency-difference limens for the phase-changing and for the control tone at a given wavelet duration were significantly smaller than that of the corresponding short tone (p-values between 1?10 26 and 0.04 by two-sample paired Student's t-tests).
We finally inquired how the usage of temporal information for frequency discrimination depends on the carrier frequency. To this end we tested the five subjects with tones in which the wavelets had a duration of only seven cycles and varied the carrier frequency between 300 Hz and 5 kHz ( Figure 3C). We then performed two-sample paired Student's t-tests for each wavelet duration and each individual to determine whether the frequency- difference limen for a phase-changing tone was significantly different from that for the control tone. We found that below 1 kHz the phase-changing tones were significantly harder to distinguish than the control tones, whereas above 3 kHz both kinds of tones yielded comparable frequency-difference limens. In contrast, frequency discrimination of the short tones was typically comparable to that of the phase-changing tones below 1 kHz but worse above 3 kHz. Temporal information is therefore employed below 1 kHz but not much above 3 kHz, in agreement with the presence of phase locking.
The critical frequency at which the frequency-difference limens for the phase-changing and the control tones became comparable, that is, at which their differences were no longer statistically significant, varied from subject to subject. The transition occured at 1 kHz for two subjects (3 and 5), at 2 kHz for two subjects (1 and 4), and at 3 kHz for another subject (2). The cycle-by-cycle and phase-locked responses of neurons in the auditory brainstem below about 1 kHz presumably provided superior temporal information that all subjects employed for frequency discrimination. For stimuli of higher frequencies, however, subjects apparently varied in the degree to which they used temporal information.
Temporal information has been assumed to play a role in the appreciation of music as well as in speech recognition [12][13][14].
The approach that we have developed-quantifying the perception of tones with smooth phase changes through concatenated wavelets-permits testing of the role of phase locking in music and speech processing as well. The results from such experiments might additionally guide the design of future cochlear implants, most of which do not currently evoke phase-locked neural responses [2,15].
Ethics Statement
The study was approved by the Institutional Review Board at Rockefeller University under protocol TRE-0748. Written informed consent was obtained from all participants.
Sound Construction
A smooth rise in the amplitude A(t) of a wavelet in time t was obtained through the error function: in which t 0 denotes the time at which the amplitude has reached half of its maximal value of one and dt determines the curve's For each subject and each wavelet duration the statistical significance of the difference between the limen for the phase-changing and that for the control tone is indicated by either two stars (p-value smaller than 0.001), one star (p-value between 0.001 and 0.05), or ''ns'' (not significant; p-value above 0.05). The limens for the phase-changing tones exceed those for the control tones below 1 kHz, but the limens begin to converge above 1 kHz. doi:10.1371/journal.pone.0045579.g003 width, for which we have used two cycles. The decay of the amplitude follows analogously. The wavelet's duration is defined as the number of cycles between the time points at which the amplitude reaches half of its maximal value. For the phase-changing tones we generated many such wavelets with a carrier frequency f that has a random phase in each wavelet. Through superposition we then concatenated the wavelets such that the amplitude of each had decayed to half of the maximum when the subsequent wavelet's amplitude had risen to the same value. Neither the amplitude nor the phase changed when the carrier waveform had the same phase in both wavelets. If there was a phase change, however, the amplitude of the tone fell transiently because of destructive interference. We concatenated many wavelets to produce tones 0.7 s in duration.
Control tones were obtained by using the envelope of a phasechanging tone to modulate the carrier frequency. There was accordingly no phase change in such a tone. Short tones were individual wavelets.
Because the phase-changing tones resulted from a random sequence of phases in the successive wavelets, we generated ten different realizations for each tone. All tones were computed in Mathematica (Wolfram Research) with a sampling rate of 96 kHz.
Stimulus Delivery
A subject seated in a double-walled sound-isolation room (Industrial Acoustics Corporation) viewed a computer monitor outside the room through a double-walled glass window. A computer-generated sound was converted to an analog signal at a sampling rate of 96 kHz by a sound board (M-Audio Audiosport Quattro), amplified by a vacuum-tube amplifier (Stax Systems SRM007t), and delivered to the subject binaurally through electrostatic headphones (Stax Systems SR007a Omega II). The combination of amplifier and headphone had a flat frequency response between 6 Hz and 44 kHz. The phase-changing and control tones were presented at 65 dB SPL. To compensate for the lower audibility of the short tones, which resulted from their brevity, they were delivered at 80 dB SPL.
Psychoacoutic Testing Procedure
The subjects included two females and three males 26-36 years of age. All subjects except author T. R. were paid for their service.
Subjects interacted with a computer program through a graphical user interface. In each task a subject listened to two successive tones whose carrier frequencies differed by a small amount Df: one tone had a carrier frequency that was Df/2 above the frequency f, and the other tone's frequency was an amount Df/2 below. The two tones were separated by a pause of 0.5 s. The subject was then asked to indicate whether the first or the second tone was lower in frequency. Feedback was provided on the computer monitor, after which the program adapted the frequency difference Df depending on the correctness of the response: three consecutive correct answers resulted in a reduction of the frequency difference whereas a single wrong answer resulted in an increase. The first six changes in frequency difference were by a factor of two and the subsequent ones by a factor of ffiffi ffi 2 p .
Each subject was trained with all tones until he or she had achieved a stable performance. During an experiment, the first task employed a relatively large frequency difference well above the subject's limen. After an initial phase of ten changes in frequency difference, the subject had settled around an average minimal frequency difference Df (Figure 2). We then presented ten additional changes in frequency difference. The subject's frequency-difference limen and its error were calculated in the logarithmic domain as the average and the standard deviation from the last ten values of Df. Because of the adaptive strategy that we employed, each frequency-difference limen corresponded to the frequency difference at which the subject made three successive correct judgments with the same probability as he or she made one incorrect answer, and hence a probability of a correct response of about 70%.
Statistical Analysis
For each psychoacoustic test we calculated the mean and variance of the frequency-discrimination limen as described above.
The mean values and respective standard deviations for the different individuals and different tones are presented in Figure 3. When are the differences between an individual's limens for two types of tones statistically significant? The independent two-sample t-test informs us that two observed Gaussian distributions, obtained from ten samples each and with the same standard deviation s, result from the same random process with only about 5% probability (p-value 0.05) when the means of the two Gaussians differ by 2s. The probability for the same underlying process is already below 1% when the two means differ by 3s. Using a p-value of 0.05 as our criterion for statistical significance, we find that two distributions in Figure 3 are distinct if their shaded areas, indicating the standard deviations around the means, do not overlap. Overlapping shaded areas, in contrast, signify a probability of the same underlying stochastic process of more than 5%; we then regard the distributions' differences as not significant.
For investigation of the correlation between frequency-difference limens and wavelet duration we computed the correlation coefficient according to standard procedure [16]. Its statistical significance was calculated by a Student's t-test. We employed a one-tailed test because the correlation, if any, should be negative: more frequent phase or amplitude changes could only render frequency discrimination more difficult. | 3,728.2 | 2012-06-08T00:00:00.000 | [
"Physics"
] |
Analysis of Challenges Faced by PGMI Students at STAI Miftahul Ula Nganjuk in the Development of Digital Learning Media
The students of the Elementary Islamic Education Program are future prospective teachers. With the advancement of time, prospective teachers must adapt their teaching methods to the progress of information technology, one of which is digital-based learning media. About this issue, this research aims to describe PGMI students of STAI Miftahul Ula Nganjuk in developing digital learning media. This research uses a quantitative descriptive research method. The subjects of this study were PGMI students of STAI Miftahul Ula Nganjuk, selected using a simple random sampling technique, with a research instrument in the form of five questions on the Google Form platform, answered by 53 respondents. Subsequently, the data were analyzed descriptively. This study found that PGMI students of STAI Miftahul Ula Nganjuk faced difficulties in developing digital learning media. The difficulties include 1) the low understanding of PGMI students of STAI Miftahu Ula Nganjuk in developing digital learning media due to a lack of training related to digital learning media, 2) PGMI students of STAI Miftahul Ula Nganjuk have difficulty summarizing material used for the development of digital learning media, 3) the limitation of time for the development of digital-based learning media.
INTRODUCTION
Education is a transformative process that instigates changes in behavior, knowledge, and attitudes towards improvement.This is attributed to the fact that through education, individuals can acquire new experiences, impacting their behavior, attitudes, and perspectives on life (Safrizal et al., 2022).Moreover, education serves as a means for community development, potential enhancement, and character formation.
In addition to this, interactions during the educational process provide insights and understanding of teacher competencies in various domains, including cognitive, affective, and psychomotor skills.Thus, education becomes a platform for acquiring and expanding an individual's intellectual competence and fostering positive behavioral changes for societal engagement (Anastasha, 2020;Safrizal et al., 2021).Therefore, achieving these goals requires active attention and participation from students in the learning process, with the use of instructional media being one way to capture their interest.
Media is defined as any intermediary tool used to distribute ideas to a targeted audience (Sunarti & Vebrianto, 2020).In everyday communication activities, media serves as a mediator or connector between message senders and receivers (Miftah, 2013).Furthermore, in the context of learning, educators need to consider the role and contribution of instructional media in education (Safrizal et al., 2021).Given the urgency of media in supporting students in the learning process, its modification in presentation is deemed important (Hafzah et al., 2020).Even in the current era of digitization, teachers must equip students with competencies that support their future in the 21st century, focusing on communication, collaboration, critical thinking, and problem-solving (Selmedani et al., 2021).
The digital era, often referred to as the era of the fourth industrial revolution, frequently introduces new innovations in the field of education, leading to new technological practices becoming commonplace in the education sector.In education, the easily accessible and advancing technology supports the learning process in classrooms.The progress in technology can support various aspects of life, including learning activities that can be combined to assist students in achieving learning objectives.
In the use of instructional media, there are several fundamental aspects, including the appropriateness and direction of instructional media in achieving learning objectives.Additionally, it must align with the needs of the material, students' interests, and even their conditions (Purnasari & Sadewo, 2020).Thus, the chosen instructional media has been considered for its effectiveness and efficiency, while also considering the technical competencies possessed by the educators themselves (Vieira & Hai, 2023).
Digital-based instructional media has gained popularity among educators today, with many applications and websites available to facilitate the creation and innovation of instructional media, contributing to the achievement of learning objectives (Lase, 2019).Therefore, with the rapid development of technology, educators are expected to leverage it to facilitate the learning process (Miftah, 2013).
Ideally, being an educator or teacher involves directing and providing learning facilities to students in the learning process, rather than just being an information provider.Instructional media is inseparable from the strategies, models, and techniques used by educators in the learning process.Educators must orient these aspects to ensure that the learning process can attract students' attention and interest.However, based on the existing reality, many educators are reluctant to use digital-based instructional media in the learning process.Even future teachers still face difficulties in developing instructional media in line with current developments, particularly digital-based instructional media.
In a study by Husniati (2023), it was found that educators often use textbooks and tangible objects as instructional media.Furthermore, teachers encounter difficulties in creating IT-based instructional media, finding creative ideas, and lack knowledge and limited time in creating instructional media.Therefore, the solution adopted by teachers is to utilize existing instructional media and leverage instructional videos on YouTube.In another study, the challenges faced by teachers in using instructional media in social studies subjects include difficulties in designing, operating, and selecting the appropriate instructional media for the chosen teaching method (Putri & Citra, 2019).
If an educator cannot enhance their competence in the use and development of instructional media or does not use instructional media at all, it will impact the declining interest and enthusiasm of students.This will result in suboptimal achievement of learning objectives.The novelty of this research lies in the challenges faced by prospective elementary school teachers in developing digital-based instructional media.An analysis is necessary to enable future elementary school teachers to leverage technological developments in the learning process through digital instructional media.Therefore, this study aims to analyze the challenges faced by PGMI students at STAI Miftahul Ula Nganjuk in developing digital instructional media.
METHOD
This research employs a quantitative descriptive method chosen for its ability to present a true picture of reality based on statistical data.The study population comprises 53 PGMI students at STAI Miftahul Ula Nganjuk, spanning semesters one, three, five, and seven, selected through simple random sampling.
The subjects of the study are PGMI students at STAI Miftahul Ula Nganjuk, representing future MI/SD teachers.Data collection instruments include questions embedded in a Google Form focusing on the challenges faced by PGMI students in developing digital learning media.The gathered data is subsequently analyzed using descriptive statistical methods.
The measurement scale utilized is the Guttman scale, with questions organized in the Google Form.The succinctly and clearly formulated questions are based on indicators of the challenges encountered by PGMI students at STAI Miftahul Ula Nganjuk in the development of digital learning media.
RESULTS AND DISCUSSION Kesulitan Merancang Materi dan Media Pembelajaran Berbasis Digital
The primary question addresses the challenges faced by PGMI students in designing digital learning materials.Based on the scattered data, it is evident that a significant percentage (67.4%) of PGMI students at STAI Miftahul Ula Nganjuk still encounters difficulties in designing both material and digital learning media.The distribution indicates that 67.4% of PGMI students at STAI Miftahul Ula Nganjuk face challenges in designing materials and digital learning media, while only 35.3% claim not to experience such difficulties.This aligns with a study by Ikhsan et al. (2023), which found that teachers still struggle with digital media use due to low information technology skills.
Difficulties in the Process of Digital Media Development
The following section addresses the challenges faced by PGMI students in the process of developing digital media.The distribution of data is illustrated in Figure 2. Figure 2 depicts that 57.6% of PGMI students at STAI Miftahul Ula Nganjuk find it challenging to select applications for developing digital learning media.This may be due to factors such as limited knowledge of available applications and tools for developing digital-based learning media, lack of guidance from instructors or the institution, technological complexity, and the limited availability of certain financially related applications (Ahmadi, 2017;I. Pratiwi, 2022;Yuwono et al., 2021).
The second most significant challenge, at 20.6%, is summarizing content, indicating that one-fifth of the students face difficulties in effectively summarizing learning materials for digital media development.Summarizing material is crucial for creating effective digital learning media (Rahmawati et al., 2022).
The third challenge, at 11.8%, is difficulty in using digital media development applications.This suggests that some students encounter challenges in mastering specific applications essential for developing interactive digital learning media (Usmaedi et al., 2020).
Factors Affecting the Minimal Interest in Developing Digital Learning Media
The final section explores factors contributing to the minimal interest of PGMI students in developing digital learning media.The distribution of data is represented in Figure 3. Figure 3 indicates that the most significant factor affecting the minimal interest of PGMI students at STAI Miftahul Ula Nganjuk in developing digital learning media is the difficulty in aligning it with students' learning styles, with a 50% response rate.This emphasizes the gap between students' abilities to design digital learning media and the diverse learning preferences of their future students.
The second factor, at 23.5%, is the limitation of time, indicating that a substantial portion of students struggles to allocate sufficient time for digital media development.Time constraints pose a significant challenge for future teachers aiming to develop effective digital learning media (Yuwono et al., 2021).
The third factor, at 5.9%, is the difficulty in applying developed digital media in the teaching process.This highlights the challenge of translating digital media development skills into practical teaching applications (Yuwono et al., 2021).Additionally, 20.6% of PGMI students have not yet mastered digital media development, emphasizing a lack of knowledge or skills in designing and implementing digital learning media.
Addressing the Needs to Overcome Difficulties in Developing Digital Learning Media
The fourth question focuses on the needs of PGMI students at STAI Miftahul Ula Nganjuk, who are future teachers, to overcome difficulties in developing digital learning media.The distributed data, as illustrated in Figure 4, sheds light on their specific requirements.The data gathered from PGMI students at STAI Miftahul Ula Nganjuk indicates that 55.9% of students require training related to the development of digital learning media.This need arises from the students' lack of competence in developing digital or multimedia content-a critical factor in the challenges surrounding digital learning media development (Y.Pratiwi & Nugraheni, 2022).Therefore, it is imperative to provide training at educational institutions to minimize these issues.Efforts to address this challenge may include utilizing existing learning media, implementing media in a straightforward manner, and participating in various training programs, seminars, workshops, or training sessions related to digital media development (Purnasari & Sadewo, 2020).
On the other hand, 44.1% of students express the need for direct practice in using digital learning media in real teaching processes.This data suggests that some PGMI students at STAI Miftahul Ula Nganjuk feel the necessity to gain direct experience in applying digital learning media in teaching.As future teachers, they believe that training alone is insufficient, and hands-on practice is crucial to applying the knowledge and skills learned in real-life situations.Therefore, addressing the needs of future teachers requires designing learning strategies that enable students to actively engage in the use of digital learning media (Purnasari & Sadewo, 2020).This approach will contribute to the confidence and competence of future teachers in utilizing digital technology in the field of education (Y.Pratiwi & Nugraheni, 2022).
CONCLUSION
The conducted research indicates various challenges faced by PGMI (Islamic Elementary School Teacher Education) students at STAI Miftahul Ula Nganjuk as prospective future teachers in the development of digital-based learning media.In general, the difficulties encountered by these students encompass challenges in designing digital learning materials, obstacles in the digital media development process, and a notable lack of interest in the development of digital learning media.
Therefore, concerted efforts are required to address these issues.Some recommended initiatives include providing PGMI students at STAI Miftahul Ula Nganjuk with training on digital learning media and implementing programs that serve as practical platforms for students to apply digital learning media in real teaching processes.
Figure
Figure 1: Difficulty Diagram
Figure 2 :
Figure 2: Digital Learning Media Development Process Diagram
Figure 3 :
Figure 3: Factors Affecting Minimal Interest in Digital Learning Media Development | 2,762.2 | 2023-12-29T00:00:00.000 | [
"Education",
"Computer Science"
] |
Investigation Performance and Mechanisms of Inverted Polymer Solar Cells by Pentacene Doped P3HT : PCBM
The inverted polymer solar cells (PSCs)with pentacene-dopedP3HT : PCBMabsorption layerswere fabricated. It was demonstrated that the pentacene doping modulated the electron mobility and the hole mobility in the resulting absorption layer. Furthermore, by varying the doping content, the optimal carrier mobility balance could be obtained. In addition, the pentacene doping led to an improvement in the crystallinity of the resulting films and made an enhancement in the light absorption, which was partly responsible for the performance improvement of the solar cells. Using the space-charge-limited current (SCLC) method, it was determined that the balanced carrier mobility (μh/μe = 1.000) was nearly achieved when a pentacene doping ratio of 0.065 by weight was doped into the P3HT : PCBM : pentacene absorption layer. Compared with the inverted PSCs without the pentacene doping, the short circuit current density and the power conversion efficiency of the inverted PSCs with the pentacene doping ratio of 0.065 were increased from 9.73mA/cm to 11.26mA/cm and from 3.39% to 4.31%, respectively.
Introduction
Over the past decades, much effort has been devoted to improving energy utilization efficiency, to developing renewable energy, and to decreasing overall greenhouse gas emissions [1]. Recently, polymer solar cells (PSCs) have attracted much attention and are thought of as a potential candidate of the next generation solar cells, because they have many advantages, including low cost, flexibility, light weight, and easy fabrication [2,3]. However, compared with the inorganic solar cells [4][5][6], the PSCs suffer from two major drawbacks of a lower power conversion efficiency (PCE) and a worse stability [7]. Conventionally, the PSCs were constructed with an Al back cathode electrode, a poly(3,4-ethylene-dioxythiophene) : poly(styrene sulfonate) (PEDOT : PSS) hole transport layer inserted between the polymer absorption layer and the indium tin oxide (ITO) front anode electrode. Unfortunately, the oxygen could diffuse into the absorption layer through the pinholes and grain boundaries within the Al electrode. Consequently, the quality of the absorption layer of PSCs was degraded [8].
Furthermore, the ITO electrode was easily etched by the PEDOT : PSS [9]. These problems were responsible for the instability of the PSCs, which limited the application and commercialization of the devices. To improve the stability of the PSCs, an inverted cell structure for PSCs was previously proposed, where a high-work-function metal (Au or Ag) layer was used as the back contact anode electrode and the PEDOT : PSS hole transport layer was removed [10]. However, the conventional inverted PSCs still suffer from low PCE. To enhance the efficiency, many efforts have been carried out previously. For example, organic or inorganic materials were doped into the P3HT : PCBM absorption layers of the PSCs to enhance the light absorption or the carrier mobility. Various promising doping materials were previously reported, including cadmium selenide (CdSe) [11], zinc oxide (ZnO) [12], nanodiamonds [13], single wall carbon nanotubes (SWCNTs) [14], ferric oxide (Fe 3 O 4 ) [15], graphene [16], 3-hydroxyflavone (3-HF) [17], and perylene [18]. In these previous reports, the performances of PSCs were improved owing to an increase of light absorption. Consequently, the amount of the photoinduced charge carriers in the absorption layer was increased. However, the PCE of these PSCs with the absorption layer doped with various materials was still not satisfactory, only 1.5%∼3.6%. It has been pointed out that the carrier mobility mismatching in the absorption layer was one of the main reasons for the low PCE [19]. The balanced carrier mobility could decrease the carrier recombination in the absorption layer and hence increase the photocurrent of the resulting solar cells [20].
Recently, many efforts have been devoted to balancing the carrier mobility in the absorption layer. It was reported that the carrier mobility could be modulated by doping pentacene into the absorption layer of the conventional PSCs and the performances of the resulting devices were improved [21]. In order to further improve the performance of the PSCs, in this work, the inverted PSCs with pentacene-doped absorption layer were fabricated and investigated. To clearly identify the electron mobility and the hole mobility in the absorption layers, the electron-only devices and the holeonly devices with the corresponding absorption layers were analyzed, respectively, using the space-charge-limited current (SCLC) method. By varying pentacene doping content in the absorption layer, the optimal mobility balance condition was obtained. It is clarified that the PCE of the inverted PSCs was enhanced by properly balancing the carrier mobility in the absorption layer. Figure 1 shows the schematic configuration of the inverted polymer solar cells (PSCs). The 25 nm thick Al-doped ZnO (AZO) film was deposited on the ITO-coated glass substrate using a magnetron radio-frequency (RF) sputtering system. The AZO film worked as the electron transportation and hole blocking layer. The mixed solution of poly(3-hexylthiophene) (P3HT), (6, 6)-phenyl-C 61 -butyric acid methyl ester (PCBM), and pentacene with given mixing ratio in the 1,2-dichlorobenzene (DCB) was then spread on the AZO film using a spin-coating technique to form a P3HT : PCBM : pentacene absorption layer of the inverted PSCs. The thickness of the absorption layer was 200 nm. Subsequently, the deposited absorption layer was annealed in a nitrogen glove box at 110 ∘ C for 20 minutes. Finally, the 10 nm thick MoO 3 layer and the 100 nm thick Ag layer were subsequently deposited on the absorption layer as the anode electrode of the inverted PSCs using a thermal evaporator. The absorption area of the inverted PSCs was about 4 mm 2 . Thus fabricated inverted PSCs with various pentacene doping ratios (0, 0.05, 0.06, 0.065, and 0.07 by weight) in the P3HT : PCBM (1 : 0.8) absorption layers were, respectively, named solar cells A, B, C, D, and E, hereafter. For estimating the hole mobility and the electron mobility in the absorption layer, the hole-only devices of Au/P3HT : PCBM : pentacene/MoO 3 /Ag (100/ 200/10/100 nm) and the electron-only devices of ITO/AZO/ P3HT : PCBM : pentacene/Al (300/25/200/100 nm) were fabricated. In this work, ten batches, each batch had six devices, of the electron-only devices, the hole-only devices, and the inverted PSCs were fabricated and measured.
Experiments
The space-charge-limited current (SCLC) method was used to estimate the hole mobility and the electron mobility in the absorption layer for the hole-only devices and the electron-only devices, respectively. The crystallinity and surface morphology of the absorption layers with various pentacene doping contents were measured using Xray diffraction (XRD) and atomic force microscopy (AFM), respectively. The current density versus voltage ( -) characteristics of the inverted PSCs were measured at room temperature using a -curve tracer (Keithley 2400) with an AM 1.5 G solar simulator (100 mW/cm 2 ). The external quantum efficiency (EQE) was measured using a chopped calibrated light beam from a xenon lamp combined with a lock-in amplifier. The absorption and the diffuse reflection spectra of the absorption layer with various pentacene doping contents and the resulting cells were measured using an UV-Vis spectrometer (Hitachi, U4100).
Experimental Results and Discussion
The SCLC method was used to estimate the electron mobility ( ) and the hole mobility ( ℎ ) in various absorption layers by using the corresponding electron-only and hole-only devices, respectively. The dark current density-voltage characteristics of the electron-only devices and the hole-only devices with the absorption layers of various pentacene doping contents were shown in Figure 2. The electron mobility of the electrononly devices and the hole mobility of the hole-only devices were estimated by Mott-Gurney law equation shown as follows [22]: where is the dark current density, 0 is the permittivity of the P3HT : PCBM : pentacene absorption layer, which was estimated to be (average value ± standard deviation) (4.50 ± 0.02) × 10 −11 F/m from the capacitance-voltage measurement results, is the carrier mobility, is the applied voltage, and International Journal of Photoenergy is the thickness of the absorption layer of the devices. This equation could be also rewritten and shown as follows: To conform the Mott-Gurney law, the slope of log( )-log( ) curve for the electron-only devices and the hole-only devices should be 2. In this work, the applied voltage of 1.7 V matched in Mott-Gurney law was chosen to estimate the hole mobility of the hole-only devices and the electron mobility of the electron-only devices. Thus, the electric field ( ) of all devices estimated by the formula of = / , where of 1.7 V is the applied voltage and of 200 nm is the thickness of the absorption layer, was 8.5 × 10 4 V/cm. The resulting hole mobility and the electron mobility in the absorption layers with various pentacene doping contents are listed in Table 1. It can be seen that the hole mobility of the absorption layer increased and the electron mobility of the absorption layer decreased with increasing the pentacene doping content. In particular, for the P3HT : PCBM absorption layer with pentacene doping ratio of 0.065, the hole mobility, compared with the P3HT : PCBM absorption layer, was increased from (0.94 ± 0.01) × 10 −3 cm 2 /Vs to (1.16 ± 0.01) × 10 −3 cm 2 /Vs. Contrarily, the electron mobility in the absorption layer was decreased from (1.37 ± 0.01) × 10 −3 cm 2 /Vs to (1.16 ± 0.01) × 10 −3 cm 2 /Vs. The opposite variation of the electron mobility and the hole mobility with pentacene doping content indicated that the ratio of the hole mobility and the electron mobility was accordingly modulated. As seen from the results listed in Table 1, the carrier mobility ratio varied with the pentacene content and, in particular, balanced carrier mobility of 1.000 ± 0.001 was obtained in the absorption layer with a pentacene doping ratio of 0.065 by weight.
The variation of the carrier mobility upon pentacene doping can be understood based on the photovoltaic process in the polymer solar cells (PSCs) described below. Figure 3 shows the carrier transport process as well as the energy level diagram of the component materials in the inverted PSCs with P3HT : PCBM : pentacene absorption layer. In the process, the generation and transport of the carriers in the absorption layer played the most important role and were illustrated in detail in Figure 4. As seen from Figure 4, the domains of the donor material were separated in the acceptor materials. When the incident light was absorbed by the donor material, electron-hole excitons were generated in P3HT, as shown in Figure 4(a). The excitons diffused to the interface between the donor and acceptor materials, as shown in both Figures 3 and 4(b). At the interface, the excitons were dissociated and the resulting electrons are transited into the electronegative acceptor materials, whereas the resulting holes remained in the P3HT, as shown in Figure 4(c). Afterwards, the resultant electrons and holes are transported towards the corresponding electrodes [23] as shown in Figure 3. However, due to the energy difference between LUMO of pentacene (electron affinity, 2.9 eV) [24] and the LUMO of PCBM (electron affinity, 3.7 eV) [25], as shown in Figure 3, the addition of pentacene doped into the absorption layer obstructed the electron transportation and decreased the electron mobility in the absorption layer. Consequently, the electron mobility in the absorption layer decreased with an increase of the pentacene doping content. On the other hand, the hole mobility enhancement could be attributed to the improvement in the crystallinity of the P3HT, as reported previously [26]. To demonstrate this phenomenon, the crystallinity analyses of the absorption layers with various pentacene doping contents were carried out using XRD and the results are shown in Figure 5. As shown in Figure 5, the XRD spectra of all the deposited absorption layers exhibited a (100) diffraction peak of the a-axis orientation of P3HT [27]. Moreover, the intensity of the diffraction peak increased with an increase of the pentacene doping content. These results indicated that the crystallinity of the P3HT in the absorption layer was improved by doping the pentacene. This phenomenon indicated that the hole mobility enhancement could be attributed to the enhancement in the crystallinity, induced by pentacene doping, of the P3HT in the absorption layer.
Except the mobility balance in the absorption layer, changes in the other properties of the absorption layer upon pentacene doping might affect the performances of the resulting solar cells. Figure 6 shows the absorption spectra, in which the wavelength ranged from 300 nm to 800 nm, of the absorption layers with various pentacene doping contents. As shown in Figure 6, the absorption of the absorption layers increased with an increase of the pentacene doping weight ratio. The absorption enhancement is obviously favorable to the solar cell performance. In general, the absorptivity of the polymer is larger as the electric field of the incident light is aligned parallel to the orientation of the polymer main chains [28]. In other words, the improvement of the P3HT crystallinity enhances its absorption for the light incident perpendicularly to the main chains of the crystalline P3HT. In our case, as observed above by XRD analysis, the intensity of the (100) diffraction peak for the P3HT, which corresponded to an alignment of P3HT main chain parallelly to the substrate [27], increased with an increase of the pentacene doping content. It implied an improvement of P3HT crystallization with its main chain parallel to the substrate. Therefore, according to the previous observation [28], the absorptivity enhancement of the absorption layer was attributed to the crystallinity improvement of the P3HT in the pentacene-doped absorption layer. Furthermore, Figure 7 shows the surface morphologies of the absorption layers with various pentacene doping contents. [13,29]. Based on this observation, it could also be deducted that the crystallinity of the P3HT : PCBM : pentacene absorption layer was improved with an increase of the pentacene content, which was consistent with the above-mentioned XRD measurement results. Moreover, the increased surface roughness could enhance the light utilization via internal reflection and scattering at the roughened surface, which was also beneficial to the exciton production. To clearly demonstrate this feature, the reflectivity spectra of the P3HT : PCBM : pentacene inverted PSCs with various pentacene doped absorption layers were measured and the results are shown in Figure 8. As shown in Figure 8, the reflectivity of the inverted PSCs was slightly decreased with an increase of the pentacene doping content, which implied that the diffused reflection light from the roughened surface was more effectively absorbed by absorption layer. Besides, the roughened surface increased the contact area between the polymer film and the metal anode. Therefore, the photocurrent of the PSCs could be increased [30]. As demonstrated in the above discussion, pentacene doping improved the crystallinity of the P3HT in the absorption layer, which in turn caused changes in the carrier mobility, absorption, and surface roughness of the absorption layer. All of these changes affected the performances of the resulting devices. Figure 9 shows the -characteristics of the inverted PSCs with absorption layer of various pentacene doping contents. The photovoltaic characteristics of solar cells A, B, C, D, and E, including short circuit current density ( sc ), open circuit voltage ( oc ), fill factor (FF), and power conversion efficiency (PCE), were derived from the measured -characteristics and the results are listed in Table 2 be evidenced from the relationship of oc with the reverse saturation current density 0 [31]: where is the ideality factor, is the electron charge, is Boltzmann's constant, is the absolute temperature, ph is the photocurrent density, and 0 is deduced by extrapolating the linear regions of the dark current density-voltage curve ( Figure 10) to = 0. These devices with various pentacene doping contents exhibited similar dark current density and correspondingly had the similar oc . It could be seen that the performances, other than oc , of the inverted PSCs were improved by doping pentacene of a low ratio into the absorption layer and were optimized when the weight ratio of pentacene was 0.065 (solar cell D). For this optimized cell, the sc and PCE were 11.26 ± 0.04 mA/cm 2 and 4.31 ± 0.03%, respectively, which were obviously better than those of 9.73 ± 0.03 mA/cm 2 and 3.39 ± 0.02% for solar cell A. To further investigate the variation of the above-mentioned solar cell performances, the external quantum efficiency (EQE) of the inverted PSCs with various pentacene doping contents was measured in the wavelength ranged from 300 nm to 800 nm. The results, as shown in Figure 11, exhibited a similar variation as the pentacene content varied. For example, at the wavelength of 515 nm, the EQE of solar cells A, B, C, D, and E was 51.4 ± 0.1%, 55.0 ± 0.1%, 57.4 ± 0.1%, 59.5 ± 0.1%, and 58.3 ± 0.1%, respectively, in which solar cell D was the best one. The results discussed above indicated that the solar cell efficiency (EQE or PCE) was increased with an increase of the pentacene doping content in the absorption layer when the doping ratio was low and reached the maximum at the doping ratio of 0.065. When the pentacene doping ratio was further increased to 0.07 (solar cell E), the EQE was 8 International Journal of Photoenergy degraded. The efficiency improvement at lower pentacene doping ratio could be attributed to both the enhancement in the absorptivity of the absorption layer and the improvement in the carrier mobility balance. Obviously, the efficiency degradation at higher doping ratio could not be ascribed to the change in the absorptivity of the absorption layer. As mentioned above, the absorptivity increased monotonically with the pentacene doping ratio, which tended to enhance the solar cell efficiency. However, it was noticed that the efficiency of the solar cell varied in a similar way as the carrier mobility ratio did (see Table 1). The optimal efficiency was achieved at the same pentacene doping ratio when the mobility in the absorption layer was properly balanced. At a higher doping ratio, say 0.07, the EQE degraded, while the carrier mobility ratio was increased to 1.044 ± 0.001, departing obviously from the balance. This unbalanced mobility resulted in an accumulation of low mobility carriers, causing an increase in carrier recombination and a decrease in PCE and EQE. This kind of correlation between EQE and the mobility balance implied that the carrier mobility balance played an important role in the efficiency variation.
Conclusion
In summary, the inverted PSCs with various pentacenedoped absorption layers were fabricated. Using the SCLC method to measure and estimate the hole mobility and the electron mobility in the resulting absorption layer with various pentacene doping contents, it was revealed that the carrier mobility in the absorption layer could be modulated by doping various pentacene contents. In particular, the required carrier mobility balance ( ℎ / = 1.000) was obtained in the P3HT : PCBM absorption layer with the pentacene doping ratio of 0.065. Using the absorption layer with the balanced carrier mobility could reduce the carrier recombination in the absorption layers and hence enhance the photocurrent of the resulting inverted PSCs. Moreover, more electron-hole excitons were generated in the pentacenedoped absorption layer due to larger absorptivity and larger surface roughness, which provided additional contribution to the performance improvement. The maximum PCE of 4.31 ± 0.03% was obtained for the inverted PSCs with the pentacene doping ratio of 0.065 in the absorption layer. | 4,503.8 | 2014-04-08T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
On a Riemann–Liouville Type Implicit Coupled System via Generalized Boundary Conditions †
: We study a coupled system of implicit differential equations with fractional-order differential boundary conditions and the Riemann–Liouville derivative. The existence, uniqueness, and at least one solution are established by applying the Banach contraction and Leray–Schauder fixed point theorem. Furthermore, Hyers–Ulam type stabilities are discussed. An example is presented to illustrate our main result. The suggested system is the generalization of fourth-order ordinary differential equations with anti-periodic, classical, and initial boundary conditions.
Introduction
The generalization of ordinary derivatives leads us to the theory of fractional derivatives. The concept of fractional derivatives was established in 1695, after the well-known conversation of Leibniz and L'Hospital [1]. Mathematicians like Riemann, Liouville, Caputo, Hadamard, Fourier, and Laplace contributed a lot and made the area more interesting for researchers. A fractional-order derivative is a global operator, which may act as a tool to modify or modernize different physical phenomena like control theory [2], dynamical process [3], electro-chemistry [4], mathematical biology [5], image and signal processing [6], etc. For more applications of the fractional differential equations (FDES), we refer the reader to the works in [7][8][9][10][11]. Furthermore, the theory of coupled systems of differential equations is referred to as an important theory in the applied sciences envisaging different areas of biochemistry, ecology, biology, and classical fields of physical sciences and engineering. For details see in [12][13][14].
The theory regarding the existence of solutions of FDES, drew significant attention of the researchers working on different boundary conditions, e.g., classical, integral, multipoint, non-local, periodic, and anti-periodic [15][16][17][18]. Among the qualitative properties of FDES, the stability property of the solution is the central one, particularly the Hyers-Ulam (HU) stability [19][20][21][22][23][24][25][26]. Stability theory in the sense of HU was first discussed by Ulam [27] in the form of a question in 1940 and the following year, Hyers [28] answered his question in the context of Banach spaces. Recently, generalized HU stability was discussed by Alqifiary et al. [29] for linear differential equations. Razaei et al. [30] presented Laplace transform and HU stability of linear differential equations. Wang et al. [31] studied HU stability for two types of linear FDES. Shen et al. [32] worked on the HU stability of linear FDES with constant coefficients using Laplace transform method. Liu et al. [33] proved the HU stability of linear Caputo-Fabrizio FDES. Liu et al. [34] studied the HU stability of linear Caputo-Fabrizio FDES with the Mittag-Leffler kernel by Laplace transform method.
Higher-order ordinary differential equations (ODES) can be used to model problems arising from the field of applied sciences and engineering [35,36]. The generalization of fourth-order ODES are FDES (1) if α = κ = 4. Fourth-order differential equations have important applications in mechanics, thus have attracted considerable attention over the last three decades. The problem of static deflection of a uniform beam, which can be modeled as a fourth-order initial value problem is a good example of a real problem in engineering [37,38].
This problem has been extensively analyzed, some new techniques were developed and numerous general and impressive results regarding the existence of solutions were established in [39][40][41][42]. Sometimes, mathematical modeling of the various physical phenomena may arise as a coupled system of the forgoing ODES. Furthermore, for η i = −1 (i = 1, 2, . . . , 8), we can obtain anti-periodic boundary conditions which are applicable in several mathematical models, some are given in [43,44].
The manuscript is categorized as follows. For our main results, we establish some basic notations, definitions, and lemma in Section 2. In Section 3, we present existence, uniqueness, and at least one solution of system (1) by applying the Banach contraction fixed point theorem and Leray-Schauder fixed point theorem. In Section 4, we discuss definitions of HU type stabilities, which help us to show that system (1) has HU type stabilities by two different approaches. In Section 5, by a particular example of the system (1), we show that our results are applicable.
Background Materials
In this fragment, we present basic notations with Banach spaces, definitions of the considered derivative and integral, and lemma, which will be utilized in the next sections.
Similarly, (v, u) S = v S 1 + u S 2 is the norm defined on the product space, where S = S 1 × S 2 . Obviously S, (v, u) S is a Banach space. Definition 1. [45] For a continuous function v : R + → R, the Riemann-Liouville integral of order α > 0 is defined as such that the integral is pointwise defined on R + . Definition 2. [45] For a continuous function v : R + → R, the Riemann-Liouville derivative of order α > 0 is defined as where [α] represents the integer part of α and n = [α] + 1. We note that for > −1, where k i (i = 1, 2, 3, . . . , n) are unknowns.
Existence Theory
This section is devoted to the equivalent integral form of the proposed problem.
Remark 1.
Let µ ∈ C(J), the following κ ∈ (3, 4] order FDE with boundary conditions has the solution where G κ (t, τ) is given by gives Green's function G α (t, τ) of fourth-order ODE with anti-periodic boundary conditions.
For the reason of advantage, we set the following notations: and We use the following notations for convenience: Then, the fixed point of F and the solution of system (1) coincided, i.e., Using Banach contraction theorem in the following, we prove the uniqueness of solution of system (1).
Theorem 1.
Let the functions χ 1 , χ 2 : J × R × R → R are continuous and satisfy the hypothesis: In addition, suppose that where Q α and Q κ are defined by Equations (6) and (7), respectively. Furthermore, 0 ≤ L χ 1 , L χ 2 < 1 (through out the paper). Then, the solution of system (1) is unique. Consider Substituting (10) in (9), we get Therefore, On the same way, we can write Inequalities (11) and (12) combined give For any t ∈ J, and (v 1 , u 1 ), (v 2 , u 2 ) ∈ S, we get and thus we get Similarly, From the inequalities (13) and (14), we get that Therefore, F is a contraction operator. Therefore, by Banach's fixed point theorem, F has a unique fixed point, so the solution of the problem (1) is unique.
The next result is based on the following Leray-Schauder alternative theorem. Theorem 2. [46] Let F : S → S be an operator which is completely continuous (i.e., a map that restricted to any bounded set in S is compact). Suppose Then, either the operator F has at least one fixed point or the set B(F) is unbounded.
Similarly, for every t ∈ J and v, X : J → R, there are ϕ i (i = 1, 2, 3) : J → R + , such that 1, 2, 3) . In addition, it is assumed that Then, the system (1) has at least one solution.
Proof. First, we prove that F is completely continuous. In view of continuity of χ 1 , χ 2 , the operator F is also continuous. For any (v, u) ∈ B r , we have Now by H 2 , we have Therefore, (16) implies which implies that Similarly, we get Thus, it follows from the inequalities (18) and (19) that F is uniformly bounded. Now, we prove that F is equicontinuous. Let 0 ≤ t 2 ≤ t 1 ≤ t. Then, we have Therefore, we get Similarly Therefore, F(v, u) is equicontinuous. Thus, we proved that the operator F(v, u) is continuous, uniformly bounded, and equicontinuous, concluding that F(v, u) is completely continuous. Now, by using Arzela-Ascoli theorem, the operator F(v, u) is compact.
Finally, we are going to check that Then, and Therefore, from (20) and (21), we have Consequently, we get , for any t ∈ J, where Q 0 is defined by (15), which infer that B is bounded. Therefore, by Theorem 2, F has at least one fixed point. Thus, the system (1) has at least one solution.
Stability Results
Let us recall some definitions related to HU stabilities: Suppose the functions Θ α , Θ κ : J → R + are nondecreasing and α , κ > 0. Consider the inequalities given below.
Method (I)
Theorem 4. If hypothesis H 1 and Proof. Let (v, u) ∈ S be the solution of (22) and (w, ζ) ∈ S be the solution of following system: Then in view of Lemma 1, for t ∈ J the solution of (33) is given by: Consider Applying Lemma 3 in (35), we get Using H 1 of Theorem 1 and (6) in (36) Similarly, we can get We write (37) and (38) as From the above, we get where . (40), then by Definition 4 the problem (1) is generalized HU stable.
Conclusions
This paper concluded that the solution of coupled implicit FDES (1) is unique and exists by using the Banach contraction theorem and Leray-Schauder fixed point theorem. Under some assumptions, the aforesaid coupled system has at least one solution. Besides this, the considered coupled system is HU, generalized HU, HU-Rassias and generalized HU-Rassias stable. An example is presented to illustrate our obtained results. The proposed system (1) gives the following well-known system of ODES, which has wide applications in applied sciences [5] • η i = −1 (i = 1, 2, . . . , 8) and α, κ = 4, then we get fourth-order ODES system with anti-periodic boundary conditions. • η i = 0 (i = 1, 2, . . . , 8) and α, κ = 4, then we get fourth-order ODES system with initial conditions. Funding: Not applicable.
Institutional Review Board Statement: Not applicable
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable. | 2,328.2 | 2021-05-26T00:00:00.000 | [
"Mathematics"
] |
MIMO Communication Measurements in Small Cell Scenarios at 28 GHz
—Massive multiple-input–multiple-output (MIMO) systems operating in the centimeter-wave (cmWave) and millimeter-wave (mmWave) region offer huge spectral efficiencies, which enables to satisfy the urgent need for higher data rates in mobile communication networks. However, the proper design of those massive MIMO systems first requires a deep understanding of the underlying wireless propagation channel. Therefore, we present a fully digital MIMO measurement system operating around 28 GHz . The system enables to take fast subsequent snapshots of the complex MIMO channel matrix. Based on this method, we statistically analyze the time-dependent channel behavior, the achievable signal quality and spectral efficiency, as well as the channel eigenvalue profile. Furthermore, the pre- sented calibration approach for the receiver enables an estimation of the dominant absolute angle of arrival (AoA) and allows us to draw conclusions about the line-of-sight (LOS) dominance of the scenario. In total, 159 communication measurements over 20 s are conducted in three different small cell site scenarios to investigate the wireless propagation behavior. The measurements reveal the existence of several spatial propagation paths between the mobile transmitter and the base station. Furthermore, an insight into their likelihood in different propagation scenarios is also given.
I. INTRODUCTION
M ORE than ever before, mobile wireless communication networks demand for higher data rates. To meet these requirements, research and industry focus in particular on exploiting the large available spectral resources in the centimeter-wave (cmWave) and millimeter-wave (mmWave) region, the decrease of the cell size to increase the spectral reuse, and the utilization of MIMO systems to achieve a spatial multiplexing gain [1]- [5]. As the path losses increase with higher carrier frequencies, the application in mobile wireless communication networks is limited to small cell scenarios [6], [7]. Furthermore, at these higher frequencies, massive MIMO mobile radio base stations, employing large-scale antenna arrays with hundreds of antenna elements, are realizable in a compact form factor, offering huge spectral efficiencies [8]- [10]. These huge spectral efficiencies are achieved by transmitting uncorrelated data streams to the spatially separated users and exploiting the multipath channel between the mobile radio base station and each user to obtain a spatial multiplexing gain [11]. As a result, the Third Generation Partnership Project (3GPP) lately defined the n257-band between 26.5 and 29.5 GHz offering 3 GHz of spectral bandwidth [12].
To investigate the achievable data rates of massive MIMO communication systems in the n257-band and answer important system design questions, a deep understanding of the wireless propagation channel is required. Note that the propagation conditions determine the expected channel capacity of MIMO systems [13]. In practice, MIMO algorithms and architectures are evaluated in numerical simulations based on models of the wireless propagation channel [14]. Nevertheless, these channel models depend on simplifications of the complicated electromagnetic propagation and thereby never fully reproduce the propagation effects [15]. For these reasons, extensive measurement campaigns have to be performed to characterize the wireless propagation channel and demonstrators are needed to verify the performance and validate channel models.
A. Channel Measurements Around 28 GHz
Untill date, many research groups realized channel sounding systems to investigate the propagation characteristics around 28 GHz as presented in [16]- [46]. Particularly worthy to mention are the extensive measurement campaigns by Rappaport et al. [16] for the 28, 38, 60, and 73 GHz mmWave bands summarized. At 28 GHz, the results for urban scenarios reveal path loss exponents of 2.1 for line-of-sight (LOS) and 3.4 for nonline-of-sight (NLOS) scenarios, which are similar to today's microwave path loss models [16], [47], [48].
Another important research aspect in wireless channel sounding is the analysis of the dynamic channel behavior. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ Therefore, the required measurement times to acquire the channel characteristics at each transmitter and receiver location have to be reduced. To better temporally analyze the wireless propagation channel, Bas et al. [17], [49] presented an MIMO channel sounder at 28 GHz based on a phased array structure that performs fast beam steering. Compared to channel sounders with rotating horn antennas, the measurement time could drastically be reduced down to milliseconds [17]. The channel sounder is used to analyze the outdoor to indoor propagation channel in [50] and [51] and to estimate the angular spectrum, delay spread, and Doppler spectrum in an outdoor micro cellular scenario in [52]. A different approach reducing the channel measurement time is introduced by Tataria et al. [19]. The presented MIMO channel sounder measures the 256 × 128 dual-polarized channel by switching between the different elements. In contrast to previous works, snapshots of the MIMO channel can be acquired in 380 ms.
Beside the extensive channel characterization efforts made, first, MIMO demonstrators operating within the n257-band have been presented in the literature. Researchers from Samsung Electronics demonstrated in [53] first indoor and outdoor coverage tests using a subarray-based (subconnected) hybrid beamforming testbed. This work was extended in [54] achieving data rates of up to 7.5 Gb/s by transmitting four parallel data streams to two mobile stations in close distance. Recently, Yang et al. [55] reported the first fully digital massive MIMO transceiver operating at 28 GHz consisting of 64 antenna elements. In the demonstrator test, 20 noncoherent data streams could be transmitted at the same time to eight user entities resulting in spectral efficiency of 101.5 b/s/Hz. Furthermore, MIMO communication measurements are presented by NTT Docomo in [56]- [58].
B. Main Contributions
To tackle the problem of long measurement times of current channel sounders and analyze achievable communication data rates within realistic small cell scenarios, we present a fully digital 16 × 4 MIMO measurement system operating around 28 GHz. Unlike the channel sounders presented earlier, we analyze the wireless propagation behavior by estimating and evaluating subsequent snapshots of the complex MIMO channel matrix, representing the time-dependent channel response between each transmit and receive antenna assuming a frequency nonselective channel [59]. This method enables us to take snapshots of the channel in much less than a millisecond allowing a good analysis of the dynamic propagation behavior. The main contributions can be summarized as follows.
1) This work presents a method to rapidly acquire narrowband snapshots of the complex MIMO channel matrix, which enables us to investigate the wireless propagation behavior around 28 GHz. 2) We verify this approach and analyze the MIMO wireless propagation channel in a total of 159 measurements in three different small cell site scenarios. For each measurement, the mobile unit is placed at a different location and the received data are recorded for around 20 s. Snapshots of the MIMO channel are estimated for each symbol, i.e., each 128 μs.
3) A calibration approach for fully digital MIMO archi-
tectures is presented and implemented at the receiver allowing the correction of amplitude and phase imbalances between the receive branches. This facilitates the estimation of the dominant absolute angle of arrival (AoA). With simultaneous determination of the spatial positions of transmitter and receiver as well as the receiver orientation, the found AoA allows to draw conclusions about the LOS dominance of the scenario. 4) We present for the first time measurement results for the channel eigenvalue statistics around 28 GHz. This statistic reveals with which likelihood up to four spatial propagation paths can be utilized. Note that the eigenvalues of the channel determine whether spatial multiplexing (Blast-type) communication techniques are wise to be applied [11]. 5) Furthermore, the subsequent snapshots of the MIMO channel matrix are used to evaluate the achievable spectral efficiencies. The measurement results give information about the degradation in spectral efficiency caused by foliage within the wireless propagation paths, as the coherence time is reduced. It should be noted that our MIMO communication demonstrator does not aim to replace the current channel sounder, but rather serves as an complementary approach to analyze the so far insufficiently investigated channel characteristics, as, for example, the time-dependent eigenvalue profile of the channel. In advance, the measured snapshots of the channel matrix can be directly fed into measurement-based MIMO channel models, to numerically analyze novel MIMO communication architectures and algorithms [60], [61]. Note that new architectures and algorithms are mostly evaluated in numerical simulations utilizing abstract MIMO channel models as presented in [62]- [66].
This work is organized as follows. Section II presents the hardware setup as well as the methods for receiver calibration and channel estimation. In Section III, the outdoor measurement scenarios are described in detail. Finally, Section IV discusses the results of the channel analysis around 28 GHz.
II. MIMO CHANNEL MEASUREMENT APPROACH
To investigate the behavior of the wireless propagation channel around 28 GHz, we developed a fully digital MIMO measurement system. The system is designed to measure the multipath channel characteristic emulating an uplink communication scenario between a mobile user with M ant = 4 transmitters and a base station with N ant = 16 receivers. In this section, we introduce the designed hardware setup and explain the developed channel estimation and system calibration approach. Furthermore, the estimation of the dominant AoA is explained and the modulation error ratio (MER) is discussed as a metric for assessing signal quality.
A. System Setup
The measurement system consists of a fully digital 16 × 4 MIMO configuration with four transmit antennas at the mobile user entity and 16 receive antennas at the base station. The block diagram of the system configuration is shown in Fig. 1. To achieve a high sensitivity, a heterodyne architecture is selected, which enables a flexible adjustment of the radio frequency (RF) and intermediate frequency (IF).
1) Mobile Transmitter: At the transmitter side, the training signals for channel estimation are generated by a host computer (PC) connected via Gigabit-Ethernet to two commercial software-defined radios (SDRs) of type USRP X310 by Ettus Research. The SDRs include digital-to-analogconversion, baseband-to-IF conversion, as well as IF filtering and amplification.
To translate the IF to the desired RF frequency band, an RF front end with four symmetrical transmit branches is designed. It consists of a four metal layer printed circuit board (PCB) with a substrate of type RO4003C from Rogers Corporation with a height of 203 μm and a dielectric constant ε r = 3.55. The IF-to-RF conversion and RF amplification are realized by commercially available monolithic microwave integrated circuits (MMICs). The PCB is integrated into a metallic housing for electromagnetic shielding, protection, and better heat dissipation. The mixer includes an internal frequency doubler and the upper sideband of the mixing process is used, resulting in an RF center frequency For the mobile transmitter, four monopole antennas are mounted on a metallic housing to enable a 360 • coverage in the azimuth plane. This makes the mobile transmitter independent of a rotation in azimuth. The monopole antennas have a height of λ 0 /4 at 28 GHz to avoid dips in the elevation radiation pattern. In elevation, the measured half-power beamwidth (HPBW) is 26 • with the main beam direction of 20.5 • upward originating from the ground plane of the monopoles. The tilt by 20.5 • upward is selected to be a good fit for the considered application scenario, where the base station is installed on an elevated position. The measured maximum realized element gain, including connector and feed line losses, is 1.5 dBi. The monopoles are arranged in a square separated by 0.55λ 0 at 28 GHz to achieve uniform coverage in azimuth over the entire 360 • range. If the antennas are not properly spaced, notches in the azimuth radiation pattern would occur.
For the later measurement campaign, the RF front end and SDRs are integrated within a transportable box and placed together with the DC power supply and LO signal generator on a trolley shown in Fig. 2(c).
2) Base Station: At the base station or receiver side, a 16 antenna element board is designed with an element spacing of 5.35 mm, which relates to a spacing of λ 0 /2 at 28 GHz. All antenna elements are realized as microstrip patch antennas using the same four metal layer RO400C PCB as for the RF front end. To increase the antenna element gain, two serially fed microstrip patch elements are vertically stacked, narrowing down the HPBW in elevation direction to 40.8 • . The HPBW in azimuth is 86 • . The measured realized element gain, including the connector and feed line losses, is 4.1dBi. A photograph of the front of the antenna board is shown in Fig. 2(a).
The 16 RF outputs of the antenna board are connected via coaxial cables to four RF back ends each consisting of four symmetric channels performing low-noise amplification, bandpass filtering, and RF-to-IF conversion. The RF back ends are constructed according to the same scheme as for the RF front ends utilizing a four metal layer RO400C PCB, commercial available MMICs, 2.92 mm connectors, and a metal housing adapted to the PCB. Furthermore, the antenna board is mounted together with the RF back-end modules onto a metallic construction, which allows a manual adjustment of the antenna elevation angle. The LO signal for RF-to-IF downconversion is, similar to the transmitter side, supplied externally at half the mixing frequency to each RF front end, as shown in Fig. 1.
Finally, the received and digitized data are transferred via Ethernet to a host PC, where online and offline postprocessing is performed. The receiver noise figure (NF) is calculated based on the information given in the data sheets of the used components to NF ≈ 2.1 dB.
3) Transmitter and Receiver Clock and Frequency Synchronization: GPS-disciplined, oven-controlled crystal oscillators (GPSDOs) by Jackson Labs Technologies, Inc., in combination with active GPS antennas by Ettus Research are employed to synchronize the SDRs and LO signal generators at the transmitter and receiver sides. The GPSDOs provide a high-accuracy 10 MHz reference with a phase noise of −110 dBc/Hz at 10 Hz and a pulse-per-second (PPS) signal to ensure a synchronous sampling between the SDRs. The PPS signal is aligned to the global standard within 50 ns. At the transmitter, the GPSDO is integrated into the first SDR. The 10 MHz reference and PPS are forwarded from the first SDR via daisy chaining to the second SDR. Moreover, the 10 MHz reference is provided to the LO signal generator, as shown in Fig. 1. At the receiver, the GPSDO is integrated into a OctoClock-G CDA-2990 by Ettus Research. The OctoClock-G CDA-2990 has eight 10 MHz reference and PPS outputs that are connected to the SDRs at the receiver. The additional SDR for calibration receives the 10 MHz reference and PPS via daisy chaining. Furthermore, the 10 MHz reference is forwarded by the first SDR via daisy chaining to the LO signal generator at the receiver.
To allow a sufficiently long warm-up time and settling to the global reference, the GPSDOs are switched ON 1 h in advance of the measurements. For the presented measurements in this work, no frequency offset could be detected in the signals recorded at the receiver, which indicates a sufficiently precise global reference. The GPS coordinates provided in this process are also used in the later measurement campaigns to determine the spatial position of the transmitter and receiver.
B. Channel Estimation Principle and Signal Processing
To estimate the MIMO propagation channel, known training symbols are transmitted at the mobile user entity as it is standard in many communication systems [67], [68]. As signal waveform orthogonal frequency-division multiplexing (OFDM) is used. The randomly selected training symbols are modulated using quadrature-phase shift keying (QPSK). OFDM facilitates the separation of the different transmit antennas by using exclusive OFDM subcarriers and enables the estimation of the complex MIMO channel matrix with several measurement points in the frequency domain [69]. By separating the transmit antennas in the frequency domain, the transmitters can be separated at each receive antenna, realizing an estimation of the instantaneous complex MIMO channel matrix. The MIMO channel matrix represents the channel response between each transmit and receive antenna assuming a frequency nonselective channel [59]. To fulfill this assumption, the signal bandwidth has to be smaller than the coherence bandwidth [70]. This also motivates to utilize OFDM, as the frequency-nonselectivity assumption just needs to be true for the bandwidth of a small range of OFDM subcarriers.
Let I ∈ {0, 1, . . . , N c − 1} be an index set addressing the N c OFDM subcarriers and divide it into a subset of indices I d , containing the complex modulated data symbols used for channel estimation purposes and a subset of indices I 0 , containing the positions of all null carriers. It holds Furthermore, the subset I 0 contains the indices of the subcarriers around zero frequency to avoid blockage due to high DC parts I DC ⊆ I 0 , the indices of upper and lower guard carriers I guard ⊆ I 0 , and further recessed OFDM subcarriers for receiver calibration I cal ⊆ I 0 . Hence, no subcarrier index is part of two subsets, meaning that the sets are disjoint so that and is fulfilled. The transmit antennas are separated for the channel estimation process using exclusive OFDM subcarriers. Therefore, the index set The OFDM subcarrier indices follow an interleaved assignment to the different transmit antennas to minimize the spacing between two neighboring subcarriers of one subset I d,m . The OFDM subcarrier spacing is defined as where B s represents the available signal bandwidth and T o is the OFDM symbol duration. Based on the defined index sets, the complex OFDM data frame for one OFDM symbol in the frequency domain X ∈ C M ant ×N c is constructed. The discrete OFDM time-domain signal with sampling time t = q · T o /N c and q ∈ {0, 1, . . . , N c − 1} can be written as [59], [71] where p ∈ {0, 1, . . . , N c − 1} denotes the indices for the OFDM subcarrier frequencies f p = p · f = p/T o . The channel response of a multipath channel can be represented by [72], [73] h(n, m, q) = where n ∈ C N ant ×N c accounts for the additive white Gaussian noise introduced during transmission. The multiplication of the transmit signals with a time-variant channel would lead to a cyclic convolution in frequency domain and thereby to intercarrier interferences (ICIs). To avoid ICI, the OFDM symbol duration has to be chosen smaller than the coherence time of the channel so that the complex channel coefficients h c (n, m, d, q) can be assumed constant over one OFDM symbol. With this assumption, the received signal in the frequency or symbol domain results after discrete Fourier transformation (DFT) to To avoid intersymbol interferences (ISIs), the same OFDM symbol is transmitted continuously, thereby omitting the need for a guard interval. Due to the cyclic properties of the transmit sequence, the time shifting property of the DFT can be exploited, leading to with the channel frequency responsẽ At the receiver, the channel can be estimated using the least squares estimation [74] H f (n, p) = R(n, p) · T ( p) −1 (14) with the known transmit data symbols As the transmitters are separated by their OFDM subcarriers defined in I d,m , the MIMO channel matrix can be estimated toĤ averaging over all subcarrier of each transmitter assuming a frequency nonselective channel for the full signal bandwidth B s = N c · f . It is therefore necessary that the receiver knows the training symbols as well as the OFDM subcarrier indices of the individual transmitters I d,m ∀m.
C. System Calibration and AoA Estimation
An important achievement of the hardware design is the determination of the strongest absolute AoA at the receiver. This requires a correction of the imbalances in amplitude and phase between the 16 RF receive branches, which results from cable length deviations, manufacturing tolerances of the PCBs and MMICs, deviations in soldering, and phase differences of the LO signals. Especially, the used SDRs cause a random phase offset between the branches because there is no possibility to harmonize the phase of the LO signals for IF upconversion and downconversion. Therefore, a calibration branch was added to the hardware design to correct these imbalances. The calibration branch consists of an additional SDR at the receiver generating the calibration signal, which is fed at the IF to a dedicated input port of the receiver antenna board. The receiver antenna board incorporates a mixer that upconverts the calibration signal to RF using an externally supplied LO signal at half the mixing frequency, as shown in Fig. 1. The calibration signal is then split symmetrically by a distribution network and added to the receive signal directly behind the 16 antenna elements using a coupled-line directional coupler. As the calibration signal is known at the receiver and is symmetrically coupled into each receive path, the relative differences between the amplification and phase of the receive branches can be estimated and corrected in the digital domain of the receiver. The amplitude and phase imbalances have to be only corrected with respect to a selected receiver branch. It is important to mention that to enable a real-time calibration, the received and calibration signals have to be separated to avoid interference. This separation is achieved by keeping selected OFDM carriers of the transmitted signal free for the calibration signal. As defined before, the OFDM subcarrier calibration index set is denoted by I cal and it holds I cal ∩I d = ∅. The introduced imbalances are measured for each OFDM symbol in the same manner as in (14) resulting ind whereR represents the received baseband signal matrix, including the superimposed calibration signal. Finally, the result is used to obtain the calibrated MIMO channel matrix with the calibration matrixD = diag{d 0 , . . . ,d N ant −1 }. Based on the calibrated channel matrix, the strongest Ao φ max can be determined. Therefore, the singular value decomposition (SVD) of the calibrated channel matrix is calculated H cal =Ûˆ V H to extract the first receiver-side beamforming vectorû 1 The radiation pattern over the azimuth angle φ using the first receiver-side beamforming vector results to whereû H 1 denotes the Hermitian transpose ofû 1 , C e contains the antenna element characteristics, k = 2π/λ denotes the wavenumber, and d represents a vector with the spatial positions of the active antenna elements. As the first receiver-side beamforming vector enables a beam steering into the direction of the strongest AoA, the corresponding angle can be extracted by finding the maximum in the radiation pattern. The strongest AoA is, therefore, given bŷ which can be compared with the physical azimuth angle between the position of the base station and the mobile transmitter φ bt . The angle φ bt can be calculated using the GPS coordinates of the base station and the mobile transmitter with respect to the view direction of the base station. The angular difference equals zero for scenarios with a dominant LOS path but can have an arbitrary value for NLOS scenarios. This means that the angular difference φ can give information about whether the scenario is LOS dominated. In principle, multiple AoAs can be extracted from the estimated and calibrated channel matrix, by analyzing the radiation characteristic, including all beamforming vectors given byÛ H .
D. Signal Quality and Performance Metrics
As a measure of the signal quality, the MER representing a quasi-signal-to-noise ratio (SNR) is calculated. Before estimating the MER, the received symbols are equalized assuming a frame-based data transmission with frame length L f with periodic appearing training symbols as it is common practice in wireless communications [75]. A one tap equalization is applied using as an equalization matrix whereĤ f ∈ C N ant ×N c ×L f is the result of (14) extended in time domain with sampling times t = k · T o for k ∈ {0, 1, . . . , L f − 1}. On the basis of the equalized receive symbols R eq ∈ C N ant ×N c ×L f , the MER averaged over all receivers and all OFDM carriers is defined by MER = 10 · log 10 with the error matrix (24) and the normalization matrix following the descriptions in [76]. Furthermore, the MER can be averaged over L s = L tot /L f subsequent OFDM frames, where L tot represents the total number of recorded OFDM symbols. As a performance metric serves the spectral efficiency or maximum achievable sum rate given in b/s/Hz and calculated by [77] with the normalized channel matrix ||Ĥ|| 2 = N ant M ant . For the SNR at the receiver γ , we use the calculated MER in the following analysis.
III. OUTDOOR MEASUREMENT SCENARIOS
For the channel measurements, we selected three different cell site scenarios to obtain a realistic picture of the wireless propagation channel. The scenarios were chosen due to their variability in foliage coverage, reflective surfaces, denseness of buildings, availability of LOS and NLOS measurement points, and their angular spread. Within each scenario, we positioned the base station at an elevated position with a fixed view direction and elevation angle. The position of the mobile transmitter is varied within a predesignated measurement area seen from the base station view direction, making an analysis of the different propagation scenarios and view angles to the base station possible. The determination of the exact spatial position of the transmitter and receiver is based on the recorded and over the measurement period averaged GPS data that are manually verified using a map of the scenario. The 50 channels GPS receiver by Jackson Labs Technologies, Inc., provides regarding the manufacturer a measured horizontal position accuracy of better than 0.7 m (root-mean-square value) utilizing a low-cost vehicle puck antenna operated on the roof of a building without any high adjacent buildings blocking the view. For each measurement position of the mobile transmitter, an around 20 s long recording is made. This allows us to analyze the time-dependent behavior of the channel, as, for example, the influence of foliage movement within the propagation paths. In total, 159 measurements are performed and evaluated. The measurements are performed on several days from May to July causing a high amount of foliage within the surrounding area. During the measurements, the weather was partly cloudy and dry. Three different small cell site scenarios were picked for comparison at Campus South of the Karlsruhe Institute of Technology (KIT). For each cell site scenario, the elevation and azimuth view direction of the base station is adjusted upfront to cover the desired area best possible. The scenarios are marked in the satellite image in Fig. 3. The image shows the respective position of the base station (B 1 , B 2 , and B 3 ), their view directions (i.e., the normal vectors to the antenna array plane), and the angular ranges of ±60 • around the view directions. The different cell sites can be described as follows.
1) Scenario I: In the first scenario, the base station is adjusted to cover an open courtyard, which is characterized by a small lake surrounded by buildings on its three sides serving as possible reflective surfaces. Furthermore, a fair amount of foliage belonging to tall trees in the center of the courtyard was present. These blocked a direct LOS propagation between the base station and the mobile transmitter at some of the measurement locations, which shows a significant impact on the SNR. At the furthermost end of the courtyard, two small building canyons run on either side of a building, possibly creating highly reflective environments. In addition, some parked cars prevented the direct LOS path. The base station is positioned on the balcony of an adjacent building in 13 m 2) Scenario II: The second scenario covers an intersection and is dominated by heavy foliage spread over a wide angular range, as shown on the right side of the satellite image in Fig. 3. The base station is thereby placed on the roof top of a building in 17 m height and tilted in elevation by 15 • downward from the horizontal view direction. The heavy foliage coverage is blocking the LOS path at multiple measurement locations, giving possibility to further investigate the influences of foliage onto the propagation channel. Compared with the first cell site scenario, a less reflective environment is present, with a wide street running through the scenario lined by trees and parked cars. Furthermore, occasional wind present on the day of measurement introduced time-variant scattering effects due to movements of the foliage during the measurement times.
3) Scenario III: In the third scenario, the base station is placed on a balcony in 35 m height and the antenna array is tilted downward in elevation by 28 • from the horizontal view direction. This scenario comprises few trees, which in combination with the base station height is leading to measurement distances up to 162 m. Here, urban NLOS propagation scenarios are present at several measurement locations.
The key figures of the different cell site scenarios are summarized in Table I. To enable a realistic mobile communication scenario, the antennas of the mobile transmitter are placed in a height of 115 cm in all scenarios to emulate the typical height of a cell phone carried by a user. Moreover, the height of the base stations is chosen following the urban micro and macro cell scenarios with high user density identified by 3GPP in [78]. Due to a maximum distance of 162 m within the measurements, the atmospheric gap around 28 GHz and the absence of rain during our measurements, the additional atmospheric path losses can be neglected [79].
IV. CHANNEL MEASUREMENT RESULTS
In Section III, the results of the channel measurements are presented. The system parameters used for our measurements are given in Table II. The considerably narrow bandwidth is selected to ensure a frequency-nonselective channel behavior. It should be noted that the presented measurements focus on estimating snapshots of the complex MIMO channel matrix. For more information about the broadband channel behavior or other characteristics as, for example, power delay profiles, we refer to the measurement results presented in [16], [18], [50], [52], and [80]. Nevertheless, it is generally possible to use the demonstrator for broadband channel measurements as the designed RF front ends cover the full n257-band. For this purpose, the IF frequency can be varied in time by controlling the utilized SDRs, and thus, a wide frequency range can be investigated. However, this presupposes a stationary channel over the entire measurement. The modular design also allows the replacement of the bandwidth limiting antennas and SDRs by analog-to-digital converter (ADC) and digitalto-analog converter (DAC) boards with higher sampling rates and processing speeds.
At first, the average MER over the full recording is calculated for each transmitter position and color-coded displayed into Fig. 3. The results show that the MER varies strongly depending on the position of the mobile transmitter. This is caused by the high number of trees in the propagation paths, which leads to large path losses at 28 GHz. For good propagation scenarios MER values of up to 26 dB could be reached. These values are achieved without using any antenna array gain, i.e., an equivalent isotropically radiated power (EIRP) of 11.5 dBm, due to the employed channel estimation technique. The MER values, therefore, look quite promising for future 28 GHz MIMO mobile communication systems. As expected, the highest values could be reached in short distance LOS scenarios at or close to the view direction of the base station. The group of measurement locations marked as g 1 in Fig. 3 shows multiple LOS measurements with different distances between the base station and the mobile transmitter ranging from 64 m to 150 m. The elevation angle decreases thereby from 33 • to 13 • , and furthermore, the transmitter moves away from the base station. The measurements show that the MER only slightly decreases with distance, as a lower elevation angle between the base station and the mobile transmitter leads to a higher antenna element gain at the transmitter To analyze whether the wireless propagation channel is LOS or NLOS dominant, the histogram of the angular difference φ calculated by (21) is shown in Fig. 4, including all mobile transmitter positions. The result shows an LOS dominance within the measurements made. The reasons for this are not only the selection of the mobile transmitter locations but also the fact that the likelihood for multipath propagation decreases compared to frequencies below 6 GHz. This is caused by the higher path losses and absorption by possible reflectors. Besides the peak around φ = 0 • , the angular difference is spread over the whole range. Note that due to the limited number of measurements, not every angular difference is present in Fig. 4.
To analyze the multipath nature of the wireless propagation channel in detail, Fig. 5 shows the cumulative distribution function (CDF) of the four eigenvalues of the channel, including all mobile transmitter locations of all cell site scenarios. The graph reveals the multipath nature of the wireless propagation channel. It can be seen that even in this LOS dominated cell site scenarios in 50% of the cases, the second eigenvalue is not more than 10 dB lower than the strongest one. Moreover, in 10% of the cases, the difference between the strongest and weakest eigenvalue is less than 14 dB.
For a deeper understanding of the multipath behavior of the 28 GHz propagation channel, taking a closer look at the difference between the first and second eigenvalue σ 1,2 is of interest. In Fig. 6, the eigenvalue difference is plotted over the MER for all locations of the mobile transmitter. The eigenvalue difference is averaged in time over the full recording. Furthermore, the distance between the base station and the mobile transmitter is color-encoded onto the measurement points. Fig. 6 shows that for low MER values, the differences between the first and second eigenvalues of the channel are low. This can be explained by the type of scenario causing the low MER. These scenarios mostly have no LOS connection and the distance between the base station and mobile transmitter is comparably high, as shown by the color-encoded points. Hence, if no dominant path exists, the difference between the eigenvalues most likely decreases. Going to higher MER values, the difference in the eigenvalues seems to increase in average as indicated by the trend line. This is mainly caused by LOS scenarios, as reflections over, e.g., buildings are much higher attenuated compared to the direct path. At medium to high values of the MER, all distances are represented supporting the thesis of LOS dominance. Furthermore, the results reveal that at medium and high MER values, the distance between the eigenvalues decreases predominantly if the distance between the base station and mobile transmitter is low. This means that at closer distances, multipath scenarios exist, which can be exploited for spatial multiplexing or diversity transmission.
To investigate the achievable spectral efficiency in the presented mobile communication scenarios, the spectral efficiency for all mobile transmitter positions and all scenarios is shown in Fig. 7. In addition, the course of the spectral efficiency is approximated as a third-order polynomial function using the measurement data. While the spectral efficiency increases with an increasing MER, the uncertainty also rises. This behavior is in line with the observations made in Fig. 6. For high MER values, the scenario may have only one dominant LOS path leading to a low spectral efficiency, as only the first eigenvalue contributes to the transmission. The spectral efficiency, in this case, is dominated by the eigenvalue distribution. At low MER values, all eigenvalues are highly attenuated, which means that the spectral efficiency is dominated by the MER. As, for wider bandwidths, additional frequency-selective distortions will reduce the signal quality, the presented results of the narrowband achievable spectral efficiency can be used as an indicator for the reachable performance. This helps designers of broadband communication systems to put the achieved spectral efficiency into perspective and indicates the amount of additional interference caused by broadband data transmission.
Next, the time-dependent behavior of the propagation channel is analyzed. For this, the beamforming matrices resulting from SVD may be applied to time-delayed instances of the channel matrix. Note that in real communication scenarios, the channel estimate is used during the transmission of the full frame until the channel estimation is updated, utilizing noncontinuous channel estimation approaches. This processing is valid as long as the coherence time is much larger than the frame duration or channel estimation update time. To evaluate the time-dependent behavior, the CDF of the Fig. 8. Comparison of wireless propagation channels using the CDF of the spectral efficiency. To analyze the behavior over time, the calculated beamforming matrices were applied over several channel matrices delayed with τ . (a) Measurement location marked as g 9 within Fig. 3 showing a slow-changing wireless propagation channel meaning a long coherence time. (b) Measurement location marked as g 10 within Fig. 3 showing a fast-changing wireless propagation channel meaning a short coherence time. spectral efficiency is calculated for a slow-and fast-changing environment shown in Fig. 8. In each case, different frame durations given in multiples of the OFDM symbol duration are used. Note that as the channel is changing, the employed outdated beamforming matrix could also lead to an improvement in spectral efficiency. This is caused by a general improvement of the eigenvalues or MER occurring over time. To illustrate the loss in spectral efficiency by using a time-delayed beamforming matrix, the spectral efficiency is calculated for each channel matrix with delayed versions of the beamforming matrices. This indicates the difference between the time-delayed beamforming matrix and the optimum spectral efficiency, which can be reached at this point in time. Note that the minimum delay time is limited by the OFDM symbol duration T o = 128 μs. For a scenario with a long coherence time, we selected the measurement point marked as g 9 within Fig. 3. The communication link is dominated by an LOS connection with an average MER of 16.7 dB and no foliage between the base station and the mobile transmitter. The results in Fig. 8(a) show a slow degradation in spectral efficiency with increasing estimation delay. This means that the channel is changing slowly over time. Even for a high delay time of τ = 16.2 ms, a drop in spectral efficiency of only 1.2 b/s/Hz is reached in 90% of the cases. In contrast, the measurement point marked as g 10 within Fig. 3 is analyzed, showing an average MER of 9.3 dB. Within this propagation scenario, the LOS path is covered by dense foliage, which rapidly changed the channel over time due to motions of leaves from the wind present that day. The difference is visualized in Fig. 8(b). Already, after τ = 128 μs, the wireless propagation channel and thereby the ideal beamforming matrix changed drastically leading to a drop of 8 b/s/Hz in 90% of the cases. Nevertheless, a saturation effect is visible caused by the static nonvariant parts in the propagation environment.
V. CONCLUSION
This work presents a measurement-based analysis of the wireless propagation channel around 28 GHz using an MIMO measurement system. Overall, 159 channel measurements at static mobile transmitter positions have been performed in three realistic small cell site scenarios. The spatial diversity of the channel is analyzed, showing less than 10 dB attenuation of the second path in 50% of the cases, which shows the possibility for spatial multiplexing techniques in future mobile communication scenarios at the edge of the mmWave regime. Moreover, the significant influences of moving foliage are investigated and their effects on the achievable spectral efficiency indicate the constraints for data frame durations. The channel sounder enables an estimation of the complex MIMO channel matrix, which can be fed into numerical simulations to investigate MIMO architectures and algorithms. | 9,336.6 | 2020-07-20T00:00:00.000 | [
"Engineering",
"Physics"
] |
Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso
Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are firstly obtained, which include the power spectrum, time-domain statistics, AR model, and the wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses the logistical regression model with Sparse Group Lasso penalized function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected by using a 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%.
Introduction
Brain-computer interfaces (BCIs), which are communication systems designed to transmit information between the brain and computers or other electronic devices, are currently the most popular technique used in neurological rehabilitation [1]. The system does not depend on the brain's normal pathways of peripheral nerves and muscles but relies on signal acquisition technology to capture the signal generated from brain activity, which is used to control external equipment after analysis and processing. The electroencephalogram (EEG) signal is the brain signal that is obtained by noninvasive electrode acquisition. EEG signal feature extraction and classification have become a hot topic in BCI research.
The biggest problem of BCIs based on EEG signals is the high dimensions of the EEG feature space and the limited number of samples. This has prompted research into EEG channel selection and BCI feature selection. Research into feature selection and channel selection of the EEG signal can be roughly divided into two types. The first type is feature selection methods. Coelho introduced a new artificial immune network algorithm to realize automatic feature selection using the EEG signal power spectral density feature, which used an extreme learning machine as a classifier [2]. Rejer used blind source separation, a genetic algorithm and a forward feature selection method [3,4]. Bhattacharyya proposed a differential evolution and mimetic algorithm for high-dimensional EEG signal power spectrum density feature selection [5]. Noshadi proposed an algorithm which combines Lempel-Ziv with EMD for feature extraction on the EEG signal, using t-test and a forward or backward feature selection method [6]. The second type is EEG signal channel selection methods. Arvaneh proposed a sparse common spatial pattern algorithm and a robust sparse common spatial pattern algorithm for channel selection. The classification results are better than the feature selection method based on Fisher criterion, mutual information, support vector machine, and common spatial pattern or a regularized common spatial pattern [7,8]. He proposed a genetic algorithm for feature selection based on the maximized Rayleigh coefficient feature [9]. Yang proposed a method for channel selection of a specific object based on Fisher discriminant analysis scoring criteria. This method can effectively reduce the number of channels from 118 channels to no more than 11 without significantly reducing the classification accuracy by shortening the training time [10]. Gonzalez proposed a combination of Fisher discriminant analysis and a multiobjective real/binary hybrid particle swarm algorithm, which can maximize the classification accuracy and minimize the number of channels while searching for EEG channels and the classifier parameters [11]. As can be seen, most studies undertake research on EEG signals by either feature selection or channel selection unilaterally. Lasso (least absolute shrinkage and selection operator) is a new regularization method which can be used to select high-dimensional features [12]. Group Lasso is an extended Lasso method [13], while Sparse Group Lasso is a regularization method which combines Lasso and Group Lasso [14,15]. Germán et al. proposed a Lasso feature selection method based on minimum angle regression using fusion characteristics of the EEG, such as the power spectrum, Hjorth parameters, AR model coefficients, and wavelet transform parameters. This method used linear discriminant analysis as the classifier [16]. Experimental results show that this method is superior to traditional methods. Yeh studied the image classification problem of audio and video, using the fusion of the Mel-frequency cepstral coefficients (MFCC) feature, scale invariant feature transform (SIFT) descriptor subfeatures, histogram of oriented gradients (HOG) descriptor subfeatures, Gabor texture features, and edge direction histogram (EDH) described characteristics, and then proposed a multicore learning framework which is based on Group Lasso for feature selection [17]. Xie studied the problem with selection of uncertainty characteristics based on Sparse Group Lasso for data mining and has done experiments on nine types of UCI machine learning datasets [18].
Based on the literature [16], this paper proposes the Sparse Group Lasso method for channel selection and feature selection of the EEG fused feature and estimates model parameters using a combination of the blockwise coordinate descent method and the coordinate gradient descent method. This has the ability to not only select features between channels but also select features within the channel and achieves high-dimensional EEG signal channel selection and feature selection simultaneously, while obtaining better sparse performance and classification accuracy. We conduct experimental verification on dataset 1 of the international BCI Competition IV. The EEG data is firstly preprocessed and then fused features are established from each channel of the multichannel signal; that is, the power spectrum, timedomain statistics, autoregression (AR) model coefficient, and wavelet features are extracted. The wrapped channel and feature selection method is then used. The logistic regression model is penalized with the Sparse Group Lasso to fit the training data, and parameter estimation is obtained using the blockwise coordinate descent method and coordinate gradient descent method. Finally, the test sample is classified using the trained model. The method proposed in this paper includes feature fusion, channel selection, and feature selection, as shown in Figure 1.
Feature Extraction
In study of EEG signal classification problems, an important factor in improving the recognition rate is to extract representative features to represent the EEG signal properly. In this paper, in order to extract the EEG signal features and establish high-dimensional feature fusion comprehensively, we jointly apply four types of feature extraction methods: frequencydomain analysis, time-domain analysis, analysis of time and space, and time-frequency analysis.
Power spectrum estimation can analyze the distribution and change in EEG signal rhythm [19] and capture the event-related desynchronization (ERD) and event-related For the time sequence of each channel of the EEG signal, we extract four commonly used statistical features: the mean value and standard deviation of the time sequence and the mean value of the first difference absolute value and the second difference absolute value of the time sequence.
The AR model is an effective tool for time sequence modeling and it has been widely used in BCI systems [20]. In our experiment, we establish a sixth order AR model for the time sequence of each channel and take the coefficient of the model as a feature of the EEG signal.
Wavelet transform is a type of variable resolution timefrequency analysis method; it has good localization in the time-domain and frequency-domain and is used for EEG signal feature extraction frequently. We use the Db4 wavelet as the mother wavelet in the experiment, make six decompositions of the time sequence of each channel, take the energy of the approximate coefficients and detail coefficients (sevendimensional) as features, and extract four features for each of them: the Shannon entropy, logarithmic energy entropy, and the mean value and variance of the Teager-Kaiser energy operator. This constitutes 55-dimensional wavelet features overall.
Channel Selection and Feature Selection
The feature extraction process described above is carried out for each channel in the time series. While tasks to imagine different movements activate different brain areas, not all regions of the brain's electrical activity are associated with each task, so the fused features established using every channel of the EEG signal have some redundancy. Hence, we need to complete channel selection and feature selection. Channel selection removes channels which are not related to the category of imagined movement. In addition, some of the features have nothing to do with classes other than the category of imagined movement, so feature selection is required. Feature selection considers whether each dimension's characteristic is associated with each category of imagined movement, and selections are made based on the features rather than the channel.
It is well known that the Lasso method can obtain a sparse solution from high-dimensional data. For feature fusion, the method extracts characteristics from each different channel without distinction, adopting the same selection standards, and can realize the process of feature selection, as shown in Figure 2(a). However, the method does not significantly reduce the number of channels. The Group Lasso method regards the fused features extracted from each individual channel as a feature set, and selection is made on a channel basis; that is, all characteristics of the channel are retained or discarded, as shown in Figure 2(b). However, with feature fusion, not all features extracted from a channel are necessarily associated with imagined movement categories, and therefore feature selection within the channel is needed. Therefore, a method is required to increase the sparsity of the feature set among channels and within each channel. The Sparse Group Lasso method is a combination of Lasso and Group Lasso, which can achieve sparsity between groups and within the group. Therefore, in this paper we propose the Sparse Group Lasso method to solve the problem of channel selection and feature selection for EEG signal feature fusion, as shown in Figure 2(c). Additionally, we propose a method that combines the blockwise coordinate descent and coordinate gradient descent to estimate the parameters of the Sparse Group Lasso model, where nonzero model parameters signify that the corresponding feature or feature group is selected and vice versa.
BioMed Research International
First, we provide the proposed logistic regression multiclassification model penalized with the Sparse Group Lasso of the EEG signal. The method can be described as follows: we assume that the training sample set is ( , ), = 1, . . . , , ∈ × is the observation vector, is the number of channels, and is the dimension of each channel. We let denote the multiclass response, ∈ {1, 2, . . . , }. The EEG data used in this paper is two-class data, but in order to make the algorithm more general in our description here, we give an example using a multiclassification model. The logistic regression model is used to represent the conditional probability; then the probability of sample belonging to class is described as Here, is the coefficient matrix, which represents the model parameters which need to be solved, and ⋅ is the th column of . We let = as a reference, and then we can obtain − 1 logistic models: Here, ⋅ = 0. Using maximum likelihood estimation to fit the model, we define matrix with elements as follows: ( The training dataset can be considered as independent observations to simplify calculations. We take the logarithm likelihood function as follows: After adding a Sparse Group Lasso penalty function to (4), the objective function becomes Here, > 0 and when is sufficiently large, is zero, ∈ [0, 1].
( ) is the th group of , which represents the coefficient vector of the th channel fused feature of each class, with dimension , = 1, . . . , .
( ) is the th feature coefficient of the th group of , is the th feature coefficient of , = 1, . . . , × × .
We can see that the Sparse Lasso penalty is a combination of Group Lasso penalty and Lasso penalty, and when = 0 or = 1, it converts to Group Lasso or Lasso estimation, respectively.
As described in a previous study [21], the model parameter estimation algorithm proposed in this paper is composed of three main loops: an outer coordinate gradient descent loop (Algorithm 1), a middle blockwise coordinate descent loop (Algorithm 2), and an inner modified coordinate descent loop (Algorithm 3).
The purpose of Algorithm 2 is to solve the quadratic optimization problem in (7). Since the penalty Φ is separable, (7) can be written as Here, we can use the blockwise coordination descent algorithm because Φ ( ) is convex. Taking the th group (the fused feature coefficients of the th channel), the problem can be simplified to Here,̂( ) represents the estimation of the th group.
Since H is a diagonal matrix, it can be broken down into block matrices of × size. By symmetry of H, we obtain Equation (10) can be rewritten as Here, g ( ) is the group gradient and For Algorithm 3, we rewrite (9) as The two first terms of (12) are considered to be the loss function and the last term is the penalty. The loss is not differentiable at zero due to the L2-norm, and thus we cannot completely separate out the nondifferentiable parts, so the coordinate descent method has been modified for this case. For the th iteration of the th group, we need to find the minimum of the function (̂( ) ): = , and ℎ is the th diagonal of the Hessian block H . Due to the convexity of ( ), we conclude that ℎ ≥ 0. Since the quadratic approximation ( ) is bounded by the constraints below, we obtain̂( ) = 0 when ℎ = 0. When ℎ > 0,̂( ) can be obtained as follows.
If > 0, > 0, and | | ≤ , then̂( ) = 0 and therefore We solve (15) by applying a standard root finding method. We define Δ ∈ × × and can then rewrite the descent direction at zero for function (12): Algorithm 3 (inner loop used in the model parameter estimation algorithm).
Experimental Process and Results
Analysis. The first step was to extract the features from the = 59 channels of the EEG signal. For each channel signal, a 5-dimensional power spectral feature, 4-dimensional time-domain statistical feature, 6-dimensional AR model coefficient feature, and 55dimensional wavelet decomposition coefficient feature were extracted. Thus, the fused features of each channel were 70dimensional.
In this paper, the Sparse Group Lasso method was firstly used for EEG signal processing. The features of each channel were extracted as a group, that is, ( ) , where = 1, . . . , , with = 59 groups in total. Here, = ( (1) , . . . , ( ) , . . . , ( ) ), where ( ) ∈ × . In experiments, we used the wrapped Sparse Group Lasso method for channel and feature selection. At first, the feature set consisted of features extracted from each channel of the EEG signal. The combined coordinate gradient descent method and blockwise coordinate method were then used to solve the objective function with a corresponding penalty term to get the parameter estimation results, based on the training data logistic regression model. A 10-fold cross-validation method was applied to select the parameter estimation with the highest training accuracy rate as a result of channel and feature selection. Finally, the test data corresponding to the selected channel and feature subset for the trained model under test was used to calculate the test error rate.
The first experiments use datasets A and E as follows. For dataset A, the method proposed in this paper can be compared to a type of feature extraction method and feature fusion method, respectively, with results shown in Table 2.
From Table 2 it can be observed that, compared with the AR coefficient and wavelet coefficient features, the feature fusion obtains a lower error rate for simultaneous channel and feature selection. For the power spectrum characteristic and the time-domain statistics characteristic, although the feature fusion error rate is slightly higher, the feature fusion method has obvious advantages for channel selection.
Electrode position point Therefore, it can be concluded that when considering comprehensive performance of the test error rate and channel selection number, a fused feature extraction method is better than a single feature extraction method. Figures 4 and 5 compare single feature extraction and feature fusion of the channel/feature selection for dataset A, respectively, and Figure 6 shows the fused feature channel selection result analysis. Figure 4 shows that the ratio of the number of channels and features selected is lowest when using feature fusion and it better reduces the redundancy of channel and feature. Figure 5 shows that, out of the 18 channels selected by the feature fusion method, 15 channels are included in the selection results from three or more extraction methods, a percentage of 83.33%. In addition, there are 10 channels, F2, F5, FCz, C4, CP1, CP3, P5, P6, O1, and O2, respectively, selected by all four feature extraction methods, and these channels are important for classification of dataset A. Finally, the proposed method includes all of these channels, which indicates that the feature fusion is superior at removing redundant channels and choosing the most relevant channels for signal classification.
As an example, we observe that the 12th (FC1) channel in Figure 6 only contains the power spectrum feature and the wavelet feature; that is, only these features contribute to the classification problem from the four types of heterogeneous feature of this channel. From analysis of all 18 channels, it can be observed that selection frequency of the power spectrum feature and the wavelet feature is 100%, while the time-domain statistic and AR coefficient feature have a selection frequency of 88.89%. Therefore, compared with the time-domain statistic and AR coefficient feature, the power spectrum feature and the wavelet feature are more important in the classification of dataset A. The second experiment followed the same experimental procedure and analysis for dataset E. The results are shown in Table 3 and Figures 7-9.
We can draw similar conclusions from analysis of Table 3; that is, when there is a lower or equivalent test error rate, the feature fusion method can achieve better channel and feature selection. Figures 7 and 8 compare single feature extraction and feature fusion, respectively, for the channel and feature selection of dataset E. Figure 9 is the fused feature channel selection result analysis.
From Figure 7, we can directly observe that the fused feature extraction method achieves better dimensional reduction on the selected number of channels and features. In Figure 8, 23 channels are selected by feature fusion, with 16 channels contained in the selection results from three or more extraction methods, a percentage of 69.6%. Seven of the channels (F6, FC6, CFC8, C5, C3, C4, and CP6) are selected by all four feature extraction methods, and of these six channels (all except CFC8) are selected by the feature fusion method, a percentage of 85.7%. This shows that the feature fusion method can more accurately choose channels which are relevant to the classification. Figure 9 shows that 23 channels (which all include the power spectrum characteristic and wavelet feature) are selected by feature fusion, 13 channels include the timedomain statistic, and 11 channels contain the AR coefficient features. From this, we conclude that the power spectrum characteristics and wavelet feature play a more important role for classification.
The above experiments have shown that the feature fusion extraction method can provide alternative features for Sparse Group Lasso. It is suitable for handling data with high dimensions and can select the most effective features from the data.
The third experiment is as follows. In the following experiments, the feature fusion extraction method is adopted. The comparative results of Lasso feature selection, Group Lasso channel selection, and Sparse Group Lasso channel and feature selection for dataset A are shown in Table 4.
From Table 4 we can see that, compared with the Lasso and Group Lasso, Sparse Group Lasso can guarantee a lower error rate. Sparse Group Lasso selects more characteristics than the Lasso method but chooses a lower number of channels. Since the four datasets were collected from 59 electrodes, and each electrode corresponds to an individual channel for experiments, the channel selection represents the selection of an electrode. As each channel contains 70 characteristics, removing channel redundancy has more significance than removing redundant features. So, Sparse Group Lasso can be used for channel selection and feature selection at the same time with lower error rates. Figure 10 compares different channel and feature selection methods for dataset A. Figure 11 shows the channels and characteristics when parameter = 0.5 on dataset A. Each channel is composed of 70-dimensional features: the power spectrum characteristics are 5-dimensional, the time-domain statistical features are 4-dimensional, the AR model coefficient characteristics are 6-dimensional, and the characteristics of the wavelet decomposition coefficient are 55-dimensional. As can be seen in Figure 11, 18 channels are selected from the full range of channels, and not all features are selected. For example, on the 12th channel, no features are selected between the 775th dimension and the 785th dimension. AR coefficients and time-domain statistics characteristics are stored within this interval, therefore we can determine that the 12th channel does not choose the time-domain statistics and AR coefficient characteristics (also this can be concluded from Figure 6), and similar findings can be observed through further channel analysis. Therefore, it is more intuitive to discover the sparsity between channels and within each channel and then obtain the important features. This further proves that the Sparse Group Lasso method can realize channel and feature selection at the same time.
For the fourth experiment, we compared the performance of Lasso, Elastic Net, Group Lasso, and Sparse Group Lasso for feature selection and the classification problem. The results are shown in Table 5. We present the number of selected channels and features based on Sparse Group Lasso with different values of parameter (0, 0.25, 0.5, 0.75, 1). Sparse Group Lasso is equivalent to Group Lasso when equals 0 and is equivalent to Lasso when equals 1. Group Lasso shares the same grouping method as Sparse Group Lasso, which takes the fused features of each channel as a group and trades off on the group level in order to make the channel selection. Lasso and the Elastic Net method treat the features extracted from all channels equally and compromise on the feature level in order to make the feature selection. We use the packaging method with fused features in the experiment on the four datasets based on Lasso feature selection, Elastic Net feature selection, Group Lasso channel selection, and Sparse Group Lasso channel and feature selection separately. As shown in Table 5, for different datasets, the larger is, the lower test error rate becomes. We can conclude that the Sparse Group Lasso method can obtain the lowest error rate when making channel and feature selection when the parameter setting is close to Lasso ( = 1). Table 5 shows the results of different channel/feature selection methods (Lasso, Elastic Net, Group Lasso, and Sparse Group Lasso) for different datasets. We can observe that, compared with other methods, Sparse Group Lasso obtains the lowest error rate, with the lowest number of selected channels, below 38.98% of the total number of channels for all datasets. The lowest number of selected channels is only 23.73% of the total, which reduces the redundancy of channels significantly. Since channel selection is equivalent to the choice of electrode, it has greater significance than feature selection. The number of features selected by Sparse Group Lasso is below 17.85% for all datasets, with the lowest only 7.97% of the total number of features. We conclude that the comparison shows that Sparse Group Lasso can achieve channel selection and feature selection simultaneously, while ensuring sparsity among channels and features when maintaining an error rate equal to or lower than other methods.
In comparison to other studies such as [22], we have only analyzed training sets, rather than using a testing set where the training set needs to be divided into 100 as 80% and 20% randomly. In the study in [22], 11 channels were chosen artificially: FC3, FC4, Cz, C3, C4, C5, C6, T7, T8, CCP3, and CCP4. This is different from the channels selected by our proposed method, since the previous study [22] used spatial pattern characteristics, while we use frequencydomain characteristics. The test set used, BCI Competition IV dataset 1, is continuous data, while we have piecewise processed the continuous data in order to increase the test samples. Therefore, it is not possible to compare our methods with other previous studies.
Conclusion
Classification of EEG signals is a core part of BCIs, so an effective feature extraction and selection method is the key to improving identification accuracy. For EEG signal processing, we present a new method: wrapped Sparse Group Lasso method for channel and feature selection. The joint application of a variety of feature selection methods was firstly used to establish high-dimensional feature fusion of the preprocessed EEG signals. Then, channels and features are selected in a wrapped way. The logistic regression model penalized with Sparse Group Lasso is fitted on the training data, and parameter estimation is obtained by a blockwise coordinate descent method and coordinate gradient descent method. The best feature subset is selected by using 10-fold cross-validation. Finally, the test sample is classified using the trained model, and the feature extraction method included the power spectrum, time-domain statistics, AR model, and the wavelet coefficients. Fusing multiple features to establish a collection to make a selection is a beneficial research area to explore for EEG signal classification. Experiments have shown that this method can extract the characteristics of the EEG signal more completely, so it is an effective way to improve the signal recognition accuracy. Compared with existing channel and feature selection methods, the results show that the method proposed is more suitable for selecting a subset of fused feature of the EEG signal, as well as being more stable and faster. It can also select a subset which is more relevant to the classification, and the test accuracy obtained on the data used from international BCI Competition IV reached 84.72%. This method is a good choice for future research in pattern recognition topics, such as speech recognition, face recognition, gene classification, remote sensing image recognition, and medical image recognition. | 6,220.2 | 2015-02-24T00:00:00.000 | [
"Computer Science"
] |
Socioeconomic determinants and reasons for non-acceptance to vaccination recommendations during the 3rd - 5th waves of the COVID-19 pandemic in Hungary
Background In Hungary, although six types of vaccines were widely available, the percentage of people receiving the primary series of COVID-19 vaccination remained below the EU average. This paper investigates the reasons for Hungary’s lower vaccination coverage by exploring changing attitudes towards vaccination, socio-demographic determinants, and individual reasons for non-acceptance during the 3rd - 5th pandemic waves of COVID-19. Methods The study’s empirical analysis is based on representative surveys conducted in Hungary between February 19, 2021, and June 30, 2022. The study used a total of 17 surveys, each with a sample size of at least 1000 respondents. Binomial logistic regression models were used to investigate which socio-demographic characteristics are most likely to influence vaccine hesitancy in Hungary. The study analysed 2506 open-ended responses to identify reasons for vaccine non-acceptance. The responses were categorised into four main categories and 13 sub-categories. Results Between the third and fifth wave of the pandemic, attitudes towards COVID-19 vaccination have significantly changed. Although the proportion of vaccinated individuals has increased steadily, the percentage of individuals who reported not accepting the vaccine has remained almost unchanged. Socio-demographic characteristics were an important determinant of the observed vaccine hesitancy, although they remained relatively stable over time. Individuals in younger age groups and those with lower socioeconomic status were more likely to decline vaccination, while those living in the capital city were the least likely. A significant reason behind vaccine refusal can undoubtedly be identified as lack of trust (specifically distrust in science), facing an information barrier and the perception of low personal risk. Conclusion Although compulsory childhood vaccination coverage is particularly high in Hungary, voluntary adult vaccines, such as the influenza and COVID-19 vaccines, are less well accepted. Vaccine acceptance is heavily affected by the social-demographic characteristics of people. Mistrust and hesitancy about COVID-19 vaccines, if not well managed, can easily affect people’s opinion and acceptance of other vaccines as well. Identifying and understanding the complexity of how vaccine hesitancy evolved during the pandemic can help to understand and halt the decline in both COVID-19 and general vaccine confidence by developing targeted public health programs to address these issues. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-024-19267-2.
Introduction
Vaccination has been proven effective in mitigating the health and societal impacts of COVID-19 globally and preventing millions of fatalities [1].However, in Hungary, the cumulative percentage of people receiving the primary series of COVID-19 vaccination was about 11% below the European Union average, and the percentage receiving at least one booster dose of vaccination was 18% below the EU average among the adult population in 2023 (Figure 4 of the Supplementary Material) [2].
In Hungary, both vaccination recommendations and six types of COVID-19 vaccines were widely available during the COVID-19 vaccination campaign in 2021-2022 [3], providing individuals with wide access, free of charge vaccines and a choice of vaccine types to increase vaccination uptake [4].Although the vaccination campaign initially achieved success, with vaccination coverage exceeding the EU average until August 2021, vaccination coverage did not significantly increase since early 2022, when it reached 71% in the adult population (refer to Figure 4 in the Supplementary Material) [2].As a result, a considerable segment of the Hungarian population has been left unvaccinated against COVID-19.The social patterning of COVID-19 vaccination coverage in Hungary since the third pandemic wave has hindered effective epidemic control.Primarily urban, less deprived areas have had the highest coverage, whilst the most deprived areas have had the lowest [3].Despite recommendations for vaccination, widespread vaccine hesitancy could be the reason for non-vaccination.
Vaccine hesitancy refers to delayed uptake or refusal of vaccination despite the availability of vaccination services [5].Underimmunisation and vaccine hesitancy is a major public health concern [6].In 2019, the World Health Organisation (WHO) listed vaccine hesitancy as one of the top ten global health threats [7].Although the prevalence of vaccine hesitancy was increasing before 2020 [8], the COVID-19 pandemic has exacerbated this issue.It has, probably resulted in a high number of deaths, many of which could have been saved if people had followed vaccination recommendations [1].
The Health Behaviour Model (HBM), the 3C, 5C and 7C models provide insight into the reasons behind vaccine hesitancy.The HBM's investigation of correlations between health and preventive behaviours, 3Cs model developed by WHO's Strategic Advisory Group of Experts on Immunization (SAGE) building on three factors with Complacency (perceived risks of the disease, vaccination as a non-priority), Convenience (availability, accessibility, affordability, health literacy) and Confidence (trust in vaccines, safety, delivery, and policy makers), moreover the further developed 5Cs model's supplemet with Collective responsibility (social norms, willingness to protect others) and Calculation (seeking information before the decision) [9,10].The 7C model incorporates two additional factors, Compliance and Conspiracy [11].
Several European countries have reported a high level of COVID-19 vaccine hesitancy, due to demographic factors, poor health literacy, concerns about vaccine effectiveness and safety and mistrust of government and scientific institutions [12][13][14][15][16]. Data collected in the European Union shows that trust in science is negatively correlated, while trust in social media is positively associated with vaccine hesitancy [17].A cohort study conducted in Hong Kong and Singapore between 2020 and 2022 found that four key factors were associated with vaccine refusal in both the 18-59 and over 60 age groups.These factors were mistrust in health authorities, low vaccine confidence, vaccine misconceptions, and political views [18].A Canadian study highlighted the importance of trust in relation to vaccine hesitancy.It found that individuals with high levels of vaccine hesitancy also had significantly lower levels of institutional trust [19].
The reasons for non-acceptance of vaccination recommendations and their prevalence in the population may vary across countries [15].Therefore, it is necessary to investigate the underlying reasons behind Hungary's substantially lower COVID-19 vaccination coverage compared to the EU average.
The aim of this study is to describe the changing attitudes towards COVID-19 vaccination over time during the 3 rd -5 th pandemic waves of the COVID-19, while investigating the socio-demographic determinants and the individual reasons for non-acceptance of COVID-19 vaccination in Hungary.A principal aim of this research was to determine the concerns, fears, and misunderstandings about COVID-19 vaccinations among individuals who did not comply with vaccination guidelines and declined COVID-19 vaccination.Our findings are intended to guide targeted public health interventions to reduce vaccine hesitancy and increase vaccine uptake.
Methods
The empirical analysis of this study is based on the data of MASZK study, hosted by the University of Szeged, in which surveys were conducted in Hungary [20].Data were collected using CATI (Computer-assisted telephone interviewing) methodology between April 2020 and June 2022, once a month, with a sample size of at least 1000 respondents.A multi-step, proportionally stratified, probabilistic sampling procedure was used for sampling, which included both landlines and mobile phone numbers.The sample was representative of the Hungarian population aged 18 years or older by gender, age, education, and type of settlement.Sampling errors were corrected using iterative proportional weighting after the data collection.The data collection was fully complying with the current European and Hungarian privacy data regulations, approved by the Hungarian National Authority for Data Protection and Freedom of Information, and by the Research Ethic Committee of the Medical Research Council of Hungary (resolution number IV/3073-1/2021/EKU). Informed consent was obtained from all survey participants.This analysis focuses primarily on data collected between February 19, 2021, and June 30, 2022, in 17 surveys (see Table 1), starting from the time when vaccines became widely available.However, the survey also evaluated the willingness to receive vaccination prior to the availability of COVID-19 vaccines to the general public, between December 16-22, 2020.To examine the key trends over time, pandemic waves defined by COVID-19 case numbers have been used [3].
We defined vaccine non-acceptance as an individual decision, at the time of the survey, to decline the COVID-19 vaccine when presented with the opportunity to be vaccinated [21].
To investigate which socio-demographic characteristics are most likely to influence vaccine hesitancy in Hungary, binomial logistic regression models were used.Models were fitted using the glm function of the stats package (version 3.6.2) of R [22].In order to compare the impact of the socio-demographic predictors across pandemic waves, Average Marginal Effects (AMEs) were calculated [23] using the margins package of R [24,25].
Multiple-choice questions measured respondents' selfreported vaccination status and willingness to vaccinate, while reasons for not accepting vaccination were measured with an open-ended survey question.Respondents who reported not having received a single dose of the COVID-19 vaccine were asked the following question, separately for each vaccine type available: "Do you plan to get vaccinated with the coronavirus vaccine currently available in Hungary from [manufacturer]?(1) yes, as soon as I have the opportunity; (2) yes, but only after some time; (3) no; (4) don't know."In case the mentioned type of vaccine was not yet available in Hungary, the following question was asked:
"If the [manufacturing country]based [manufacturer] coronavirus vaccine recommended by the health authorities became available, would you be vaccinated with it? (1) yes, as soon as I have the opportunity; (2) yes, but only after some time; (3) no; (4) don't know."
If the respondent indicated that they did not plan to receive the listed vaccines, they were asked the following question: "Please explain in your own words why you do not plan to be vaccinated with any of the vaccines." The responses to the open-ended questions revealed a differentiated picture of the concerns, fears, and misconceptions about COVID-19 vaccines among those who did not accept vaccination.For analysis purposes a category system was developed based on the WHO 5C model, as outlined by Betsch [26], with modifications made to fit the specific context of this study.It was not possible to fully adapt the model, as the questionnaire used only asked about the reasons for vaccine nonacceptance, rather than overall vaccine hesitancy.Thus, we do not have information from those who ultimately decided to take up the vaccine as the question for the reasons of rejection was only asked from those who reported that they have not got the vaccine and neither they plan to take it.Consequently, not all the 5C factors were relevant for this study.Only those factors were selected which could be used in the categorisation of the answers and used these factors in a deductive approach.Nevertheless, there were still answers that could not be classified into these categories.For this reason, an inductive approach was initiated for the classification of these answers.In summary, a mixed approach was employed for the coding of the open-ended responses.Initially, a deductive approach was adopted, with the model subsequently extended through the application of an inductive approach.In the analysis, we therefore developed four pre-defined main categories, including trust barriers, information barriers, risk perception, and other barriers, along with other sub-categories (refer to Table 3 in the Supplementary Material) for identifying reasons for not accepting the COVID-19 vaccination.Table 4 of the Supplementary Material provides an illustrative overview of typical responses categorised into the defined subcategories.
Regarding the reasons for vaccine non-acceptance, 2506 open-ended responses were coded and categorised by two researchers using the pre-defined main and sub-categories.This was carried out by two researchers working independently of each other.During the coding process a response could be coded in more than one category.The pre-defined category system was modified based on the feed-backs from the coding researchers collected after the first 200 responses in the pilot.The degree of agreement between the independent researchers was assessed using Cohen's Kappa for each category, by accepting at least "moderate" agreement with the Cohen's Kappa coefficient of >0.4 [27] (refer to Table 3).All disagreements were coded again by a third researcher.
Results
We categorised data of the 17 surveys into three pandemic waves.Although the time intervals of the data collection do not precisely coincide with the COVID-19 case numbers' defined pandemic waves [3], this categorization serves as a good approximation and help the interpretation and the understanding of results (Table 1).
An exceptionally large database with a total sample size of 17,001 was used for the analysis (Table 2).The analysis was complemented by an additional survey data collected on December 16-22, 2020 (n=1000) to compare the examined epidemic waves with the period immediately preceding vaccination.The overall distribution of the sample is representative of the adult population in Hungary (Table 2).
In December 2020, before vaccines became available for the wider public, only a quarter of the population (24%) expressed willingness to receive the COVID-19 vaccine immediately upon availability.Meanwhile 41% were uncertain, and 35% declined all forms of COVID-19 vaccinations.
During the third wave of the pandemic (25 th January 2021 -4 th July 2021), following the commencement of the COVID-19 vaccination campaign, 47% of people self-reported that they had received a single dose of vaccine and 16% would not be taking any type of vaccine.A significant proportion of individuals were either unsure whether they would be vaccinated (13%) or were waiting to be vaccinated (24%) (Fig. 1).By the fourth wave of the pandemic in 2021, the percentage of individuals waiting for vaccination and those uncertain about receiving the vaccine had almost disappeared (2 and 4%, respectively), and the majority of Hungarian society was basically divided into two groups, those who reported receiving the single-dose vaccine (80%) and those who declined the vaccine (14%).The percentage of individuals who reported not accepting the vaccine remained almost unchanged during the three waves of the pandemic, 16% in the third wave and 14-14% in the fourth and fifth waves of the pandemic (Fig. 1).Socio-demographic characteristics were important determinants of the observed vaccine hesitancy.In the 3 rd and 4 th pandemic waves (and overall, considering all the waves together), women were more likely to be vaccine-hesitant than men, although this significant difference disappeared in the 5 th wave.For all waves, individuals in younger age groups (especially the 30-39 years age category) and those with lower levels of education and lower income were more likely to decline vaccination, while those with chronic disease and those living in the capital city were the least likely.The social factors behind vaccine non-acceptance were relatively stable over time with only minor changes following the introduction of COVID-19 vaccines (Fig. 2).
Of the 2481 open responses, 2270 responses were categorised.The Cohen's Kappa values ranged from 0.41 to 0.92 (for concrete values, see Table 3).Out of the main categories (trust barrier, information barrier, risk perception and other barriers), a significant reason behind vaccine refusal can undoubtedly be identified as lack of trust, specifically distrust in science.68% of the participants identified trust barrier as one of the reasons for vaccine hesitancy.Among vaccine non-accepters a significant majority (66%) attributed their hesitancy to the lack of trust in science.This compares with 39% of respondents who also mentioned facing an information barrier, whereas only 5% associated with any other structural or individual reason (Fig. 3).
Although overall the trust barrier remains stable over time, its composition varies significantly across the three pandemic waves analysed.The reasoning that "the vaccine was developed too quickly" became less popular among responders, while confidence in the effectiveness and safety of the vaccine were declining (although changes in safety were not significant), with fewer people perceiving COVID-19 as a threat.It can also be seen that, as time progresses an increasing number of people believe that COVID-19 vaccination is the sole means of protection.Overall, the categories related to trust in science (and of all the categories surveyed), fear of side effects was the most common reason for not accepting vaccination, mentioned by 26% of the respondents.
Political views are not highlighted in the results, as this study did not focus on them.However, the proportion of respondents was around 4-5% in each wave who did not accept vaccination mentioned the lack of trust in decision-makers or media as a reason for their decision.
Incorrect information was the second most common answer.This category comprised of responses indicating that the decision was based on misinformation.The Fig. 1 Self-reported COVID-19 vaccination attitudes in Hungary during the 3 rd -5 th pandemic waves based on the results of monthly representative cross-sectional surveys Fig. 2 The socio-demographic determinants of vaccine non-acceptance in Hungary during the 3 rd to 5 th pandemic COVID-19 waves (binomial logistic regressions, average marginal effects) Fig. 3 The reasons for vaccine hesitancy among vaccine non-accepters in Hungary during the 3 rd to 5 th pandemic COVID-19 waves most prevalent sub-category was where the respondents referred to self-declared past infections (albeit not recent).The proportion of reference to previous infection significantly increased in the 4 th and 5 th waves (6% of respondents mentioned previous infection during the 3 rd wave, compared to 9-9% during the 4 th and 5 th waves).Additionally, the proportion of responses that fell outside of this subcategory but were identified as a type of information barrier increased in subsequent waves.
The third category was risk perception, where the respondents considered themselves not to be at risk from the virus on account of their own attributes (e.g.youth, robust immune system).The reference to this answer decreased over time (although not significantly).
Discussion
This large-scale, representative study used monthly cross-sectional surveys to show changes in public attitudes towards COVID-19 vaccination during the major pandemic waves in Hungary.
There was a substantial shift in attitudes towards COVID-19 vaccination from December 2020 to December 2021 in Hungary.In December 2020 -before the study period and prior to the introduction of the vaccines,vaccine hesitancy was at a high level in Hungary with a refusal rate of 35% and 41% of the population expressing uncertainty.However, after the introduction of the vaccines, there was a significant decrease in the ratio of vaccine hesitant groups (who were unsure or wanted to wait with vaccination) over time.At the same time, the ratio of vaccine non-acceptors (in our measure, those who have neither got, nor been hesitant to take the vaccine, but completely reject it) remained almost constant over the study period.During the third pandemic wave, vaccine non-acceptance was 16%, while it was 14-14% during the fourth and fifth waves.Our data (and administrative data as well -see Figure 4 in the Supplementary Material) indicates that a plateau was reached in the number of individuals who were willing to be vaccinated by the end of 2021, despite the availability of COVID-19 vaccines in Hungary.This finding remains crucial as the situation has not changed since then, and COVID-19 vaccination coverage with primary series doses has not increased in the following years in Hungary [2].
Hungary has a well-established history of successful disease prevention through compulsory vaccination programs, and thus childhood vaccination coverage is particularly high [28].However, a high uptake of compulsory childhood vaccinations does not necessarily mean that positive attitudes towards vaccination extend to voluntary vaccinations.Voluntary adult vaccines, such as the influenza and COVID-19 vaccines, are less well accepted in Hungary [29,30].
Our study showed, that the acceptance of the COVID-19 vaccine exceeded that typically observed for voluntary adult vaccinations such as the influenza in Hungary, probably due to initial widespread vaccination communication and vaccine-related benefits (e.g.COVID passport).Nonetheless, the vaccination coverage was increasingly lagging behind the EU average, and it was less able to markedly reduce virus transmission and protect vulnerable groups during pandemic surges.Since the primary series vaccination campaign, willingness to receive a COVID-19 booster vaccination has drastically decreased in Hungary [2].A limitation of this study, that it only analyses the first dose of vaccination.For individuals aged 18 years and older, the difference in vaccination coverage between the first and second doses of COVID-19 is only 2.6 percentage point, so our results are likely to apply well to the group of those not receiving the whole primary vaccination series in Hungary, but not for those not accepting the booster doses.Self-reported vaccine acceptance or willingness to receive COVID-19 vaccination may not be a reliable predictor of real-world vaccine uptake, as noted in the study by Andrejko et al. [31].In June 2022, 26% of Hungarian adults had not received a COVID-19 vaccine dose [2].Nonetheless, our findings indicate that the proportion of unvaccinated individuals during that time was only 18% according to self-reporting.It is possible that the overestimation of the percentage of vaccinated individuals in our study was due to selection bias.Individuals who are more likely to be concerned about the pandemic and therefore more likely to be vaccineted are also more likely to participate in COVID-19-related surveys.Furthermore, social desirability bias may lead some unvaccinated individuals to claim they are vaccinated [4,32].Due to this biases, despite striving to produce nationally representative outcomes through the design of the sampling methods and data weighting, the respondents may not fully represent the general adult population in Hungary.However, these biasing mechanisms were assumed to remain constant over time, making the data suitable for trend analysis.
Our results indicate that reluctance to receive COVID-19 vaccination was most prevalent among younger adults, and those with lower educational attainment or financial status, those with chronic disease, and those residing outside of the capital.These results are in line with the existing literature [33][34][35][36][37].A previous ecological study has already suggested a similar association in Hungary [3].Thus, socioeconomic inequalities strongly influence vaccination attitudes in Hungary, and this association appears to be stable over time.The difference in COVID-19 vaccination coverage between Hungary and the EU may be partly explained by the higher proportion of the socio-economically deprived population in Hungary.
The study found that the main reasons for vaccine hesitancy were a lack of trust, particularly in science, and information barriers.In East-Central Europe, institutional trust, including trust in health institutions, has historically been low [38], which was further reduced by the COVID-19 pandemic [39].The findings are consistent with other studies that highlight the link between a lack of trust (both in general and specifically in authorities, institutions, and science), and vaccine hesitant attitudes and behaviours [18,19].In interpreting the research results, it is important to note that the WHO vaccination recommendation at the time of the study was used to ensure that primary vaccination was recommended in the first line for those at the highest risk of severe COVID-19, then, if vaccines were plentifully available, as was the case in Hungary, for the high-risk group, followed by the medium-risk group (which contains all healthy adults) [40].Furthermore, the WHO recommended vaccination regardless of previous infection [40,41], with a specified time interval between the vaccination and the previous infection.These guidelines are especially important in the interpretation of the misinformation category of the reason for non vaccination, especially in the subcategory 'refers to previous (presumed) infection' .Although we could move out some of those respondents from the misinformation category, who have been infected in the last four months (and move them to the fifth, other category), but as the questionnaire only asked for the first PCR test, we could not detect those who has been infected twice, or who detected the infection with other types of tests.Therefore, in this study it was considered misinformed for previously infected individuals to decline vaccination against COVID-19 partially on the basis of naturally acquired immunity.
The results show that while the overall trust barrier remains stable over time, its composition varies significantly across the three pandemic waves analysed.The third wave of the pandemic was the most severe in Hungary in terms of the number of severe illnesses and deaths recorded, moreover, by this time pandemic fatigue had set in.Non-pharmacological measures were gradually withdrawn with the introduction of vaccination [42].Towards the end of the research period, the Omicron variant emerged, causing a milder course of disease than the previously dominant Delta variant [43].The reason for vaccine refusal was increasingly the fact that respondents no longer felt threatened by COVID-19.As public experience with vaccination increased, the argument that the vaccine had been developed too quickly became less prevalent among vaccine hesitants, while more people believed that the vaccine was ineffective as it became clear to people that they could still become infected and even transmit the virus despite being vaccinated [44,45].
As the pandemic progressed, more and more people mistakenly believed that the risks of vaccination outweighed the benefits, and concerns about vaccine safety became the most frequently cited argument, with its share rising steadily.
Conclusion
In this paper we analysed large-scale, representative monthly cross-sectional survey data to reveal changes in public attitudes towards COVID-19 vaccination from the beginning of the vaccination campaign in Hungary.Vaccine hesitancy related to COVID-19 vaccination heavily decreased from December 2020 to December 2021 in Hungary, however the size of the group who radically rejected vaccination did not change over time.Socio-demographic characteristics were an important determinant of the observed vaccine hesitancy in each observed pandemic waves.The analysis of the reasons for vaccine rejection showed that the main reasons behind not acceptance are the lack of trust (especially distrust in science) and having misinformation.Identifying and understanding the complexity of how vaccine hesitancy evolved during the pandemic can help to understand and halt the decline in both COVID-19 and general vaccine confidence by developing targeted public health programs to address these issues.
Table 2
Proportion of the vaccinated and non-vaccinated respondents in the 3 rd to 5 th pandemic waves, by socio-demographic characteristics | 5,818.4 | 2024-07-05T00:00:00.000 | [
"Sociology",
"Economics"
] |
The effects of oil price uncertainty on economic activities in South Africa
This paper investigates the link between oil price uncertainty shocks and key macroeconomic indicators of a net oil importing country, South Africa. Monthly data covering the period 1990:01 to 2015:12 is used. The Structural Vector Autoregressive (SVAR) methodology is applied incorporating realized volatility as an indicator of oil price uncertainty to investigate the short run effects of oil price uncertainty. The Generalised Impulse Response Functions (GIRF) analysis reveals that for most variables, oil price uncertainty shock has an adverse and persistent effect over time. Consistent with GIRF, the Generalised Forecast Error Variance Decompositions (GFEVDs) analysis also points out that oil price uncertainty shocks contributes substantially to variations in real output, inflation and various macroeconomic variables of South Africa. Therefore, SVAR analysis reveals the significant role of exogenous oil prices on the economy of South Africa when price uncertainty shocks exist. The policy implications of these findings are drawn. Subjects: Economics; Environmental Economics; Finance
PUBLIC INTEREST STATEMENT
South Africa is an oil importing economy which highly depends on non-renewable energy. It continues to dominate the region's consumption of both petroleum and total energy. The exogenously determined oil prices potentially generate uncertainties in the whole economy. The purpose of this paper is to characterize the impact of the oil price uncertainty shock on a small open economy like South Africa taking a holistic approach focusing on several macroeconomic variables. The study empirically tracks out how the oil price uncertainty shock affects economic activities in the economy using the Structural Vector Autoregressive (SVAR) model. It employs realised volatility as a proxy for uncertainty as volatility is known to display surges of uncertainty following major shocks. The transmission channels through which spill overs from policy specific shocks follow are also observed. The aim is to trigger conversations around the long-term energy policies of South Africa to mitigate economic uncertainties.
Introduction
The biggest risk facing companies in the oil and gas sector is uncertainty about energy prices. This makes it difficult to calculate the economic benefits of investments over a project's life cycle and may hence impact on economic activities. The characteristic relationship between oil price and macroeconomic activity remains one of the key, debatable and inconclusive subjects in energy economics (Bashar, Wadud, & Ahmed, 2013). Since the 1970s and at least until recently the global fluctuations in oil prices have been considered as a major source of macroeconomic uncertainties around the world. Incidences of low growth, high inflation and unemployment, occurred in most developed economies in the early and late 1970s making oil price changes a significant cause of concern for policymakers (Blanchard & Gali, 2007;Rafiq, Salim, & Bloch, 2009). The oil price uncertainty created by oil price volatility causes adjustments in an economy besides the adjustments from the oil price in levels. Sharp oil price changes affect various economic activities depending on the degree of uncertainty created by oil price volatility, and the attitudes of economic agents towards uncertainty (Ebrahim, Inderwildi, & King, 2014). Most sources of oil price volatility are known to originate from wars, terrorist attacks, conflicts in the Middle East and political events involving OPEC members (Bloom, 2009;Guo & Kliesen, 2005). The resultant uncertainty after such major shocks for example, the OPEC 1 oil-price shock, stem from events exogenous to economic activity (Hamilton, 1983;Plante & Traum, 2012).
While oil demand is more observed in industrialised developed economies, the demand for oil in developing economies is also on an upward trend (Birol, 2007). The South African economy has a convincing track record of growth through privatisation and extensive trade liberalisation as compared to other African economies. The economy continues to dominate the region's consumption of both petroleum and total energy (Nkomo, 2006). However, as a net oil importer South Africa continues to experience increased demand for energy and is expected to be susceptible to oil price shocks regardless of the phase of the business cycle (Kilian, 2008). Changes in oil prices impact real economic activities of oil importing countries on both the demand and supply side (Khan & Ahmed, 2011). From the consumer position, a rise in oil price causes a growth in energy bills, dwindling real money balances held by households and ultimately reducing the aggregate demand (Elder & Serletis, 2010). From the producer position, firms face an increase in production cost which leads to a decline in productivity which in turn negatively affects employment, core inflation, and investment (Lescaroux and Mignon, 2008). Since oil prices affects both production and consumption decisions, it has implications for the terms of trade for oil importing countries. The main interest in this paper is to characterize how uncertainty in oil price affects macroeconomic activity of a small open economy like South Africa.
Theoretical predictions about the effect of oil price uncertainty on economic activity are mixed. On one hand, higher uncertainty (example in oil prices) increases the option value of waiting on investment thus reducing growth (Bernanke, 1983;Bloom, 2009;Dixit, 1992;Elder & Serletis, 2010;Pindyck, 1991). In other words, higher oil price uncertainty will cause firms to postpone irreversible investment decisions, preferring to wait for more information if the cash flow from the investment is dependent upon the oil price. This will result in cyclical fluctuations in the economy (Jo, 2014). Similarly, the real options theory is applicable to durable consumption since durable good expenditures cannot be completely recovered once conducted (Kilian & Vigfusson, 2011a, 2011b. On the other hand, Plante and Traum (2012) show that an increase in oil price volatility is likely to result in increased investment and higher real GDP in a general equilibrium model due to the heightened precautionary savings motives though durable spending decreases temporarily. Also Baskaya et al. (2013) explain that higher uncertainty heightens precautionary savings, which in turn results in higher investment, in a scenario where agents have no access to alternative assets.
Our contribution in this study is to take a holistic approach by focusing on the effect of oil price uncertainty on several macroeconomic variables of South Africa namely: industrial production, inflation, interest rate, exchange rate, trade balance and money supply. This enables us to assess the overall performance of the economy under oil price uncertainty. In addition, this allows for dynamic interaction between the various variables and avoids any misspecification bias. This is lacking in most previous studies including the South African ones which focus on bivariate relationship (Aye, Dadam, Gupta, & Mamba, 2014;Aye, 2015;Dave & Aye, 2015;Elder & Serletis, 2010;Guo & Kliesen, 2005;Jo, 2014 among others). Ron, Kilian, and Vigfusson (2013) and Kilian and Vigfusson (2011b) observed that oil price uncertainty may not have been well captured by commonly used oil price volatility measures; hence some mixed theoretical and empirical findings. Therefore, since the previous South African studies relied on model based oil price volatility measure derived mainly from the GARCH process, the current study adds value by employing realized volatility which is nonparametric and hence does not depend on the model and/or distributional assumptions and also greatly improves the precision of the estimated uncertainty series (Andersen & Benzoni, 2009;Jo, 2014). Moreover, realized volatility measure provides empirical content to the latent variance variable and facilitate direct estimation of parametric models (Andersen & Benzoni, 2009). Although closely related to Rafiq et al. (2009) on the Thai economy in terms of wider variable coverage, the current study differs in that it employs more recent data covering the recent global economic and financial crisis and uses higher frequency (monthly) data which is more amenable to uncertainty analysis (Dedi et al., 2016). Also we employ a theoretically founded model (Structural Vector Autoregressive (SVAR model) which allows us to make the necessary restrictions on the estimated reduced form model, required for identification of the underlying structural model, provided by economic theory unlike the traditional VAR employed in Rafiq et al. (2009) which is an atheoretical approach. Moreover, the current study included additional variables such as money supply and the exchange rate series, the latter being an important variable for the transmission of oil price volatility given that South Africa is an oil importing country and hence integrated into the world economy. The results based on the SVAR model indicate that South African economy is adversely affected by the international oil price uncertainty shocks.
The rest of the paper is organized as follows: section two provides a review of empirical studies. The methodology is presented in section three. Results are discussed in section four. Section five concludes.
Literature review
Employing volatility to capture oil price uncertainty is quite visible in the literature (Elder & Serletis, 2010) and recent evidence documents evidence of volatility in oil prices (Baumeister & Kilian, 2016;Demirbas, Al-Sasi, & Nizami, 2017). This literature review focuses on studies that considered the effect of oil price uncertainty.
For instance, Guo and Kliesen (2005) investigate the impact of oil price shocks on U.S macroeconomic activity over the period 1984-2004. With the realised volatility measure built from daily crude oil future prices traded on the New York Mercantile Exchange (NYMEX), the findings indicate a negative and significant effect of oil price volatility on key US macro-economic variables for instance fixed investment, consumption, employment and the unemployment rate. The authors also hypothesised asymmetric effect stating that large oil price volatility, in the short run may adversely affect aggregate output as it delays business investment by increasing uncertainty or encourage costly resource allocation. The results also suggest that variations in oil prices are less significant as compared to the uncertainty about future prices.
The Vector Autoregressive (VAR) model used in this study has been widely used in empirical literature to identify the purely exogenous shocks to trace out how the economy reacts, thus impulse response analysis. Jo (2014) investigates how oil price uncertainty affects global real economic activity using quarterly data from 1958Q2 to 2008Q3. The study utilizes a VAR model with stochastic volatility, and it is to be noted that the study incorporated realized volatility as an additional uncertainty indicator. The results indicate that oil price uncertainty shock has significant negative effects on the world industrial production, the proxy used for economic activity. Bredin, Elder, and Fountas (2011) investigate the relationship between oil price uncertainty and industrial production for the G-7 countries using structural VAR adjusted for a multivariate GARCHin-mean. The key result indicates that oil price uncertainty has a significant negative effect in four of the G7 countries, Canada, France, the United Kingdom and the United States. In their result, the impulse response analysis also suggests that in the short run, positive and negative oil shocks may be contractionary. Bashar et al. (2013) in the multivariate SVAR framework analyse the direct effects of oil price uncertainty shocks on macro economy of a net oil exporting country, Canada. The results obtained suggest that although the aggregate level of output is not affected by the shocks to oil price level, the oil price uncertainty contribute significant impacts on the economy of Canada. The results also indicate resemblance of an adverse demand shock attributable to higher oil price uncertainty, which is found to considerably decrease both output and price levels.
A study on China is quite interesting to look at considering the size of the country and the nature of the economy. Caporale, Ali, and Spagnolo (2015) investigate the time-varying impact of oil price uncertainty on stock prices in China. The study covers the period 1997:01-2014:02 utilising weekly data and a bivariate GARCH-in-mean VAR estimation on ten sectoral indices. The results points out positive effects of oil price volatility on stock returns in phases characterised by demand-side shocks in all investigated sectors with the exception of the Financials, Consumer Services, and Oil and Gas. The latter two sectors are found to exhibit a negative response to oil price uncertainty during periods with supply-side shocks instead. In contrast, during periods with precautionary demand shocks, the oil price uncertainty effect appears to be insignificant.
Using a logistic transition based autoregressive model (LSTR), Ahmed and Wadud (2017) investigate the effect of oil price volatility on the Australian equity market. They show that a one period increase or a shock in oil price volatility raises volatility of equity in consumer discretion, consumer staple, finance, industry, telecom and consumer staple sectors with equity volatility in industries exhibit a larger and prolonged positive response compared to the equity volatility response of other sectors. Similarly, Luo and Qin (2017) examine the impact of realized volatility and the crude oil volatility index (OVX) on the Chinese stock market. They find that the OVX shocks have significant and negative effects while the impact of realized volatility shocks is negligible, especially after the recent financial crisis. Elder (2018) examine the effect of oil price volatility on disaggregated measures of industrial production namely indexes for industrial production excluding technology and motor vehicles, energy-related special aggregates and non-energy-related special aggregates. They find the effects of oil price volatility are concentrated in activities related to primary energy generation and oil and gas drilling relative to other energy-related market groups. In addition, oil price volatility affects a broad range of special aggregates among the non-energy-related market groups, including aggregates sorted by consumer goods and business equipment.
This analysis can also be viewed as part of the recent literature on the macroeconomic implications of oil price volatility on developing countries. Rafiq et al. (2009) using monthly data from 1993:1 to 2006:4 examines the impact of oil price volatility on essential macroeconomic variables of Thailand in a VAR framework. Realized volatility measure was used to construct the oil price volatility data. As the structural break test points out breaks during the Asian financial crisis, the paper applies two different VAR structures, one for the entire period and the other for the post-crisis period. For the total period, causality tests, impulse response functions and variance decomposition tests show that oil price volatility has substantial effect on investment and unemployment. Conversely, the post-crisis period review indicates that the effect of oil price volatility is transferred to the budget deficit. Dutta, Nikkinen, and Rothovius (2017) focused on analysing the impact of implied crude oil volatility index (OVX) on the realized volatility of Middle East and African stock markets using modified GARCH models. Their findings show that the oil market uncertainty has substantial effects on most of the markets.
With respect to South Africa, there is a growing literature on the impact of oil price shocks and uncertainty. Aye et al. (2014) analyse the impact of oil price uncertainty on manufacturing production of South Africa utilising a bivariate GARCH-in-mean VAR model. The results indicate that oil price uncertainty have a significant negative effect on manufacturing production. Additionally, the paper also establishes that the reactions of manufacturing production to positive and negative shocks are asymmetric.
Another study by Aye (2015) measures oil price uncertainty, utilising conditional standard deviation of the one-step-ahead forecast error for the change in oil price. The study investigates the effect of oil price uncertainty on South African stock returns using weekly data covering the period spanning from 1995:07:01 to 2014:08:30. In the analysis a bivariate GARCH-in-mean VAR framework is used. The results show a negative but marginally significant effect of oil price uncertainty on stock returns. In addition, there is evidence of asymmetric response of stock returns to negative and positive oil price uncertainty shocks. Dave and Aye (2015), using quarterly data from 1960 to 2014, in the GARCH-in-mean VAR model examines the impact of the volatility of oil prices on savings in South Africa. The study measures oil price uncertainty as the conditional standard deviation of the change in oil price one-step-ahead forecast error. The results show that oil price uncertainty negatively affects South Africa's savings. In addition, in both direction and magnitude there are symmetric responses of savings to a positive and negative oil price shocks.
From the foregoing literature review, there is evidence that oil price uncertainty shocks have an impact on macroeconomic variables such as GDP, investment, savings and stock returns among others. The literature also suggests an asymmetric effect of oil price uncertainty shocks on the economy (See also Baumeister & Kilian, 2016;Charfeddine, Klein, & Walther, 2018;Hamilton, 1996Hamilton, , 2003Kilian & Vigfusson, 2011a; on oil price shocks). In addition, there have been a few academic accomplishments made to analyse the impact of oil price uncertainty on economic activities in the context of developing countries. More importantly in South Africa, most of the studies focus on bivariate GARCH-in-mean VAR model and the conditional standard deviation of oil price as a proxy for oil price uncertainty. It would be important to apply the realised oil price variance as a proxy for uncertainty in a multivariate SVAR model apart from the bivariate models to examine the impact of oil price uncertainty shock. This is the gap this study aims to fill.
Data and variables
The study uses time series data on a monthly frequency spanning from January 1990 to December 2015. Monthly data is used as it allows effective capturing of the oil price uncertainty shocks over the years. Admittedly, the span of years covered for the study is quite limited starting from the 90s as this may affect the robustness of the results. However, this choice is practical in order to allow an adequately larger set of variables as some variables could not be found on a monthly frequency prior to 1990. Moreover, monthly data is used instead of quarterly data these provide large number of observations which is an important issue with volatility analysis (Dedi et al., 2016).
Daily oil price is important in this study in determining the key variable of interest, which is the oil price uncertainty, integrated into the SVAR to analyse the effects of the global oil price uncertainty shocks on the South African economy. South Africa is a small net oil importing economy and usually for such smaller economies, the world oil prices are anticipated to be exogenous. The estimated model utilized West Texas Intermediate (WTI) daily crude oil prices from the US Energy Information Administration. The proxy for oil price uncertainty is the series of the realised oil price volatility RV t : ð Þ built from daily crude oil prices.
Industrial production index IP t ð Þ obtained from OECD Statistics was used as a proxy for output as GDP is not available on monthly frequency. Inflation INFL t ð Þis calculated as the percentage change in the consumer price index (CPI), where CPI is acquired from the Federal Reserve Economic Data. Also Trade balance TB t ð Þ, found by subtracting the aggregate imports from the aggregate exports is collected from International Financial Statistics (IFS) as well as South Africa's National Definitions of Money M2 t ð Þ currency. The data for real effective exchange rate ER t ð Þ, short term interest rate IR t ð Þ were sourced from Quantec Financial Conditions Index (QFC) Monthly Report.
Therefore, seven variables as shown in Table 1 are incorporated in the analysis to capture the inter-relationship within a SVAR framework. However, there is no unison on the number of variables essential in any given VAR model to provide a credible analysis of the economy. In a study by Dungey and Pagan (2000) their VAR model had 11 variables, whilst in another study by Kim and Roubini (2000) the authors discuss that seven variables are enough. Rafiq et al. (2009), suggests that most of these variables are appropriate for summarising the relevant dynamics in the macro economy. In the estimation, all the variables were expressed in logarithmic notation with the exception of the inflation.
Measuring uncertainty
Before discussing the effect of oil price uncertainty on the economy, it is crucial to initially discuss how oil price uncertainty is to be measured. This study employs realised volatility to capture the oil price uncertainty following Andersen, Bollerslev, Diebold, and Labys (2001);(2003) who suggests that, given proper conditions, realised volatility is a highly efficient unbiased estimator of returns volatility.
The proxy for oil price uncertainty is the realised volatility series constructed from high frequency daily prices of crude oil. In this study, other measures of volatility are not being used as they are difficult to adopt in a multivariate framework (Rafiq et al., 2009). The realised volatility utilised in this study follows Andersen, Bollerslev, Diebold, and Labys (2003), Guo and Kliesen (2005) and Rafiq et al. (2009), but, applying daily price changes in a month.
Where RV D;t ½ is the past realised oil variance. D t , a positive integer is the aggregate working days in a month thus period of return daily d ð Þ oil price return in a month t ð Þ. The expression (c td À c tdÀ1 Þ is the logarithmic change in c ð Þ the closing crude oil prices in a day d ð Þ and the day before, in month t ð Þ. Realised volatility (RV) is backward looking, that is, it is an ex-post measure of uncertainty. The realised oil volatility is shown in Figure 1. The realised oil volatility series is the summation of daily squared oil returns in a month. In other words, RV is a sum of squared price changes when prices are sampled at high frequency based on daily price data. Hence, realised volatility of oil price on particular day captures uncertainty around the underlying daily closing oil price. This makes realised oil volatility quite a very useful measure of actual change in oil price uncertainty following a policy action. Looking at the recent portion of RV in Figure 1, there appears to be a spike around 2008 which may be associated with the recent global financial and economic crisis.
Empirical model
To address the effect of oil price uncertainty shock on the South African economy the study employs the Structural Vector Autoregressive (SVAR) approach. The model provides multivariate structure where variation in a specific variable relates to changes in its individual lags and to variations in other variables. The SVAR approach is one of the effective methods applied to test interdependence relationships among variables and is useful for structural inference. The SVAR model permits the identification of purely exogenous structural shocks with respect to fundamental economic theory to trace dynamic effects, and has better empirical suitability than other VAR classes (Khan & Ahmed, 2011), especially in a multivariate framework.
To conclude on the suitable number of lag length of the SVAR model, the Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) are used. For policy analysis an appropriate lag length assists in reflecting a long-term effect of variables on others and removing serial correlation and in addition make errors stationary (Sims, 1980;Sims, Stock, & Watson, 1990). Equally, it is crucial to avoid choosing longer lag length as it results in multi-collinearity hitches and will decrease the degrees of freedom; therefore for this nature of models, thus the SVAR multivariate models, the sequential modified Likelihood Ratio (LR) test propose that the best is lag order 1-3 (Wooldridge, 2006).
In the model no great concern is placed on the stationarity of the variables. The SVAR analysis mainly aims to determine an inter-relationship amongst variables not parameter estimates therefore, it may not be crucial to difference variables despite the fact the variables might contain unit root (Sims, 1980;Sims et al., 1990). Differencing variables might discard key signals relating to the co-movements found in the data (Enders, 2004). In addition, (Enders, 2004) also argues against detrending the data, particularly if the objective is to estimate the structural model, as the form of variables in the VAR ought to mimic the actual data generating process. In this case, the estimation of the SVAR in levels is fit on condition that the error terms of the separate VAR equations are stationary and serially uncorrelated (McCallum, 1993;Parrado, 2001). The estimated model could only be estimated in levels given the condition that the error terms in separate VAR equations are stationary and serially uncorrelated. However, as a robustness check we also estimated the model with variables in growth rates.
Following Amisano and Giannini, (1997) and, Khan and Ahmed (2011) the SVAR is specified as: and Ae t ¼ βε t Where, Y t ðY t ¼ RV t : ; INFL t ; IR t ; ER t ; IP t ; TB t ; M2 t . . . ð Þ Þ is a nx1 ð Þ vector of n variables included in the model, A 0 is a nx1 ð Þ vector of intercept terms. A i 's for all i ¼ 1; . . . ; p are nxn ð Þ coefficient matrices with p number of lags, which allows the capturing sufficient dynamic interactions between the n variables within the model. Structural error terms ε t are a nx1 ð Þ vector, assumed normally distributed with mean of zero plus mutually orthogonal thus diagonal variance covariance matrix E ε t ε t 0 ð Þ¼I is an identity matrix. A is an invertible nxn ð Þ vector of coefficients of contemporaneous relations between all endogenous variables comprising of ones on the diagonal axis and important in the identification process. Similarly, β is nxn ð Þ vector of structural coefficients indicating effects of structural shocks.
However, the fundamental problem with equation (2) is that it cannot be estimated and observed directly to obtain specific values of ε t and coefficients A and A i s. The standard form of equation 2 can be derived by pre-multiplying the structural VAR in equation 2 with A À1 represented below.
Where A i à ¼ A À1 A i , and e t (e t ¼ A À1 βε t ) represents the standard form VAR residuals vector uncorrelated with variables Y t . Forecast error e t is normally and independently distributed and the residual variance covariance matrix Ω ¼ E e t e t 0 ð Þ. Consequently, ordinary least squares (OLS) estimation provides consistent estimates of A i à and the estimates of Ω can be derived from the fitted residuals.
The VAR equation only specifies lagged terms on the right-hand side, therefore the standard form VAR in equation (4) is incapable of displaying the contemporaneous relationship between variables which causes cross correlation between residual series. Tang, Libo, and Zhang (2010), specifies that while covariance matrix of residuals E ε t ε t 0 ð ÞÞI, does not affect the efficiency and unbiasedness characteristics of estimation, the contemporaneous link can have an impact on the impulse responses. Since equation (2) is not directly observable, the answer is acquired through an alternative relation between the structural VAR equation (2) and the reduced form VAR equation (4) as: Applying the relations in (5), the structural coefficient in equation (2) can be recovered from the standard form equation (4). The residuals e t are presumed to be linearly linked to structural shocks, such that e t ¼ βε t , where forecast errors e t are linear combinations of the structural shocks. To recover structural shocks and parameters it is crucial to impose identifying restrictions on matrix A and β, thus AΩA 0 ¼ ββ 0 based on economic intuition, using reduced form estimation.
Given the 2n 2 unknown elements in A and β, it is objective to impose ðn 2 þ nÞ=2 restrictions. In addition, Amisano and Giannini (1997) specify that additional n 2 À ðn 2 þ nÞ=2 restrictions on matrix β are required. In this case, there are seven variables in the model; therefore, 21 additional restrictions are necessary for the model to be estimated.
The reduced form VAR is multiplied by A and β matrices to get the structural model shocks and the contemporaneous relations among variable. Thus AA À1 is equivalent to an identity matrix, as a result the structural parameters can be uncovered. Since industrial production and trade balance is not contemporaneously affected by financial variables, a lower triangular matrix appears to be more consistent with economic theory. Once convergence is obtained and the estimated output being displayed in the VAR window, there is need to compute an LR test to check overidentification.
Another way of analysing the role of uncertainty is by analysing the symmetry of the response functions to positive and negative oil price changes (Jo, 2014). The impulse-response analysis is conducted to evaluate the short-run dynamics of the variables by tracing out the effect of realized volatility shock on the economy. Variance decompositions are also analysed to track the transmission channels.
Results
The preliminary results of the VAR estimates indicate that the residuals are stationary as shown in Figure 2 and also not serially correlated. The stability condition is also satisfied and no roots lie outside the unit circle. 1 Although our preliminary VAR results show a significant negative effect of realised volatility on South Africa's industrial production, and a rise in inflation and interest rates, a critical analysis in relation to the sign relationships and exactly how long the effect of oil price uncertainty shock remain effective through the system on macroeconomic variables is warranted. 2 The variance decomposition and impulse response functions are very suitable in order to observe these relationships.
In relation to the impulse responses and the variance decompositions, it is frequently observed that the outcomes of these tests remain sensitive to the ordering of the variables. The ordering chosen in this paper is; realised volatility, industrial production, trade balance, M2, interest rate, exchange rate, and inflation. However, to avoid the problems that may come with ordering of variables the mean of Generalised Impulse Response Functions (GIRF) analysis and Generalised Forecast Error Variance Decompositions (GFEVDs) analysis are used in the SVAR analysis.
The SVAR model is estimated in levels of the variables, using 8 lags. On the basis of the Akaike Information Criteria (AIC) 3 lags were suggested but though could not be used in the model as it was observed to be too small resulting in residual autocorrelation in a number of equations. Therefore, following Bashar et al. (2013) the study utilises lag order of 8 which is found to be smallest lag order that resolves residual correlation. About 21 restrictions are imposed to identify the model. The contemporaneous short run restrictions on SVAR on levels are imposed in-order to get the coefficients. The coefficients indicate that oil price uncertainty has a significant negative impact on industrial production and inflation. This is consistent with the theoretical expectations of disruption to the general price level and investments that may discourage firms to produce. 3 The discussion that follows focuses on the two main tools of analysis consistent with existing literature on SVAR: impulse responses and the variance decompositions.
In examining the short run dynamics, the study employs the Generalised Impulse Response Functions suggested by Pesaran and Shin. (1998). The GIRFs appears more suitable in contrast with Sims's (1980) orthogonalised impulse response functions, since they do not change with the ordering of the variables (Galesi & Lombardi, 2009 positive standard unit shock is applied on the realised volatility up to a limit of 24 months horizon. The GIRF of each variable to the monthly realised volatility, thus oil price uncertainty shock is displayed in Figure 3.
The paper only focuses on tracing out the dependent macroeconomic variables' responsiveness to realised volatility shock in the SVAR; hence other impulses for the system were excluded. From Figure 3, output proxied by industrial production index, responds negatively to the oil price uncertainty shock and drops immediately after the shock, though still negative. It starts increasing in the second month and drops again in the third month then recovers slowly over the time lags. The impact remained negative, significant and persistent throughout the period. This may imply that when uncertainty exist around oil prices a decline in output is likely to occur due to discouraged and/or delayed investments. Under such uncertain condition, firms would prefer to adopt a wait and see option until perhaps they obtain more information about future cash flow. This is particularly important when these investments are irreversible making capital more risky. As firms delay their investment decisions, this will affect the level of output in the economy negatively.
As expected, inflation rose immediately after the oil price uncertainty shock, but returns to tendency after 3 months lags. It fluctuates a lot in the time lags then in the 13 th month dies out slowly. This suggests that the oil price uncertainty shock may cause inflationary pressure on the South African economy. This is not surprising given that oil is one of the important factors of production and also used for distribution of goods and services. Hence uncertainty surrounding oil prices may lead to higher oil prices which will lead to higher production and distribution costs (Basnet & Upadhyaya, 2015). The increased costs will be transferred to the consumers in a form of higher consumer prices.
The response of short term interest rate indicates an immediate increase after the oil price uncertainty shock reaching a peak in month 3. It reverts slowly in the fifth month somehow indicating an insignificant response over time but however still indicating an increase on impact. This may be due to the reaction of the monetary policy authority to curb the rising inflation during uncertainty. The money balances (m2) response to oil price uncertainty shock observed is at first positive thus rising above levels of oil price shocks. After a 2 months lag the money balances rose but returns to its tendency. The impact fluctuates around the tendency and declines slowly after month 5 and continues to decline slowly after the 8th month lag. The response of M2 may hint how monetary authorities adversely respond to oil price shocks. Again this may be connected to the increasing inflation in the economy arising from oil price uncertainty shocks. Theoretically, the monetary authority will be expected to reduce the supply of money in such situation.
The immediate response of trade balance to a shock in oil price uncertainty is negative though this became positive within the first two periods which may be due to a temporary postponement of energy imports. The response fluctuates a lot and stabilises over time and after the 17 month lags and remains stagnant up to the 24 months. The immediate fall in trade balance is intuitive. This is because uncertainty arising from oil price volatility may reduce international trade flows since it increases the risks faced by both importers and exporters. Oil price fluctuations can increase uncertainty about the future path of the oil price, causing consumers to postpone irreversible purchases of consumer durable goods and also causing firms to postpone irreversible investments. The reduction in domestic consumption and investment expenditures implies a reduction in aggregate demand, and thus international trade (Bernanke, 1983;Bloom, 2009;Chen & Hsu, 2013). Also another intuitive explanation is that high fuel costs and fuel cost variability induce an increase in transport costs, which will reduce international trade directly (Chen & Hsu, 2013). As increase in oil prices increases the cost of imported raw materials and capital goods this will cause trade deficit and consequently trade balance will decline.
The real effective exchange rate appreciates instantaneously subsequent the shocks to realised volatility up to the fourth month. After the fifth month, it declines slowly and revert its tendency in the 7 month and depreciating over the next 24 months. This is in line with the reverting behaviour of real effective exchange rate as it is known to return to its pre shock levels once all prices and wages have adjusted. Moreover, the standard theory of exchange rate determination suggests that an increase in the oil price causes the currency of an oil-exporting country to appreciate as the demand for its currency increases in the foreign exchange market while an increase in oil price depreciates the currency of an oil-importing country because the supply of its domestic currency in the foreign exchange market increases (Basnet & Upadhyaya, 2015). It is then not surprising that the Rand depreciates in the long term following an oil price uncertainty shock. Overall, according to Figure 3, all variables had an immediate response with expected signs to the oil price uncertainty shock and the impulse responses indicate that for most of the variables, the oil price uncertainty shock appear to be persistent. Although, only output and inflation exhibit significant responses, the negative effect of oil price uncertainty on the macroeconomic variables of South Africa is of concern.
The Generalised Forecast Error Variance Decompositions (GFEVDs) analysis is very crucial in a multivariate framework. GFEVDs offer an analysis tool as it point out on the information each variable contributes in explaining the variations in the other variable. They provide the proportion of the movements in the dependent variables as a result of their own shocks, against shocks to the other variables (Rafiq et al., 2009), in other words the transmission channels of policy specific shocks spill overs. The results of the GFEVDs over a 24 months forecast horizon shocks are displayed in Table 2.
There is a noteworthy contribution of oil price uncertainty shock to output of 5.01% over the 24 months horizon thus oil price uncertainty shock explains between 0.30 and 5.01% variations in domestic output. The same pattern is also observed for the all the variables. Similarly there is a notable contribution to money balances over the 24 months horizon where the oil price uncertainty shock explains between 0.37 and 5.05% variations in money balances. The variations in money supply are theoretically expected as increased volatility may result in reduced money balances over time for an oil importing country.
Moreover GFEVDs displays that inflation rate is significantly affected by the oil price uncertainty shocks. Similarly the contribution of oil price uncertainty shock to inflation is persistent over the months. The results propose that the contribution to inflation is quite significant between 2.77 and 6.25% between 2 month and 24 month forecast horizon. The observed influence of oil price uncertainty shock on inflation implies the quick adjustment of domestic prices in relation to international oil prices when uncertainty is present. In general, the GFEVDs indicates that the oil price uncertainty shocks contribute substantially to variations in inflation, real output and various macroeconomic variables of South Africa, and persistent impact is observed over the longer forecast horizon. Therefore, SVAR analysis reveals the significant role of exogenous oil prices on the economy of South Africa when price uncertainty shocks exist.
Our results thus far indicate that the economy is significantly affected by the international oil price uncertainty shocks. To investigate the robustness of our results, we re-estimate the SVAR using alternative volatility measures, variable growth rates and incorporating an adjusted sample period. Given that RV seems to exhibit a break in 2008M12 we ran a robustness check using data from 2009 to 2015. These results are presented in Figure 4 and confirm the adverse effect of oil price uncertainty shock. We also use the growth rates of all the variables as a second robustness check since the variables have unit root; for the full sample ranging from1990 to 2015. As seen in Figure 5, the conclusions are qualitatively similar to the case of level variables. As a third check we use the CBOE Crude Oil ETF Volatility Index (OVX) instead of the realized volatility also with sample ranging from 2009 to 2015. The results as depicted in Figure 6 though not as precise as the realized oil price volatility measure seems to still support the adverse effect of oil price uncertainty shock. Overall, all the robustness results still indicate the expected immediate response to the oil price uncertainty shock for most of the variables.
Conclusion and policy implications
The study empirically examines the short run effects of oil price uncertainty shock on the macroeconomic variables of South Africa from January 1990 to December 2015. The SVAR methodology is applied incorporating realized volatility as an indicator of oil price uncertainty.
The generalised impulse response analysis reveals that for most of the variables, the oil price uncertainty shocks proxied by realised volatility respond persistently and the impact is significant for output and inflation. It is also visible in the results that oil price uncertainty shocks dampen output due to the postponement of irreversible investments by firms. The GFEVDs analysis also points out that oil price uncertainty significantly contributes to changes in real output and various macroeconomic variables of South Africa with minor impact on interest rate where a marginal contribution is realised. The results additionally suggest that oil price uncertainty shocks transmit mostly through industrial production, inflation, trade and money balances and the exchange rate channels creating significant pressure on South Africa's economic activities. The findings have important policy implications. South African industries greatly benefit from the locally mined coal which accounts for about 70% of the total energy whilst the imported crude oil accounts for about 15% of the aggregate energy used in South Africa (Department of Minerals and Energy, 2006). However, as a net crude oil importer the impact on output is quickly noticed because the country imports vast amounts of crude oil which creates a spill over effect on the economy. . Although the fluctuations in world oil prices cannot be controlled at domestic level, South Africa as a country should adopt such policies that can curb the adverse effects of oil price uncertainty shocks. Policy makers should always be alert on matters linked to oil price uncertainty so that they can be able to manage expectations of economic agents in order to steer expected outcomes of the South African economy. The expected outcomes include more emphasis in the prudent macroeconomic policy to deal with output boost when oil price uncertainty exists. Moreover, given the significant impact of oil price uncertainty shock on inflation, South African monetary authority should be more vigilant in their price stabilization policies to curb this effect. Furthermore, since South Africa depend more on non-renewable sources; renewable energy sources could possibly be an essential option to hedge oil price uncertainty shocks. The country for instance can increase developmental efforts towards hydropower, wind, solar and waste energy investments. Other policy decisions may include transitional arrangements and energy subsidy reforms. Regulations, standards and targets may be implemented more rigorously and effectively to enhance energy efficiency. The need for a policy coordination, not only the macroeconomic and regulatory but also social policy could help to reduce external shocks such as the oil price uncertainty shocks. | 9,878.4 | 2018-01-01T00:00:00.000 | [
"Economics"
] |
Some kinematics of halo coronal mass ejections
Abstract We present an investigation of halo coronal mass ejections (HCMEs) kinematics and other facts about the HCMEs. The study of HCMEs is very important because HCMEs are regarded as the main causes of heliospheric and geomagnetic disturbances. In this study, we have investigated 313 HCMEs observed during 1996-2012 by LASCO, coronal holes, and solar flares. We find that HCMEs are of two types: accelerated HCMEs and decelerated HCMEs. The mean space speed of HCMEs is 1283 km/s while the mean speed of decelerated HCMEs and accelerated HCMEs is 1349 km/s and 1174 km/s, respectively. The investigation shows that 1 (0.3%) HCME was associated with class A SXR, 14 (4.7%) HCMEs were associated with class B SXR-flares, 87 (29.4%) HCMEs were associated with class C SXR-flares, 125 (42.2%) HCMEs were associated with class M SXR-flares and 69 (23.3%) HCMEs were associated with class X SXR-flares. The speed of HCMEs increases with the importance of solar SXR-flares. The various results obtained in the present analysis are discussed in the light of the existing scenario of heliospheric physics.
Introduction
Coronal mass ejections (CMEs) are regarded as the main causes of heliospheric and geomagnetic disturbances. The Halo Coronal mass ejections (HCMEs) propagating near the Sun-Earth direction, either toward or away from the Earth, and appear as an annulus surrounding the solar disk (Chen 2011). General information on CMEs has been well-reviewed by several authors (Chen 2011;Gosling 1997;Howard et al. 1997;Hundhausen 1999;Webb and Howard 2012). The HCMEs are important to us because they are geo-effective and the source regions of front side HCMEs are likely to be located within a few tens degrees of the center of the solar disk (Webb 2002;Gopalswamy 2004). Yashiro et al. (2004) reported that the average velocity of HCMEs is nearly two times of normal CMEs and this makes HCMEs as a very special class of CMEs. Andrews (2002) proposed that many faint and slow speed HCMEs are not observed by coronagraphs and maybe the reason that the average velocity of the observed HCMEs is high. Zhang et al. (2010) used Monte Carlo simulations to investigate how the brightness of CMEs with an average velocity of 523 km/ s is reduced when they are observed as halo events. According to Gopalswamy et al. (2010a,b) partial and full HCMEs occur at a rate of~10% and~4% of all CMEs, respectively. Verma (2011) studied the relationship of X-class soft X-ray flares with CMEs and HCMEs and found that energetic X-class solar flares are related to 79% solar CMEs and 46% HCMEs, respectively.
Early measurements of the speeds of CMEs suggested that there are two distinct types of speed profiles: slow CMEs, and fast CMEs (Gosling et al. 1976). Sheeley et al. (1999) also classified CMEs into two classes: gradual CMEs having speeds 400 to 600 km/s, associated with erupting prominences, and impulsive CMEs having > or equal 750 km/s, associated with solar flares. Moon et al. (2002) support the concept of two types of CMEs: flares and eruptive filament associated CMEs. Low & Zhang (2002) found that the speed-height profiles of all CMEs do not form two discrete populations but show a continuous spectrum that does not support the view of two classes of CMEs. Low & Zhang (2002) presented a qualitative theory in which the two kinds of CMEs are represented by different initial states of the erupted magnetic configuration. Chen & Krall (2003) concluded that one mechanism is sufficient to explain the bimodal speed distribution. Yurchyshyn et al. (2005) analyze the statistical properties of CMEs and they found that the speed distributions for accelerating and decelerating events are nearly the same and they can be fitted with a single lognormal distribution. Zagainova & Fainshtein (2015) presented a detailed study of CMEs relation with powerful flares which not related to eruptive filaments. Fainshtein et al. (2018) also studied the kinematic characteristics of two types of HCMEs (accelerating and slowing down). Recently, Michalek et al. (2019) found that CMEs can be divided into two categories: regular and specific events. The regular events are pronounced and follow the pattern of sunspot numbers. On the other hand, special events are poorer and more correlated with the general conditions of the heliosphere and corona. Reeves et al. (2019) presented a model to simulate a coronal mass ejection using a three-dimensional magnetohydrodynamic code that includes coronal heating, thermal conduction, and radiative cooling in the energy equation. Must recently, Verma & Mittal (2019) find that all HCMEs were observed when there were CHs and solar flares within 10 ∘ to 60 ∘ . They also find that the 128 (40.8%) and 88 (23.6%) HCMEs events were observed when there were CHs and solar flares within 10 ∘ and 20 ∘ , respectively. Verma & Mittal (2019) are of the view that the HCMEs may have been produced by some mechanism, in which the mass ejected by solar flares or active prominences, gets connected with the open magnetic lines of CHs (source of high-speed solar wind streams) and moves along them to appear as an HCMEs. Verma & Mittal (2019) are also of the view that CMEs formation is a two-step process: The first step, Triggering include releasing of materials by flares (etc.) involved in CMEs formation is a necessary condition while in the second step, the reconnection of a bipolar magnetic field of flares or active prominence region with an open magnetic field of CHs is a sufficient condition.
In the present paper, we propose to investigate the HCMEs observed by LASCO/ SOHO satellite during the interval period of January 06, 1997, to September 30, 2012, to understand some facts about HCMEs. We have also plan to classify the HCMEs events as accelerating HCMEs and decelerating HCMEs to understand them in a better way. In section 1 of the paper, we try to give an introduction to various facts of HCMEs related to research work. In section 2 of the paper, we mentioned observational data and analysis. In section 3 we have discussed the results obtained in the present investigation. A summary and conclusions are delivered in the last section 4.
Observational Data and Analysis
After the launch of the SOHO satellite in December 1995, the LASCO telescope with C1, C2, and C3 coronagraph, observed thousands of CMEs (Brueckner et al. 1995). LASCO data is available online (https://cdaw.gsfc.nasa.gov/CME_list/index.html) as described by Gopalswamy et al. (2010a,b). The LASCO instrument is a set of three coronagraph telescopes and recording white light images of the solar corona from 1.1 through 30 solar radii since its launch. The C1 coronagraph record images of the corona from 1.1 to 3 solar radii, the C2 coronagraph record images of the corona from about 1.5 to 6 solar radii, and C3 coronagraph record images of the corona from about 3.5 to 30 solar radii. The LASCO C1 coronagraph failed in June 1998. Our data sample includes 518 HCMEs observed from 1996 April 29 to 2012 September 30. The list of HCMEs is downloaded from the 'CDAW CME catalog available online (https://cdaw.gsfc.nasa.gov/CME_list/HALO/index.html). Meanwhile, the first halo CME recorded by SOHO occurred on 29 April 1996, and its source was located on the backside of the Sun. An HCME with the source on the visible disk and the associated solar flare was recorded on 6 January 1997. During the above period, LASCO coronagraph recorded 518 HCMEs but 205 HCMEs were excluded from the study because of their association with back-sided disk events and thus we are left with 313 halo CME events for the present study. HCMEs data can be obtained from other space missions but we have used only CDAW catalog data to study the kinematics of HCMEs.
The HCMEs list downloaded from the above website show many parameters of each halo CME event. The HCMEs catalog lists many parameters of each event. In particular, the date and time of the first appearance of the CME in C2, the apparent speed (km/s), the space speed, the acceleration (m/s 2 ), the measured position angle, the locations of the associated source (flares, etc.), the soft X-ray class of the associated SXR flare, the onset time of the flare, the related daily movies and plots and remark about the event.
HCMEs and solar flares
The various detail of HCMEs observed between 1996 and 2012 is shown in Table 1. Column 1 shows the year of observation, the columns 2-6 show the numbers of all CMEs, HCMEs, HCMEs with incomplete data (ID), accelerating HCMEs, and decelerating HCMEs. From Table 1 it is clear that the numbers of HCMEs are increasing from solar minimum to solar maximum, following sunspot cycle indicating CMEs belong to a class of solar active phenomena. It is also clear from Table 1 that the numbers of HCMEs are only 2.67% of CMEs observed in the period 1996-2012. In this investigation, we have considered 313 HCMEs for study but the location and class of 17 SXR flares associated with HCMEs are not known and thus we excluded 17 halo CME events. The data of 296 HCMEs associated with SXR flares whose locations and importance class are known and the data 295 HCMEs are used to study the disk distribution of HCMEs and relation between the speed of HCMEs associated SXR flares. We investigated the relationship between the space speed of HCMEs and the importance of SXR solar flares. We also found that the space speed of HCMEs is positively associated with the class of SXR flares as shown in Table 2.
The columns 2-6 lists the number and the mean and median space speed of the HCMEs associated with an X-ray class A, B, C, M, and X flare, respectively. The values are given for all (row 2), decelerating (row 3), and accelerating (row 4) HCMEs." The locations and classes of 313 HCMEs associated with SXR flares are investigated and found that the location and class of 17 HCMEs associated with SXR flares are not known. The solar disk locations of SXR flares associated with HCMEs are shown in Figure 1. In Figure 1 we have plotted solar disk locations SXR flares associated with HCMEs on the x-axis as an east (−90 ∘ to 0 ∘ ) to the west (0 ∘ to 90 ∘ ) longitude in degree and on the y-axis as south (−90 ∘ to 0 ∘ ) -north (0 ∘ to 90 ∘ ) latitude in degree. It is clear from Figure 1 that the source locations of~95% HCMEs associated solar SXR flares occur within ±30 ∘ solar latitude while solar longitude (east-west longitude) shows that the~70% HCMEs related solar events sources are mostly concentrated within ±40 ∘ solar longitude. Gopalswamy et al. (2010a,b) in their study for 247 CMEs shows that about 70% of events occur near the central meridian in the range of ±30 ∘ solar longitude. Hence our result is in good agreement with the previous study of Gopalswamy et al. (2010a,b).
The solar soft X-ray classes of 296 HCMEs associated with SXR flares are known and the plot of the class of SXR flares versus the number of HCMEs is shown in Figure 2. The Figure 2 and Table 2 show that 1 (0.3%) HCME was associated with class A SXR, 14 (4.7%) HCMEs were associated with class B SXR flares, 87 (29.4%) HCMEs were associated with class C SXR flares, 125 (42.2%) HCMEs were associated with class M SXR flares and 69 (23.3%) HCMEs were associ- ated with class X SXR flares. Gopalswamy et al. (2010a,b) in their study shows that halo CMEs have high kinetic energy because they are wider. Gopalswamy et al. (2010a,b) also shows that the mean and median flare size for associated halo CMEs is M1.0. Our result of the present investigation is in good agreement with the result of Gopalswamy et al. (2010a,b).
It is clear from row 2 of Table 2 that the mean and median speed of HCMEs increases with the classes (A to X) of SXR flares. It is also clear from rows 2 and 3 of Table 2 that the mean and median speeds of decelerated HCMEs are more than the accelerated HCMEs.
Kinematics of HCMEs
As mentioned earlier we have 313 HCMEs whose space speed, accelerations, and onset time of flares are known The CME space speeds that are corrected using the cone model by Xie et al. (2004). Further, out of 313 halo CME events, the 133 halo CME events show average positive accelerations while 180 halo CME events show average negative accelerations or decelerations during their journey from the solar surface to solar corona and beyond. The space speed of HCMEs is an important parameter to understand the origin of HCMEs. The space speed of 20 HCMEs is not known and thus after excluding 20 halo CME events, we are left with 292 HCMEs for space speed distribution study. The detail of HCMEs number used in the plot is indicated in Figure 3. The plot of space speed of HCMEs versus the number of HCMEs is shown in Figure 3.
In the upper part of Figure 3, we have shown a plot of space speed of HCMEs versus the number of HCMEs and upper part of Figure 3 also show the number of HCMEs, mean and median values of space speed of HCMEs as 1283 km/s and 1151 km/s, respectively.
In the middle part of Figure 3, we have shown space speed of decelerating HCMEs versus the number of HCMEs and middle part of Figure 3 also show the number of HCMEs, mean and median values of space speed of decelerating HCMEs as 1349 km/s and 1260 km/s, respectively. In the lower part of Figure 3, we have shown a plot of space speed of accelerating HCMEs versus the number of HCMEs and lower part of Figure 3 also show the number of HCMEs, mean and median values of space speed of HCMEs as 1174 km/s and 1051 km/s, respectively.
All CMEs have acceleration at the beginning as CMEs lift off from rest and in this situation, the propelling force of the CMEs exceeds gravity force and other restraining forces. In Figure 4, we have shown a plot of the space speed of HCMEs versus the acceleration of HCMEs. This figure also shows that the most HCMEs with space speed < 1500 km/s has the accelerations < 50 to −50 (m/s 2 ) while HCMEs with space speed > 1500 km/s have also higher accelerations >50 to −50 (m/s 2 ). We try to fit a linear equation and a twodegree polynomial equation to Figure 4 data and we find a very small correlation value which indicates that space speed of HCMEs and acceleration of HCMEs has no relation. From Figures 4, we could not draw any specific conclusion between the space speed of HCMEs and the acceleration of HCMEs which shows poor correlation coefficient; therefore we decided to investigate HCMEs in two parts: decelerating HCMEs and accelerating HCMEs speed, separately.
In Figure 5, we have shown a plot of space speed of HCMEs versus the acceleration of decelerating HCMEs. The linear equation (Y=−0.037X+22.51) is fitted to Figure 5 data and we have calculated the value of the correlation coefficient as R=0.65, which indicates that the acceleration of decelerating HCMEs and the space speed of HCMEs has a fair correlation. The two-degree polynomial equation (Y=−4E−06X 2 −0.024X+18.06) is also fitted to Figure 5 data and we have also calculated the value of the correlations coefficient as R=0.65, which indicate that the acceleration of decelerating HCMEs and the space speed of HCMEs has a fair correlation.
In Figure 6, we have shown a plot of space speed of HCMEs versus the acceleration of accelerating HCMEs. The linear equation (Y=−0.025X−10.30) is fitted to Figure 6 data and we have also calculated the value of the correlation coefficient as R=0.55, which indicates that the acceleration of HCMEs and space speed of HCMEs has a fair correlation. The two-degree polynomial equation (Y= 4E−06X 2 +0.015X−4.21) is also applied to Figure 6 data and we have also calculated the value of the correlation coefficient as R=0.55, which indicate that the acceleration of accelerating HCMEs and space speed of accelerating HCMEs has a fair correlation. An investigation relation between space speed of HCMEs and acceleration of decelerating HCMEs in Figures 5 and an
On Distribution of HCMEs
The distribution of the HCMEs source is shown in Figure 1. From Figure 1 It is obvious that 95% of HCMEs associated solar flares are located between ±−30 ∘ solar latitude on the solar disk and 70% of HCMEs associated solar flares located between ±40 ∘ solar longitude on the solar disk. This shows that majority of HCMEs associated with solar flares are located near the center of the disk and support the earlier results (Webb 2002;Gopalswamy 2004;Yashiro et al. 2004). According to Gopalswamy (2004) full HCMEs for period 1996-2003, occur at a rate of~4% of all CMEs but in the present study, we find that full HCMEs for period 1996-2012 occur at the rate of out of~2.67% which is less than the value reported by Gopalswamy (2004). The Table 2 shows that 1 (0.3%) HCMEs was associated with class A SXR flare, 14 (4.7%) HCMEs were associated with class B SXR flares, 87(29.4%) HCMEs were associated with class C SXR flares, 125 (42.2%) HCMEs were associated with class M SXR flares and 69 (23.3%) HCMEs were associated with class X SXR flares. Earlier Gopalswamy et al. (2010a,b) investigated the relation between HCMEs and SXR flares their results are presented in the form of Figure 2 which shows HCME counts of B, C, M, X-class flares. Komitov et al. (2010) showed that out of 14775 SXR flares observed during period 1996-2009 the number of class X SXR flares were 126 (0.85%), number of class M SXR flares were 1443 (9.77%) and number of class C SXR flares were 13206 (89.38%). This clearly shows that HCMEs are mostly associated with class M and X SXR flares but sometimes smaller SXR flares of class A, B, and C can also produce HCMEs.
On Kinematics of HCMEs
The investigation of Yashiro et al. (2004) indicates that the average velocity of HCMEs is twice normal CMEs and this is the main reason to make HCMEs a very special class of CMEs. In the present study, we have also found that the mean value of space speed of HCMEs is 1283 km/s while the mean value of accelerated HCMEs and decelerated HCMEs is 1173 km/s and 1349 km/s, respectively. As mentioned in section 1 of the paper, there is controversy about two types of CMEs and to examine it we have carried a comprehensive detailed study to address the question of whether two classes of CME events exist or not. The acceleration and speed of HCMEs are important tools to know the kinematics of HCMEs. The plot of the space speed of HCMEs and the acceleration of HCMEs is shown in Figure 4 and we try to fit a linear and two degrees polynomial equation to Figure 4 data and we found a very small value of the correlation coefficient as R < 0.2, clearly shows that there is no relation between the speed of HCMEs and acceleration HCMEs. Figure 4 also tells us that the most HCMEs with space speed < 1500 km/s has the accelerations < 50 to −50 (m/s 2 ) while HCMEs with space speed > 1500 km/s have higher accelerations > 50 to −50 (m/s 2 ) Thus we conclude that the there is no relation between the space speed of HCMEs and the acceleration of HCMEs. Figures 4 show that the space speed HCMEs and the acceleration of HCMEs do not follow linear or two-degree equations and thus ruled out the possibility of the origin of hallo CMEs as a single step process. Now to understand HCMEs in detail we investigated the HCMEs by dividing HCMEs as accelerating HCMEs and as decelerating HCMEs. Figure 5 shows the plot of the space speed of HCMEs and the acceleration of decelerating HCMEs and we also fitted a linear equation and two-degree polynomial to these figures data. We calculated the value of correlations coefficients R > 0.65 for the space speed of HCMEs and the acceleration of decelerating HCMEs, which indicate that the speed of HCMEs and the acceleration of decelerating HCMEs have a fair level of correlations. Similarly, Figure 6 shows the plot space speed of HCMEs and acceleration of accelerating HCMEs and after fitting a linear equation and two degrees polynomial to these figures data, we find the value of correlations coefficient as R > 0.60, which indicate that space speed of HCMEs and acceleration of accelerating HCMEs has a fair level of correlations. These suggest that the HCMEs are of two types: accelerating HCMEs and decelerating HCMEs.
As mentioned earlier Verma & Mittal (2019) investigated the origin of HCMEs and found that HCMEs observed when there were CHs and solar flares within 10 ∘ to 60 ∘ . They find that the 128 (40.8%) and 88 (23.6%) HCMEs events were observed when there were CHs and solar flares within 10 ∘ and 20 ∘ , respectively. Verma & Mittal (2019) are of the view that the HCMEs may have been produced by some mechanism, in which the mass ejected by solar flares or active prominences, gets connected with the open magnetic lines of CHs (source of high-speed solar wind streams) and moves along them to appear as an HCMEs, earlier suggested by Verma and Pande (1989) and Verma (1998Verma ( , 2002. Further, we are also of the view that CMEs formation is a two-step process: The first step, Triggering include releasing of materials by flares (etc.) involved in CMEs formation is a necessary condition while in the second step, the reconnection of the bipolar magnetic field of flares or active prominence region with an open magnetic field of CHs is a sufficient condition. The present work is the continuation of earlier work by Verma & Mittal (2019) and using the same set of data in the present investigation. As discussed above, we are of the view that the HCMEs are of two types: accelerated HCMEs and decelerated HCMEs and these HCMEs may be originated through the following mechanism: 1. Accelerated HCMEs may be originating through mass released by solar flares and early reconnection to open the magnetic field of coronal holes at a lower height in corona and moves as HCMEs to higher coronal height including earth and beyond. 2. Decelerated HCMEs may be originating through mass released by solar flares and late reconnection to open magnetic field of coronal holes at a higher height and moves as HCMEs to higher corona height including earth and beyond.
The above hypothesis is based on various investigations carried out in the preceding paragraph and we suggest that a detailed theoretical investigation should be carried out to understand this phenomenon. The mechanism involved in the origin of CMEs using the reconnection scenario is discussed by Verma (1998Verma ( , 2002 and Verma & Mittal (2019).
Summary and Conclusion
In this study, we have investigated 313 HCMEs observed during 1996-2012 by LASCO, coronal holes, and solar flares. We find that 95% of HCMEs associated solar flares are located between ±−30 ∘ solar latitude on the solar disk and 70% of HCMEs associated solar flares located between ±40 ∘ solar longitude on the solar disk. We also find that 1 (0.3%) HCME was associated with class A SXR, 14 (4.7%) HCMEs were associated with class B SXR flares, 87 (29.4%) HCMEs were associated with class C SXR flares, 125 (42.2%) HCMEs were associated with class M SXR flares and 69 (23.3%) HCMEs were associated with class X SXR flares. The speed of HCMEs increases with the importance of solar SXR flares. We also find that HCMEs are of two types: accelerated HCMEs and decelerated HCMEs. The mean space speed of HCMEs is 1283 km/s while the mean speed of decelerated HCMEs and accelerated HCMEs is 1349 km/s and 1174 km/s, respectively. | 5,616.8 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
The contemporaneous phase of GRB Afterglows — Application to GRB 221009A
The TeV observations of GRB 221009A provided us with a unique opportunity to analyze the contemporaneous phase in which both prompt and afterglow emissions are seen simultaneously. To describe this initial phase of Gamma-Ray Burst afterglows, we suggest a model for a blast wave with intermittent energy supply. We treat the blast wave as a two-element structure. The central engine supplies energy to the inner part (shocked ejecta material) via the reverse shock. As the shocked ejecta material expands, its internal energy is transferred to the shocked external matter. We take into account the inertia of the shocked external material so that the pressure difference across this region determines the derivative of the blast wave’s Lorentz factor. Applied to GRB 221009A, the model yields a very good fit to the observations of the entire TeV lightcurve except for three regions where there are excesses in the data with respect to the model. Those are well correlated with the three largest episodes of the prompt activity and thus, we interpret them as the reverse shock emission. Our best-fit solution for GRB 221009A is an extremely narrow jet with an opening angle θ j ≈ 0 . 07 ◦ (500 / Γ 0 ) propagating into a wind-like external medium. This extremely narrow angle is consistent with the huge isotropic equivalent energy of this burst and its inverse jet break explains the very rapid rise of the afterglow. Interestingly, photon-photon annihilation doesn’t play a decisive role in the best-fit model.
INTRODUCTION
Gamma Ray Bursts (GRBs), see e.g.(GRBs, see e.g.Piran 2004, for a review) are observed across the entire electromagnetic spectrum -from radio frequencies to TeV energies.Nevertheless, many important aspects of their physics remain poorly understood and every out-of-the-ordinary observation offers new insights.
GRB emission is divided into two phases: prompt and afterglow.The prompt emission is often rapidly variable with a complex lightcurve, whose structure is attributed to the activity of the central engine.It is followed by a gradually declining afterglow emission with a smooth lightcurve, which is attributed to a relativistic blast wave that forms when the material ejected by the central engine during the prompt phase interacts with the circumburst medium.It is natural to expect that the early afterglow (i.e., the emission from the external shock) overlaps in time with the prompt emission (see e.g.Zou & Piran 2010).However, this overlapping initial stage of afterglows is hidden from view: afterglows' X-ray signal is indiscernible against a much stronger prompt X-ray signal.
High-energy gamma-ray emission from GRBs was predicted before actual detections.From efficiency arguments, it was anticipated that GRBs have synchrotron-self-Compton (SSC) component in the TeV range and that it carries about 10 percent of the emitted energy (Derishev et al. 2000(Derishev et al. , 2001)).
Early attempts to estimate the parameters of the high-energy SSC component from the first principles placed it in the GeV band (Sari & Esin 2001).Numerous other efforts to estimate this emission (Zhang & Mészáros 2001b;Razzaque et al. 2004;Pe'er & Waxman 2005;Fan & Piran 2006;Fan et al. 2008;Panaitescu 2008) followed.Later the expectations were extended to the TeV band for dense stellar wind environments (Vurm & Beloborodov 2017).It was not until the appearance of the pair-balance model (Derishev & Piran 2016) that firstprinciples predictions of the strength of the SSC component became possible.The pair-balance model places the peak of the SSC component at around 1 TeV (rather insensitive to GRB parameters) and predicts that its power is always comparable to the power of the synchrotron component.
When Fermi-LAT detected early GeV emission from several GRBs, it was suggested (Kumar & Barniol Duran 2010;Ghisellini et al. 2010) that this is an afterglow and not prompt emission, even though there was some overlap of the observed GeV signal with the lower energy gamma-rays.However, there was not enough data to explore the contemporaneous prompt-afterglow phase.Recently, the TeV observatory LHAASO (LHAASO Collaboration et al. 2023) detected a strong TeV signal while numerous satellites (Frederiks et al. 2023;An et al. 2023;Lesage et al. 2023) measured a simultaneous low-energy gamma-rays from GRB 221009A.The lightcurve in low-energy gamma-rays shows the usual complex structure with several pulses and rapid variability, whereas the TeV lightcurve is smooth as expected for the signal from an expanding blast wave.It should be noted that GRB 221009A was the brightest so far, and there is a good chance that its signal-to-noise ratio in the very first unambiguous afterglow onset detection will remain unbeatable for a long time.
The unique observations of GRB 221009A have already attracted a lot of attention, and several authors attempted to explain different features of this burst (Zhang et al. 2023;O'Connor et al. 2023;Khangulyan et al. 2023;Ren et al. 2023;Sato et al. 2023).Here we focus on the fact that the observation of the initial afterglow stage, overlapping with the prompt phase, offers a unique opportunity to extract information that would be inaccessible by other means.In particular, this includes the density profile of the circumburst medium as well as some characteristics of the central engine, such as the width of the jet, the variability of its Lorentz factor, and the radiative efficiency during the prompt phase.
A model for the earliest afterglow has to take into account that the blast wave continuously acquires more energy from the central engine.So far, attention has been given to the behaviour of the reverse shock in the presence of the variable energy supply from the ejecta Nakar & Piran (2004, 2005); McMahon et al. (2006).This is natural as the reverse shock is the one that directly interacts with the ejecta.Less attention was given to the impact of variable energy supply on the forward shock.Indeed, models of blast waves with energy supply (and leakage due to radiation) have been known for a long time: the seminal Blandford & McKee (1976) paper describing a self-similar solution (hereafter the BM solution) of a constant energy blast wave also presented blast wave solutions with increasing energy.Various authors considered plain archetypal scenarios: continuous energy injection at a rate that varies as a power of time (Cohen & Piran 1999;Zhang & Mészáros 2001a;Sari & Mészáros 2000) or impulsive energy injection (Sari & Mészáros 2000;Kumar & Piran 2000).Other works simply considered a BM solution in which the energy increases as a function of time.However, these models, which assume that a self similar solution has already been established, are not suitable for the very early phase in which the reverse shock is still active and it is gradually transferring its energy to the forward shock.The case of GRB 221009A, with simultaneous measurements of both the central engine's activity and the emission from the blast wave fed by this activity, enables us to explore this stage.A suitable theory is necessary to use these observations and build a coherent physical concept of the earliest afterglow.
Our goal is to construct a minimalistic (i.e., as simple as possible) but functional (i.e., verified against observations) model for an expanding relativistic blast wave that is continuously fed with energy from the central engine at an arbitrary time-dependent rate.Observation of the earliest afterglow from GRB 221009A provides a testbed for such a model.Incorporating model elements one by one and comparing the theoretical results to the observations, we were able to justify that our model is indeed the minimal functional model.
Our model is based on the common reverse shock, forward shock and contact discontinuity structure.Namely, at the initial phase, the blast wave is composed of a shocked ejecta material and a shocked external material that are in pressure balance at the contact discontinuity between them.We include two new ingredients to this picture.First, noticing that the energy from the ejecta is supplied through the reverse shock to the shocked ejecta material which gradually pushes on the shocked external material, giving the forward shock additional energy, we consider the energy transfer between those two regions.Second, we explicitly take into account the inertia of the shocked external matter, by relating the blast wave's acceleration (or deceleration) to the pressure gradient inside.We treat both regions as single elements and therefore ignore all possible internal dynamics.Although we do not resolve the spatial structure of the forward shock region, the pressure difference across it is taken into account, being an essential constituent of the model.To reach a better quantitative agreement with observations at later times one needs to consider a narrow jet instead of a spherical outflow, or otherwise the luminosity decline is too slow.
This model provides a good fit to the measured TeV lightcurve of GRB 221009A, which likely makes it a valid choice for modeling the afterglow onset phase in other GRBs as well.In addition, the model serendipitously produced a fairly convincing case for emission coming from the reverse shock.The reverse-shock emission component turns out to be (i) much weaker than emission from the forward shock and (ii) in principle discernible in TeV lightcurves.Although the reverse shock contribution appears to be small in the TeV range, it may be stronger in other spectral bands (Sari & Piran 1999a,b;Mészáros & Rees 1999;Wang et al. 2001;Nakar & Piran 2004;Razzaque et al. 2004;Genet et al. 2007).
The paper is organized as follows.In Sect.
(3) we present the new model for relativistic blast wave with continuous and intermittent energy supply, and discuss solutions for afterglow lightcurves for several typical scenarios.In Sect.(4) we describe the method of lightcurve modeling.We then apply the model to TeV observations of GRB 221009A.We find an excellent best-fit solution and we analyze its properties.In Sect.(5) we list our most important findings also mentioning possible alternative interpretations.
GRB 221009A OBSERVATIONS
The prompt emission of GRB 221009A was so bright, that it saturated all the X-ray telescopes that attempted to detect it.However, Konus-WIND and ART-XC teams (Frederiks et al. 2023) were able to reconstruct the prompt lightcurve.combining their measurements.Figure (1), which is compiled from the Frederiks et al. (2023) data, presents the prompt lightcurve.A similar lightcurve was obtained by and GECAM-C (An et al. 2023).The three brightest pulses together contain about 99 percent of the entire energy release during the prompt phase of GRB 221009A.Their energy shares are in proportion 6:3:1.We will name these pulses P2, P3, and P4, following the notation of Frederiks et al. (2023).
TeV lightcurve of GRB 221009A was reported by LHAASO Collaboration et al. (2023) and is shown in the upper panel of Fig. (2).To make the lightcurve less noisy we combined the original data points between 10 s and 500 s into sets of three, while the other data points remain unchanged.Following LHAASO Collaboration et al. (2023), we measure time from T * ≡ T0 + 226 s, where the TeV signal appears first, with the sharp rise that is delayed by several seconds with respect to the beginning of the strongest pulse in the prompt Our assumption is that the prompt kinetic luminosity follows the count rate.lightcurve.Unlike the prompt low-energy gamma-ray emission, the TeV lightcurve is smooth and almost featureless.It has a rather sharp peak at approx.17 s and two barely distinguishable features (local excesses) in its decaying branch, one narrow feature at approx.40 s and one broad feature between approx.250 s and 600 s.
The drastic difference between the rapidly and strongly variable prompt low-energy gamma-rays and the smooth TeV lightcurve (the two do not show any apparent correlation) immediately led to the conclusion that, despite the temporal coincidence with the prompt low-energy gamma-rays, the TeV emission is due to afterglow and it arises from the blast wave expanding into the circumburst medium (LHAASO Collaboration et al. 2023).
For an afterglow, it is convenient to complement the lightcurve (i.e., temporal dependence of energy flux F (T )) with another plot for quantity E eff = 4πD 2 L F × (T − T * ), that is source luminosity multiplied by observer's time.For a self-similar blast wave, the bolometric luminosity is kin /t obs (Cohen & Piran 1999;Derishev 2023), where ϵr is radiative efficiency, E (iso) kin the shock's kinetic energy (isotropic equivalent), and the numerical coefficient C L is of the order of unity (it depends on the external density profile).Therefore, the quantity E eff has the meaning of the proxy to the blast wave energy (we will call this effective blast wave energy).It is plotted in the lower panel of Fig. (2).The effective energy reaches its very broad maximum at approx.70 s, i.e. much later than the lightcurve's maximum.Apparently, this fact reflects the history of energy injection into the blast wave.A subsequent decline in E eff may be either due to the deceleration of the structured-jet blast wave or due to radiative energy losses, or both.The former brings into view parts of the jetted structure that move at larger angles to the line of sight and have lower isotropic equivalent energy, thus reducing the average isotropic equivalent energy.The two features (excesses) in the lightcurve, described earlier, become more visible in the effective energy plot.
The model
Put in simple words, the observed lightcurve of GRB 221009A afterglow (see Fig. 2) poses two challenges for any blast wave model that attempts to describe it.First, the model has to explain the very rapid rise of the lightcurve, where the rise timescale trise is much smaller than the decay timescale t decay .Second, the lighcurve has a very simple shape, with its maximum located between two dominant episodes of prompt emission (these must be as well the main episodes of energy injection to the blast wave), showing no visible response to the second episode.With some caution, one can interpret this combination of features in the following way: the blast wave is much more responsive to energy input when it accelerates than when it decelerates.
In this section we describe the minimalistic blast wave model that can reproduce observations.Consider, first, the classical BM solution.The energy of the blast wave is: where Mej is the mass of the shocked ejecta, M fs the sweptup mass, Γ0 the Lorentz factor of ejected material at the moment of ejection, and the factor CE is a numerical factor that depends on the density profile of the external medium.
In the case of ISM (a constant density) circumburst medium C E = 6/17, and in the case of wind-like (ρ ∝ R −2 ) circumburst medium C E = 2/9.Radiative losses change the blast wave's deceleration law, thus altering the coefficient C E (Cohen et al. 1998).We restrict our analysis to power-law density profiles (ρ ∝ R −k ) that allow us to relate the swept mass and the local density (2) It is straightforward to take into account energy injection by the kinetic energy of the ejecta and radiative losses (generalizing Cohen & Piran 1999): Here ϵr is the fraction of shocked gas energy that is converted into radiation and is the difference between blast wave propagation time and free coasting time for the ejected material, calculated in ultrarelativistic approximation.Note that if Γ0 depends on tinj, then Eq. ( 4) may have more than one solution.In this case, internal shocks form within the ejected material at the coasting phase.We do not consider this possibility.
The model described by Eqs.(1-4) fails to explain the afterglow of GRB 221009A -it proves to be too responsive to energy injection and it always produces a prominent bump following the second episode of prompt activity.The reason for such behavior is the absence of inertia: whenever energy is injected into the blast wave, the latter immediately reacts, increasing its Lorentz factor.Dermer & Humi (2001), revisited by Nava et al. (2013), suggested that the rate at which the blast wave's internal energy changes with distance, rather than the energy itself, determines the variation of the Lorentz factor.In its original form, the model's limit to the Lorentz factor growth is Γ sh ∝ R 3/4 .However, these works contain a tricky flaw.The authors take the change of the entire comoving volume of the shocked gas (∝ R 3 /Γ sh ), to measure the comoving volume change of an individual fluid element.But, along the radial flow lines, the comoving volume of a fluid element scales as R 2 Γ sh (from the relativistic continuity equation).This alters the anticipated blast wave dynamics.With the correct expression, this model leads to Γ sh ∝ R, which is the common acceleration law of a hot fireball.Even this acceleration is too slow to explain the fast rise in GRB 221009A.
We adopt a different approach.Noting that in GRB 221009A the afterglow was caught in its rising phase when there is an energy supply from the central engine and the blast wave still builds up, we recall that at this stage, the system is enclosed between two shocks (Sari & Piran 1995) -a forward shock that propagates into the external medium and a reverse shock that propagates into the ejecta (illustrated in Fig. 3).A contact discontinuity separates the shocked external matter and the shocked ejecta1 .Energy is supplied to the blast wave from the jet through the reverse shock.As the reverse shock and the forward shock regions are connected via a contact discontinuity, the internal energy of the reverse shock region lost in its adiabatic expansion is transferred to the forward shock region.The forward shock region guides the evolution of the blast wave's Lorentz factor.Unlike other models, we explicitly take into account the inertia of the shocked material so that the acceleration (or deceleration) rate is given by the average pressure gradient acting on the swept mass.
We provide here a simple analytic model for this system.While this model ignores the internal dynamics within the two shocked regions, such as possible appearance of additional shocks within them, the model fits nicely, as we show here, the observations of GRB 221009A.Our approach essentially averages the hydrodynamic equation of motion (in Lagrangian formulation) over the shocked external material.A fluid element in its locally comoving frame experiences acceleration a = −c 2 ∇p/w, where p is pressure and w specific enthalpy (w = 4p for a relativistic equation of state).Then it's Lorentz factor, measured in the lab frame, changes as dΓ el / dR = a/c 2 .
Averaging over the external shocked material, we obtain a differential equation that describes the evolution of the forward shock Lorentz factor (which is √ 2 times larger than the bulk Lorentz factor of the shocked gas): Here, ∆R ′ is the width of the forward shock zone in the comoving frame.
The average enthalpy ⟨w⟩ can be expressed in terms of the energy of shocked external material E fs The pressure at the outer boundary is determined from the jump conditions at the shock, Substituting Eqs. ( 6), ( 7), and (2) into Eq.( 5) we obtain When the energy of shocked external material is equal to the energy of the self-similar BM solution, i.e.E fs = E BM ≡ C E Γ 2 sh M fs c 2 , then Eq. ( 8) must reproduce the behaviour of this self-similar solution, i.e. (R/Γ sh ) (dΓ sh /dR) = (k − 3)/2.This constrain allows us to derive pin pout The pressure ratio pin/pout for a general solution is different from Eq. ( 9).To estimate it, we note that E fs ∝ (pin + pout) and pout depends only on the local density and Lorentz factor of the forward shock.Then we calculate the ratio pin/pout for any value of E fs from the relation We combine Eqs. ( 8), ( 10) to obtain the equation that describes evolution of the forward shock Lorentz factor in its final form: In the case of ISM circumburst medium C A = 34/3 and in the case of wind-like circumburst medium C A = 6, so that Eq. ( 11) allows for a very rapid increase of the Lorentz factor, explaining the observed trise ≪ t decay . .The equality in Eq. ( 11) is approximate if the solution is not exactly self-similar, and hence, we consider C A as a semiempirical adjustable parameter.The best-fit model may require a value of C A that is somewhat different from the value given in Eq. ( 11).When fitting GRB 221009A (see Sect. 4.2), we vary this coefficient and find that the calculated lightcurve is rather sensitive to the adopted value of C A .
To estimate E fs we recall that energy transferred from the free coasting jet is transferred first to the shocked ejecta matter.This region has a kinetic energy Γ sh Msec 2 and an internal energy Ese.As the blast wave propagates, the expanding shocked ejecta does work on the adjacent region of the shocked external gas.In this process, the internal energy of the shocked ejecta, Ese, decreases, and the internal energy of the shocked swept material, E fs , increases by the same amount.We calculate the internal energy change through the relativistic adiabatic equation, pV 4/3 = const, together with expressions for internal energy (enthalpy), Eint = 4pV , and for the comoving volume, V ∝ R 2 Γ sh .Consequently, the equations for shocked ejecta 's mass, Mse, and internal energy are dMse dR = L kin Γ0c 2 dtinj dR ( 12) and We take Γ sh for the shocked ejecta Lorentz factor.In an exact hydrodynamic solution, it is approximately equal to the Lorentz factor of material at the contact discontinuity, which can be estimated through the Lorentz factor of shocked material immediately behind the forward shock, Γ sh / √ 2. However, an attempt to account for the difference of the shocked ejecta Lorentz factor from Γ sh would be fundamentally inconsistent with our treatment of the whole blast wave as a single object with zero width.
The first term in Eq. ( 13) describes energy injection from the central engine (minus energy associated with the rest mass of the reverse shock region, and minus energy radiated at the reverse shock).The second term describes the adiabatic losses.In our model for GRB 221009A we neglect radiative losses at the reverse shock, setting its radiative efficiency ϵ (rs) r to zero.This assumption is subsequently justified by the best-fit lightcurve model (see Sect. 4.3).
The energy lost in the adiabatic expansion of the shocked ejecta is transferred to the forward shock zone, and it is partially radiated.From the energy balance equation for the entire blast wave (see Eq. 3), Here the last term is the energy radiated from the reverse shock.Substituting into it Eqs.( 12), we calculate the energy of the forward shock zone Note an important difference between the integral equation for E fs and the differential equation for Ese: the first term in Eq. ( 15) refers to local value of Γ sh at a given distance, whereas the first term in Eq. ( 13) and the last term in Eq. ( 15) imply integrating Γ sh over distance.If Γ0 = const, then the derivative of the injection time over the blast wave radius is simply If Γ0 varies in time, then one has to solve Eq. ( 4) first and then differentiate.This expression completes the model's set of equations.
Dimensionless equations of the model
For convenience, we summarize all the model's equations, rewriting them in a dimensionless form to reveal the model's independent parameters.An afterglow is characterized by two dimensional and two dimensionless parameters.The first two are the isotropic equivalent kinetic energy, E kin and the dimensional deceleration time The latter is related to deceleration distance R d defined from where is the deceleration mass.One may equivalently choose density as the second dimensional parameter, but this is less convenient when the density varies with distance.
The two dimensionless parameters are the radiative efficiency of the blast wave ϵr (plus radiative efficiency of the reverse shock, ϵ (rs) r , if needed) 2 and the reduced opening angle of the jet Γ0θj, where θj is the effective opening angle of the jet.We use a Gaussian jet profile, but other choices also work with the model.Neither θj nor Γ0 is an independent parameter alone.The former only enters the lightcurve model through the reduced opening angle in combination with Γ0, whereas the latter also enters the model's equations through t d , R d , and M d , but never explicitly.For simplicity, we consider the Lorentz factor of the ejecta Γ0 as constant.Again, this is consistent with GRB 221009A observations.In addition to the source's parameters, the parameter k defines the density profile of the circumburst medium, ρ ∝ r −k , which can be either wind-like (k = 2) or a constant density (k = 0).Finally, C A is a model's adjustable factor, for which we test in our fitting procedure values around the estimate given in Eq. ( 11).
We introduce the following dimensionless variables: The dimensionless energies A dimensionless time A normalized source's kinetic power A dimensionless blast wave radius A dimensionless swept mass 2 In our simulations we assume a constant radiative efficiency.This is consistent with GRB 221009A observations.At the same time, the equations that we derive are valid for any ϵr(R) dependence.Judging from our model for GRB 221009A, the radiative efficiency of the reverse shock can be neglected in models of blast wave dynamics.
We will also use a normalized Lorentz factor The dimensionless equations of our blast wave model are: The injection time equation (from Eq. 16) The shocked ejecta energy equation (from Eq. 13) The forward shock zone energy equation (from Eq. 15) The blast wave Lorentz factor equation (from Eq. 11) A dimensionless swept mass -radius relation Equation ( 12) for the reverse shock region mass is auxiliary.The jet's kinetic power L kin (t) is not directly observed.Our assumption throughout this paper is that the released kinetic energy is proportional to the GRB's prompt luminosity.
Application to narrow jets
The model that we construct here is one-dimensional.Although the immediate application is for spherically symmetric explosions, it also works without any change in a situation when the flow lines are exactly radial.This is a good approximation if the blast wave's angular scale (set by the jet's opening angle θj) is much larger than 1/Γ sh , because the flow lines separated by an angle larger than 1/Γ are causally disconnected.All one needs in this case is to calculate the model's dimensional parameters (E (iso) kin , t d , R d , and M d ) as functions of angle in accordance to the jet's angular profile, and then solve the equations independently for each direction.
A blast wave that has decelerated to Γ sh < 1/θj has to be treated with caution.It has been suggested that in this case, the shocked material may expand sideways with near-sonic velocity, so that at later times the blast wave keeps its opening angle approximately equal to 1/Γ sh and decelerates exponentially with distance (e.g., Rhoads 1999;Sari et al. 1999).This theoretical conclusion was employed to explain some observations where afterglow lightcurves become steeper after some time.Such lightcurve breaks are now commonly considered as a manifestation of change in the blast wave's deceleration law at the moment when Γ sh ∼ 1/θj, hence the name -jet breaks.Subsequent numerical simulations, however, showed a much slower than anticipated lateral evolution (e.g., Kumar & Granot 2003).This is supported by several analytic works (see e.g.Granot & Piran 2012;Govreen-Segal & Nakar 2023).
No matter when exactly the lateral expansion starts to dominate the blast wave's deceleration law, the jet break in afterglow lightcurves always occurs when Γ sh ∼ 1/θj.Even in the absence of sideways expansion, starting from this moment, an observer sees the blast wave filling only a part of the visible zone, which is restricted to the cone with opening angle ∼ 1/Γ sh around the line of sight (the outer parts appear negligibly faint because of rapidly declining Doppler factor).So, the apparent luminosity (and the equivalent isotropic energy) is smaller compared to a spherical blast wave.We find (see Sect. 4.2) that the observed GRB 221009A lightcurve is best-fitted with a narrow jet whose opening angle is θj ∼ 1/Γ0.Moreover, our model, which assumes radial flow lines and thus ignores lateral spreading of the blast wave, demonstrates very good agreement with observations.This suggests that relativistic blast waves are indeed not so prone to lateral expansion and may preserve their original angular structure even after they decelerate to Lorentz factors less than 1/θj.
The different types of solutions
To explore the possible lightcurves expected with our model, we consider a GRB whose prompt emission has a single pulse of triangular shape and a total duration of t GRB .Again, we assume that the jet's kinetic power is proportional to the prompt luminosity, and we calculate the afterglow lightcurves for six qualitatively different scenarios.These include: very slowly decelerating blast wave with t d = 100t GRB (we will call this situation a lagging afterglow), a rapidly decelerating blast wave with t d = 0.01t GRB (we will call this situation fast afterglow), and an intermediate case with t GRB = t d .We repeat this for wind-like circumburst density profiles (we will call these lightcurves wind-type) and for constant-density circumburst media (ISM-type lightcurves).In all cases, we compare lightcurves from narrow on-axis jets with Γ0θj = 3 to lightcurves from spherically symmetric outflows (or very wide jets).
Our model lightcurves are bolometric, calculated with the theoretical value for the coefficient C A (see Eq. 11).In this section, we don't take into account photon-photon annihilation, which is included in the models fitted later to GRB 221009A.
Figure ( 4) shows wind-type lightcurves.The most remarkable feature is that all these lightcurves peak at t ≈ t GRB , and the tendency to increase the peak time with increasing t d is almost unnoticeable.However, the ratio t d /t GRB significantly influences the shape of the peak: lagging afterglows have very broad peaks, whereas fast afterglows have relatively sharp peaks.In all cases, the shape of the prompt pulse has little influence on the afterglow's lightcurves.Note that in jet models, the lightcurves not only may decay faster compared to lightcurves for spherically symmetric models, but may also have a faster rise.This is a manifestation of the same effect as in jet breaks -when the shock's Lorentz factor is smaller than 1/θj, then the blast wave does not fill the entire cone potentially visible to the observer.This reduces the averaged over visible region shock power and decreases the apparent brightness.Dimming due to this effect becomes unimportant when the accelerating blast wave starts to fulfill the condition Γ sh ≳ 1/θj, after this moment, the lightcurve's behaviour is similar to the isotropic case.We will call this phenomenon the inverse jet break.1.In all models, the radiative efficiency is ϵr = 0.2.Models with jetted outflows are for the Gaussian jet profile with a reduced width Γ 0 θ j = 3 and an on-axis line of sight.E iso,0 is the isotropic equivalent energy on the jet's axis.The effective energy plots for the same models (see Fig. 5) show a qualitative difference between them.In particular, intermediate and lagging afterglows have a very broad maximum/plateau phase, which is shifted to a later time compared to the lightcurve's peak.
Figure ( 6) shows ISM-type lightcurves.Unlike the windtype lightcurves, the peak location depends on the blast wave deceleration time and can be estimated as ≈ min(t GRB , 0.1t d ).Overall, the shape of the lightcurves' peaks follows the same pattern as for wind-like lightcurves -fast afterglows have sharp peaks and lagging afterglows have broad ones, though they never become as broad as in windtype lightcurves.In the effective energy plots (see Fig. 7), we again see the formation of a very broad maximum/plateau phase, for lagging afterglows.However, in the ISM-type solutions, and this is an important difference from the wind-type solutions, the plateau phase shifts to a later time together with the lightcurve's peak.In other words, if the plateau phase is present, then both its beginning and its end are delayed with respect to the peak of the prompt emission.
Figure 6.ISM-type lightcurves for afterglows with different t d /t GRB ratios.The prompt luminosity is scaled down by a factor 0.1.In all models the radiative efficiency is ϵr = 0.2.Models with jetted outflows are for Gaussian jet profiles with a reduced width Γ 0 θ j = 3 and an on-axis line of sight.E iso,0 is the isotropic equivalent energy on the jet's axis.8) presents separately a wind-type lightcurve that illustrates all features that we expect: the inverse jet break at an early time, then the peak, then the (regular) jet break.
As expected, the jet breaks are more pronounced in the ISM-type lightcurves.The inverse jet breaks are more pronounced in the wind-type solutions.Indeed, for an inverse jet break to appear there must be a phase of blast wave acceleration, and it readily occurs in wind-like density profiles.The existence of such a phase in a constant-density surrounding is possible only if the central engine's power as a function of time satisfies some conditions.Note that very narrow jets may have their inverse jet break coincident with the lightcurve's peak.Comparison of simulated wind-type bolometric lightcurves for spherical blast wave and for a jet with reduced opening angle Γ 0 θ j = 3.The thick gray polygonal chain follows the lightcurve for the jet solution and highlights its main features.
Lightcurve computation
We calculate jet's kinetic power under the assumption that it is proportional to the prompt luminosity, i.e.L kin = (1/η rad − 1) Lpr, where η rad is the radiative efficiency at the prompt phase.In practice, we use the count rate shown in Fig. 1 to measure Lpr and hence L kin .
The lightcurves that we calculate are bolometric, and we take into account photon-photon annihilation that arises from interaction of afterglow photons with prompt photons.The jet is structured with a constant Lorentz factor but variable kinetic power as a function of θ.The blast wave follows the angular profile of the jet, that we assume to be Gaussian.
To compute a model lightcurve we split the blast wave into many small segments, each of them subtending an angle θseg ≪ θj.Then we sum signals from all segments to obtain lightcurve for the whole blast wave.
For every segment, we calculate Γ sh (R, θ) dependence along the flow line that goes through the center of this segment at angle θ to the jet's axis.To do this we solve dimensionless equations (26, 27, 28, 29, and 30), thus we in effect ignore sideways expansion of the blast wave.Dimensional quantities are recovered by applying angle-dependent scale factors according to We consider a Gaussian jet profile and a jet's axis aligned with the line of sight.An off-axis geometry does not result in a measurable improvement of the lightcurve fit, but it increases the already large central engine's power estimate.We assume a constant and independent on the angle jet's Lorentz factor, and the same temporal The jet width correction factor fw, defined as the ratio of the apparent luminosity of a jet observed on-axis to the isotropic equivalent luminosity calculated on the jet's axis (Eq.34), for a Gaussian jet profile (Eq.32) and in the ultrarelativistic limit.The difference of the function fw(Γ 0 , θ j ) from its ultrarelativistic limit profile for L kin (t) in all directions, which is to be rescaled into dimensionless time to obtain L(τ ) that enters the equations.
Since we do not limit our study to the case Γ0θj ≫ 1, we apply jet width correction factor to relate the apparent isotropic equivalent luminosity (and hence energy) to actual on-axis luminosity (energy): Once the hydrodynamic evolution of the blast wave is calculated along all necessary directions, we proceed with calculating the bolometric lightcurve.This is done by Monte Carlo method.Namely, at every timestep we generate a photon at a random, isotropically distributed direction in the shocked matter comoving frame and assign a weight that equals the comoving-frame energy of the shocked mass element times the radiative efficiency, i.e., where dΩ is the solid angle subtended by the blast wave segment.In the next step, we apply a Lorentz boost into the lab frame (with Lorentz factor Γ sh / √ 2) and check if the photon moves at a small enough angle with respect to the line of sight (other photons are rejected).If the photon passes the direction check, its weight is multiplied by the Doppler factor, which derives from Lorentz boost from the comoving frame into the lab frame.Finally, we propagate the photon from its place of origin to the observer, calculating the probability of two-photon annihilation with another photon along its path.The main source of opacity is the prompt emission photons, for which we take the actual lightcurve (see Fig. 1) and use a spectral fit with a Band function suggested for the largest prompt emission pulse (α = −0.76,β = −2.13,E peak = 3.038 MeV, (see Frederiks et al. 2023)).Unless stated otherwise, we propagate 1 TeV photons.Those of them that escape without interaction contribute to the lightcurve.
We fit the simulated lightcurves to the set of 9 reference points shown in the upper panel of Fig. (2).The best fit is obtained by varying two parameters, C A and t d , for several values of the reduced jet width, Γ0θj.In all simulations, we keep the same radiative efficiency ϵr = 0.2.We adjust the TeV radiative efficiency, ϵ (TeV) r , to match the observed TeV fluence, varying it between different simulations but keeping it constant throughout each individual simulation.This assumption of a constant ϵ (TeV) r is motivated by the lack of statistically significant spectral evolution in the TeV part of GRB 221009A spectrum (LHAASO Collaboration et al. 2023).It is confirmed by the fact that this assumption leads to a good lightcurve fit.
Calculating the energy injection rate we use the entire lightcurve from Fig. ( 1), and all its parts are important.Even the very weak initial pulse at the trigger time (at T * − 226 s) plays a role.By the beginning of the main phase of the central engine's activity, the blast wave started by the initial pulse already has a measurable mass.This slows blast wave acceleration and delays the afterglow's rise.
Wind models
Wind-type models produce very good fits to the data.Table 1 summarizes the parameters of the best-fit lightcurve models for several values of a reduced jet width.Good solutions not only exist but also show a degeneracy with respect to the reduced jet width.All narrow jets produce similarlylooking solutions, where the main difference is at early time -the smaller is jet width, the faster is the rise in the model lightcurve.Narrower jets require a larger normalized efficiency of TeV emission and are closer to expectations in this respect.Wider jets with Γ0θj > 1 result in progressively worse fits as the reduced jet width increases.The inverse jet break, discussed in Sect.(3.4) and seen better in the windtype model solutions, contributes to the (a priori unexpected) fast rise of the lightcurve.
Our reference solution with Γ0θj = 0.6, C A = 7.0, and t d = 130 s is shown in the top panel of Fig. (10).To illustrate influence of the adjustable parameter C A , we plot the reference solution together with two others, obtained with a smaller and a larger values of C A .The bottom panel of this figure compares the reference solution with a few others.Figure (11) shows how the blast wave's Lorentz factor evolves with observer time (i.e., the arrival time for photons emitted along the shock's normal) for the reference solution.The figure shows only a moderate decline in the Lorentz factor, about a factor 2.5 from peak to T = 3000 s.This is consistent with the absence of spectral evolution in this temporal range.
The reference lightcurve model, as well as other good fits, has discrepancies with observational data in three regionsat an early time (T < 9 s), in the interval between ≈ 33 s and ≈ 40 s, and in the interval between ≈ 250 s and ≈ 650 s.In Sect.(4.3) we argue that the discrepancies are not due to the model's deficiency but rather due to an additional signal from another source of TeV photons that closely follows the prompt activity.Two effects mainly contribute to the puzzling fast rise of the TeV lightcurve.One is the rapidly increasing Lorentz boost at the time when energy associated with the largest pulse of the prompt emission is supplied to the blast wave and the latter starts to accelerate in response.Another equally important, factor is the inverse jet break phenomenon that we discussed in Sect.(3.4).At a very early time, when the blast wave accelerates, the observer's effective viewing angle decreases, the blast wave's emitting zone fills a larger fraction of the effective viewing angle, and the equivalent isotropic energy increases.This continues until the blast wave's Lorentz factor grows above 1/θj.Later on the emitting region fills the entire observable angle until the blast wave starts to decelerate and its Lorentz factor drops below 1/θj thus producing the usual jet break.For narrow jets, the inverse jet break phase continues all the way to the lightcurve's peak.
The observed lightcurve is also shaped by photon-photon annihilation between the outgoing afterglow photons and those from the prompt emission.The magnitude of this effect can be seen from Fig. ( 12), where we plot the fraction of escaping photons (i.e.absorbed luminosity divided by unabsorbed luminosity) as a function of time.For the reference solution (actually for all narrow jet solutions) the effect of absorption is moderate, counter to what one may naively expect.This happens for several reasons, all rooted in the fact that the target prompt photons move unidirectionally, straight along radial lines.First, the interaction rate per unit distance is proportional to (1 − cos θ), where θ is the propagation angle of the afterglow photon.The usual estimate, θ ∼ 1/Γ sh , holds for jets with not too low reduced opening angle, namely for Γ sh θj ≳ 1. Otherwise one has to use the estimate θ ∼ θj instead, thus reducing the interaction rate and increasing the pair creation threshold at the same time.This effect is clearly visible when the lightcurves calculated for different jet opening angles are compared -lightcurves for wider jets have stronger absorption features.Second, the angle between the photon's momentum and the radial direction decreases as 1/R 2 when the photon propagates outwards and, after integrating over the distance, this introduces an additional factor 1/3 to the optical depth estimate due to reduced interaction rate alone, plus the pair creation threshold rises accordingly.Finally, the photons that are emitted (in the shock-comoving frame) within a small solid angle Ω around the radial direction have their optical depth lower by the factor Ω/(4π) compared to an average photon.Hence an optical depth τ abs results in only (1 + τ abs ) −1 rather than exponential suppression.
Constant density models
Models with a constant-density circumburst medium produce significantly worse lightcurve fits compared to wind-type models.Since the peak in ISM-type solutions is determined by the blast wave's deceleration time (see Sect. 3.4), an ISMtype model for the GRB 221009A afterglow must have small t d .In order to reproduce the plateau phase in the effective energy plot, clearly visible in the observational data (see bottom panel of Fig. 2) and extending to ≈ 120 s, a jet with a large reduced opening angle is necessary, so that the jet break is sufficiently delayed.Our search for a good fit revealed exactly this -the best ISM-type solution has parameters Γ0θj = 7, C A = 14.2, t d = 4.5 s, and Γ0 = 1000; it is shown in Fig. ( 13).The ISM-type solution is obviously a much worse fit compared to our reference wind-type model -the residuals are much larger, especially around the peak and before it.Moreover, these residuals don't correlate with the prompt activity and have regions of both excess and deficit.The most likely interpretation of such residuals is that constant-density models are not valid for GRB 221009A.Note also that by the time the TeV observational data end (at approx.3700 s) the blast wave's Lorentz factor in the best-fit ISM-type model drops to about 7 percent of its value at the lightcurve's peak.It is unclear whether this dramatic change is consistent with the absence of spectral evolution in the TeV band.Note that the count rate (but not the residuals in this region) in the last pulse (P4) is not to scale -it is multiplied by an additional factor of 5 for better visibility.When integrating the excess over time, we find that the excess associated with P4 is larger by a factor ≈ 3 than the excesses associated with P2 and P3 combined.
Residuals: possible evidence for the reverse shock emission
In Fig. ( 14) we plot the residuals for the reference lightcurve model Γ0θj = 0.6, C A = 7.0, and t d = 130 s together with appropriately scaled count rate for the prompt phase.There are three regions of statistically significant excess in observational data as compared to the model.They follow with a short delay the three largest prompt count rate pulses (P2, P3, and P4).The most natural interpretation is that the excesses are signals from the reverse shock.The brightness of the reverse shock is expected to follow the pattern of the central engine's activity (see Eq. 13), and the delay is due to the time required for the jet material to reach the reverse shock location.Interpreting the excesses as radiation from the reverse shock, we can compare its TeV radiative efficiency to that of the forward shock -in the case of P2 and P3, the reverse shock appears to be more than an order of magnitude less efficient.
The lightcurve's response to the last (P4) pulse of the prompt emission is disproportionally strong.The integral excess associated with P4 exceeds the aggregated P2 and P3 excess by the factor ≈ 3.If this is the reverse shock signal, then the radiative efficiency of the reverse shock goes up drastically at this moment, reaching a value typical for the forward shock.Alternatively, this may be due to a larger ratio of kinetic power to the jet's prompt luminosity at P4. Finally, it is also possible that at this late stage the blast wave's structure is substantially extended and the interaction of the additional kinetic energy of the jet with the blast wave takes a different form.
Another feature worth mentioning is the response to the double-peaked pulse P2.In the residuals, there is a clear signal associated with the second peak, whereas the first peak manifests itself by a much weaker signal at T ≈ 1.7 s.Most likely, the intrinsic responses to the first and the second peak of P2 are comparable, but the former appears greatly attenuated due to the inverse jet break effect.At the time when the response to the first peak of P2 should appear, the blast wave has a significantly smaller Lorentz factor and the radiation is beamed into a wider angle thus reducing the apparent brightness.This explanation implies that the jet's reduced opening angle is indeed small, Γ0θj ≲ 1, in agreement with the parameters of our reference solution.
While the residuals for the reference wind-type solution are correlated with the prompt activity and allow interpretation as reverse shock signal, the much larger residuals for the ISM-type fit do not show any correlation with the prompt activity and can be interpreted as a drawback of the ISM-type solutions.
Inverse Compton of the prompt radiation by the external shock
TeV photons in the afterglow emission originate from Comtonization of lower-energy synchrotron photons and both components are radiated by the shock-accelerated electrons.
If the afterglow overlaps with the prompt emission, the prompt photons are present at the place (and time) where the afterglow photons are produced.The prompt radiation is external with respect to the emission zone in the blast wave.So, the standard synchrotron-self-Compton model is to be complemented by an external Compton component.The impact of the external Compton component depends on how large is the energy density of prompt radiation in comparison to the energy density of synchrotron radiation from the shock-accelerated electrons.
For an estimate, consider a GRB with duration t GRB and an isotropic equivalent radiated energy E (iso) rad ≡ η rad E (iso) GRB .In the comoving frame of the post-shock material the energy density of the prompt radiation, calculated via average luminosity, is It vanishes at large distances, where the blast wave delay with respect to light propagation time exceeds t GRB .The energy density of the post-shock material itself (in the comoving frame) is So, the ratio of these energy densities is For estimation purposes, we relate the blast wave's radius R to observer's time t obs as R = f R Γ 2 sh ct obs , where the factor f R takes into account evolution of blast wave's Lorentz factor.3Then we re-write Eq. ( 38) in terms of normalized dimensionless variables (γ, m, and τ ) introduced in Sect.(3.2): Here we calculate the kinetic energy of the GRB's ejecta as The value of ϵpr is to be compared with the energy fraction in synchrotron radiation produced by the afterglow ϵsy, and the latter is always smaller than the energy fraction in the accelerated electrons ϵe.
The value of ϵpr (given by Equation ( 39)) depends on whether the afterglow is fast (t d < t GRB , i.e. τ GRB > 1) or lagging (τ GRB < 1), and on density profile of the circumburst medium.
For a lagging afterglow in constant-density environment γ ∼ 1 and m ≈ τ 3 obs , that gives ϵpr ∼ 1/ τ GRB τ 2 obs > 1/τ 3 GRB ≳ 1.In this case ϵpr is largest at the beginning of the prompt phase and lowest towards its end.It increases dramatically for very lagging afterglows.
For a fast afterglow in constant-density environment the blast wave decelerates for most of the time.During the deceleration phase γ 2 m ≈ τ obs /τ GRB , that gives ϵpr ∼ 1.In the earliest phase, at τ obs < 1/ √ τ GRB , additional factor τ 2 obs τ GRB −1 applies.
To sum up, in every possible situation external (i.e.prompt) radiation dominates the energy density in the afterglow's emitting zone during the earliest phase, when afterglow emission overlaps with the prompt emission.From our reference model for GRB 221009A we estimate that in this burst τ GRB ≈ 0.1 (for the largest pulse), and therefore ϵpr is about two orders of magnitude larger than ϵsy (assuming fast cooling, otherwise the disproportion is even larger).Nevertheless, the observed TeV lightcurve well matches the simulated bolometric lightcurve, as if the overwhelmingly large energy density of the external radiation had no influence on Comptonization.A possible solution to this puzzle is strong Klein-Nishina suppression of the external Compton component.If the prompt radiation spectrum follows a Band function with a low-energy photon index α and peak energy E peak , then locating the afterglow synchrotron peak below Ep,sy ≲ ϵsy ϵpr ensures that Klein-Nishina suppression is strong enough to make external Compton unimportant compared to self-Compton.For the parameters of GRB 221009A (α = −0.76 and E peak = 3.038 MeV, see Frederiks et al. (2023)) and assuming ϵsy = 0.15, Eq. ( 40) implies Ep,sy ≲ 100 keV.This limit is rather constraining, but at the same time, it is not too stringent even for the earliest afterglow.Comptonization of ∼ 100 keV afterglow synchrotron photons, and even more so ∼ 3 MeV prompt photons, is indeed expected to be in the Klein-Nishina regime.Assuming Γem = 300 for the Lorentz factor of the emitting zone, we find that electrons producing 1 TeV inverse Compton photons must have the Lorentz factor γe ≳ 10 4 in the emitting zone comoving frame.This means that possible target photons are being upscattered in the Klein-Nishina regime if their energy exceeds 1 TeV/γ 2 e ≈ 10 keV (observer's frame).
DISCUSSION
Our most important findings are the following: • We find that a wind-type solution is strongly preferred over an ISM-type solution.Our best-fit solution has a narrow jet with an opening angle of θj = 0.6/Γ0 and a deceleration time t d = 130 s.For Γ0 = 500 the estimated density at the deceleration radius, R d ≈ 2 ly, is n d ≈ 3 × 10 −3 cm −3 .This corresponds to Ṁ ≈ 1.2 × 10 −6 M ⊙ /yr with V wind = 3 × 10 3 km/s.The density at the deceleration radius is proportional to Γ −8 0 and the wind's mass-loss rate estimate is proportional to Γ −4 0 .These strong dependencies enable us to limit the range of the initial Lorentz factor: 300 ≲ Γ0 ≲ 800.Here the lower limit comes from the requirement Ṁ < 10 −5 M ⊙ /yr and the upper limit from the requirement n d > 10 −4 cm −3 .
• We find three time intervals, where the observed lightcurve has statistically significant deviation from the theoretical one.All these deviations are excesses, all are well correlated in time with the three largest pulses of the prompt emission, all are slightly delayed with respect to the prompt pulses Combination of these facts motivates us to interpret the residuals as additional emission component from the reverse shock.If so, then the radiative efficiency of the reverse shock is in general much lower than that of the forward shock, being (in the TeV band) about 30 times less.
However, the last of the excesses, which presumably results from the last episode of the central engine's activity, apparently demonstrates a much higher radiative efficiency comparable to that of the forward shock.This may be a signature of a more efficient particle acceleration mechanism switching on when the reverse shock becomes stronger.Another possible interpretation of the excesses is that they result from the interaction of the jet's plasma component with free neutrons in a way suggested by (Derishev et al. 1999): the neutrons released by the jet close to its origin subsequently decay at some distance between the central engine and the reverse shock and their decay products interact with the casting jet's material.The time delay in this case is attributed to the apparent decay time of free neutrons ≈ 880 s /Γn.Interpretation of the excesses as TeV component of the prompt emission itself is the least, rather remote, possibility: in this case it is difficult to explain the systematic time delay, and the jet's Lorentz factor needs to be very large (Γ0 > 1000) to avoid two-photon annihilation of the prompt TeV photons." • We discovered that the jet of GRB 221009A is extremely narrow, its reduced width is Γ0θj ≈ 0.6.Given a Lorentz factor of ∼ 500 this corresponds to 0.07 • .This value is smaller by an order of magnitude than the value of 0.8 • derived by estimating a jet break in a constant-density surroundings by LHAASO Collaboration et al. (2023).This finding is unexpected but, besides being the best-fit solution, it is supported by several independent arguments.A likely explanation for the transition from fast rising phase to a slowly rising phase in the very early afterglow is the inverse jet break that occurs when the Lorentz factor of accelerating blast wave exceeds 1/θj which happens close to the maximum value of the Lorentz factor.
An alternative explanation for the rising phase as the result of photon-photon annihilation at early time cannot explain the entire amplitude of this phase or otherwise the absorption will form a strong depression (see Fig. 12) at the time of the second highest pulse in the prompt emission between 40 and 70 s from T * , which is not observed.Wider jets show a stronger photon-photon annihilation (as long as Γ sh θj ≲ 1).The absence of a corresponding absorption feature effectively sets a limit on the actual jet width, θj < 5 × 10 −3 rad (for Γ0 = 500).
Note also that a small reduced opening angle Γ0θj leads to a higher estimate for the relative efficiency of TeV emission.This higher TeV efficiency would put the afterglow of GRB 221009A closer to values seen in other TeV afterglows.
• We calculated lightcurves for several qualitatively different scenarios.The features that may appear in the lightcurves are: the jet break and transition to the phase of declining blast wave energy, both after the lightcurve peak, and the inverse jet break before the lightcurve peak.We were able to identify all these features simultaneously in the lightcurve of GRB 221009A.The best-fit model for this GRB is wind-type, and this conclusion is supported by the notion that inverse jet breaks are characteristic for wind-type solutions (with narrow jets), but not for ISM-type solutions.Another feature specific to wind-type solutions is present in GRB 221009A: the peak of its lightcurve is confined to the largest episode of energy release despite the fact that the blast wave deceleration time is significantly longer.We, therefore, conclude that the blast wave of GRB 221009A propagates into a stellar wind, and this is the first case where such a conclusion can be made directly from the analysis of observational data.
Towards the end of LHAASO observing time, our best-fit lightcurve (which is wind-type) declines with temporal index ≈ −1.55, that joins consistently with the X-ray afterglow decline changing from ≈ −1.5 (at approximately the same time) to ≈ −1.67 (at later time) (Williams et al. 2023).For the ISM-type model one needs an additional wide component in the jet to match the relatively slow decline after the jet break (O'Connor et al. 2023).
• We advocate the use of the effective energy plots (luminosity of the afterglow multiplied by time since its beginning, E eff ≡ Lt).The relationship between E eff and L resembles in a sense the relationship between νFν and Fν .Almost all the lightcurve features discussed in the previous paragraph (except the inverse jet break) are much better visible in such plots compared to the lightcurve representation.
• We find that a very good fit to the data assuming no spectral evolution at all.This is unexpected: as we show in Sect.(4.4), during the contemporaneous phase of the after-glow the prompt photons always dominate energy density in the afterglow's emitting region.The only way to avoid their vigorous interference with the SSC mechanism is strong Klein-Nishina suppression.In the case of GRB 221009A, this consideration implies that Klein-Nishina suppression should extend to at least down to 100 keV (in the observer's frame) placing an upper limit on location of the afterglow's synchrotron peak.
CONCLUSIONS
In this paper we propose a relatively simple hydrodynamical model for a relativistic blast wave with continuous energy supply at an arbitrarily varying rate.We treat the blast wave as a two-element structure.The central engine supplies energy to the inner part (shocked ejecta material) via the reverse shock.As the shocked ejecta material expands, its internal energy is transferred to the shocked external matter.We take into account the inertia of the shocked external material so that the pressure difference across this region determines the derivative of the blast wave's Lorentz factor.
This model was tested against the observed TeV lightcurve of GRB 221009A, which provided a unique opportunity to explore the contemporaneous phase of prompt and afterglow GRB emission.The excellent quantitative agreement between the model's predictions and the observational data, suggests that the model is valid.This, furthermore, lends support to the values of our best fit parameters that we find that indicated that the event was powered by a very narrow jet θj ≈ 0.07 • (500/Γ0) and propagated into a wind-like external medium.These properties may explain the huge isotropic equivalent energy observed in the prompt phase.Small excesses over the model's predictions that are correlated with the prompt suggestpresence of TeV signal from the reverse shock as well.
Figure 1 .
Figure 1.The count rate for the prompt phase of GRB 221009A, compiled from Konus-WIND measurements (Frederiks et al. 2023).Our assumption is that the prompt kinetic luminosity follows the count rate.
Figure 2 .
Figure 2. Upper panel: the lightcurve of GRB 221009A (energy flux F integrated over the energy range from 0.3 to 5 TeV).Lower panel: the effective blast wave energy, E eff = 4πD 2 L F × (T − T * ).The gray circles indicate reference points adopted to estimate quality of lightcurve fits (see Sect. 4).These points are located at 10 s, 12 s, 15 s, 20 s, 30 s, 60 s, 100 s, 200 s, and 1000 s.
Figure 3 .
Figure3.Schematic representation of the blast wave model.The shocked external material occupies the region between the contact discontinuity and the forward shock, whose width is ∆R ∼ R/Γ 2 sh ≪ R. The region between the contact discontinuity and the reverse shock is filled with shocked ejected material.Note that the model treats the region between the forward shock and the reverse shock as infinitely thin, yet it takes into account the pressure difference across ∆R.
Figure 4 .
Figure4.Wind-type lightcurves for afterglows with different t d /t GRB ratios.The prompt luminosity is scaled down by factor 0.1.In all models, the radiative efficiency is ϵr = 0.2.Models with jetted outflows are for the Gaussian jet profile with a reduced width Γ 0 θ j = 3 and an on-axis line of sight.E iso,0 is the isotropic equivalent energy on the jet's axis.
Figure 5 .
Figure 5. Same models as in Fig. (4) presented as plots of the efficient energy E eff = Lt.
Figure 7 .
Figure 7. Same models as in Fig. (6) presented as plots of the efficient energy E eff = Lt.
Figure (
Figure (8) presents separately a wind-type lightcurve that illustrates all features that we expect: the inverse jet break at an early time, then the peak, then the (regular) jet break.As expected, the jet breaks are more pronounced in the ISM-type lightcurves.The inverse jet breaks are more pronounced in the wind-type solutions.Indeed, for an inverse jet break to appear there must be a phase of blast wave acceleration, and it readily occurs in wind-like density profiles.The existence of such a phase in a constant-density surrounding is possible only if the central engine's power as a function of time satisfies some conditions.Note that very narrow jets may have their inverse jet break coincident with the lightcurve's peak.
Figure 8.Comparison of simulated wind-type bolometric lightcurves for spherical blast wave and for a jet with reduced opening angle Γ 0 θ j = 3.The thick gray polygonal chain follows the lightcurve for the jet solution and highlights its main features.
Figure 9 .
Figure9.The jet width correction factor fw, defined as the ratio of the apparent luminosity of a jet observed on-axis to the isotropic equivalent luminosity calculated on the jet's axis (Eq.34), for a Gaussian jet profile (Eq.32) and in the ultrarelativistic limit.The difference of the function fw(Γ 0 , θ j ) from its ultrarelativistic limit β0 cos θ) 4 .(34) The function fw(Γ0, θj) in the ultrarelativistic limit becomes the function of reduced jet width, fw(Γ0θj).It is plotted in Fig. (9).
Table 1 .
Parameters of the best-fit lightcurve models in a windlike density profile for several values of the reduced jet width.The asterisk denotes our choice for the reference model.The value Γ (max) sh is the maximal value of the blast wave's Lorentz factor during its evolution and X TeV = emission required to fit the observed TeV fluence.
Figure 10 .
Figure 10.Upper panel: A comparison of our reference solution (obtained with C A = 7.0) to the solutions obtained with smaller (C A = 6.0) and larger (C A = 8.0) values of this coefficient.All other parameters are the same.Lower panel: The reference lightcurve (black) compared to the best-fit lightcurves obtained for other values of reduced jet width Γ 0 θ j .For numerical values of parameters that characterize all the best-fit solutions see Table (1).The reference points are the same as in Fig. (2).
Figure 11 .
Figure 11.The normalized blast wave's Lorentz factor as a function of shock propagation delay (i.e., the arrival time for photons emitted straight along jet's axis), calculated for parameters of the reference wind-type model from Table (1).
Figure 12 .
Figure12.The fraction of escaping photons for best-fit solutions with different reduced width of the jet.In these simulations we take Γ 0 = 500.
Figure 14 .
Figure14.Residuals for the reference lightcurve model (black crosses) together with prompt emission count rate (orange line, arbitrary scale).Note that the count rate (but not the residuals in this region) in the last pulse (P4) is not to scale -it is multiplied by an additional factor of 5 for better visibility.When integrating the excess over time, we find that the excess associated with P4 is larger by a factor ≈ 3 than the excesses associated with P2 and P3 combined. | 15,045.8 | 2023-12-03T00:00:00.000 | [
"Physics"
] |
Utility of Continuous Disease Subtyping Systems for Improved Evaluation of Etiologic Heterogeneity
Simple Summary This paper presents an extended version of the Cox regression model to examine heterogeneous effects of risk factors on disease subtypes defined by a continuous biomarker. This approach can be easily applied to cancer studies and is accessible to researchers via user-friendly R scripts. Abstract Molecular pathologic diagnosis is important in clinical (oncology) practice. Integration of molecular pathology into epidemiological methods (i.e., molecular pathological epidemiology) allows for investigating the distinct etiology of disease subtypes based on biomarker analyses, thereby contributing to precision medicine and prevention. However, existing approaches for investigating etiological heterogeneity deal with categorical subtypes. We aimed to fully leverage continuous measures available in most biomarker readouts (gene/protein expression levels, signaling pathway activation, immune cell counts, microbiome/microbial abundance in tumor microenvironment, etc.). We present a cause-specific Cox proportional hazards regression model for evaluating how the exposure–disease subtype association changes across continuous subtyping biomarker levels. Utilizing two longitudinal observational prospective cohort studies, we investigated how the association of alcohol intake (a risk factor) with colorectal cancer incidence differed across the continuous values of tumor epigenetic DNA methylation at long interspersed nucleotide element-1 (LINE-1). The heterogeneous alcohol effect was modeled using different functions of the LINE-1 marker to demonstrate the method’s flexibility. This real-world proof-of-principle computational application demonstrates how the new method enables visualizing the trend of the exposure effect over continuous marker levels. The utilization of continuous biomarker data without categorization for investigating etiological heterogeneity can advance our understanding of biological and pathogenic mechanisms.
Introduction
In clinical medicine, patients who share common symptoms and disease characteristics are grouped into a certain disease entity. However, molecular pathological diagnosis is a part of routine clinical practice, especially in oncology. Pathogenic mechanisms commonly vary between patients with the same disease entity. Therefore, when appropriate, patients with the disease are subclassified into groups (disease subtypes) based on their molecular pathological diagnosis to improve clinical management and treatment outcomes. Different disease subtypes are regarded as developing through distinct pathological mechanisms, on which risk factors may exert differential influence [1][2][3][4]. Therefore, the disease-subtyping framework and associated etiological heterogeneity have been widely applied in analyses of both neoplastic and non-neoplastic diseases [5][6][7]. For example, subtype heterogeneity has been identified when investigating the specific effects of a polygenic risk score and breastfeeding for breast cancer subtypes: basal-like and ERBB2 (HGNC ID: 3430; so-called HER2)-overexpressing breast cancer [8].
Despite continuous measurement readouts of many biomarkers used for disease subtyping, such continuous biomarker measures are commonly reduced to a small number of categorical levels (sometimes only two or three) to define disease subtypes, which can simplify the statistical analysis and generate readily interpretable data. Therefore, most existing statistical methods for studying etiological heterogeneity have focused on categorical disease subtype settings [9]. However, this categorization leads to reduction of information in the biomarker data, and is prone to bias due to arbitrary selection of cutoff values. For example, a weakness of categorical subtyping is evident when the exposure effect is limited to patients corresponding to extreme ends of the biomarker measures. In such situations, the patients associated with the exposure effect will likely be submerged among other patients not associated with the exposure effect. As a result, analysis using limited disease subtype categories may fail to discover existing exposuredisease associations. To maximize the value of disease subtyping biomarker information, this article presents an analytical framework for assessing the heterogeneity of exposuredisease subtype associations using continuous biomarker measures instead of categorical subtyping [10].
For illustration, we applied the proposed method to assess how the association of alcohol intake with colorectal cancer incidence changes across DNA methylation level at long interspersed nucleotide element-1 (LINE-1), measured in tumors. We used data from two prospective cohort studies, the Nurses' Health Study (NHS) and Health Professionals Follow-up Study (HPFS).
Materials and Methods
To evaluate the association of an exposure with an incident disease in a cohort study, researchers typically use the Cox model [11], in which the hazard function is modeled as where λ 0 (t) is the baseline hazard at time t, X i is the possibly time-varying exposure for the i-th individual, the coefficient β of X, represents the exposure-outcome association, W i is a p × 1 vector of potential confounders, which may also be time-varying, for the i-th individual, and γ is a p × 1 vector of regression coefficients for W. Without further specification, we assumed that the exposure is a scalar throughout this paper for notational simplicity. Now, it is of interest to evaluate how the association of an exposure with the disease risk changes over the level of a disease marker. Extending Equation (1), we model the causespecific hazards [12] of the disease subtypes by incorporating a function of the marker's value as the coefficient of the exposure. Our model is where Z is the continuous disease marker (cause), λ z0 (t) and λ z (t) are the baseline hazard and hazard functions for disease with marker level Z, and g(φ, Z) is a given realvalued function of Z with unknown parameters φ. The association between the exposure and the disease with marker level Z can be then represented by the hazard ratio HR(Z) = exp{g(φ, Z)}. If the exposure is a q-dimensional column vector, its coefficient will also be vector-valued with the form g 1 φ (1) , Z , g 2 φ (2) , Z , . . . , g q φ (q) , Z , where g k is the function of the disease marker corresponding to the coefficient of the k-th element of the exposure, and φ (q) is a scalar or vector parameter of interest, k = 1, . . . , q.
The regression coefficients in the standard Cox model (1) are typically estimated by maximizing the partial likelihood [13]. Under the cause-specific proportional hazards model (2), we can construct the corresponding partial likelihood [14] as follows: where C is the set of all cases and T is the time to event, which in a cohort study is typically age at disease diagnosis. Statistical software for the standard Cox model does not work here, as the marker level Z in g(φ, Z) is defined only among cases. In this partial likelihood, the subjects in a risk set are assigned the marker value of the case in that risk set so that the numerator and denominator in PL correspond to the hazard defined at the same marker level. The parameters φ and γ in Model (2) can be estimated through maximizing this partial likelihood. Similar to the standard Cox model setting, the variances of the parameter estimates can be estimated based on the inverse of the Hessian matrix. We suggest using the restricted cubic spline approach [15] to characterize g(φ, Z). The restricted cubic spline approach has advantages of parsimony while allowing for great flexibility in characterizing nonlinear curves. A restricted cubic spline function g(φ, Z) with K (≥ 3) knots includes one intercept, one linear, and K − 2 nonlinear terms of Z; that is, where S j (Z) is the j-th basis function of the restricted cubic spline, evaluated at Z. See Supplementary Material Section S1 for details. If K = 2, g(φ, Z) only includes the intercept and the linear term. The unknown parameter φ contains the intercept and all the coefficients of the linear and nonlinear terms. The number of knots can be determined using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC) [16], and typically, the knots can be evenly spaced over the distribution of Z. We used the likelihood ratio test to test for zero elements of φ. All elements of φ being zero implies no exposure-outcome association. Non-zero intercept and zero coefficients of all the linear and nonlinear terms imply an exposure-disease association that is independent of the disease marker. A non-zero coefficient of the linear term along with zero coefficients of all the nonlinear terms implies that the exposure-outcome association increases or decreases linearly over the marker level.
Simulation Study
We conducted a simulation study to assess the finite sample performance of the method when K = 3. See Supplementary Material Section S2 for details. This simulation study shows that the point estimateφ of φ performs satisfactorily (Table S1 in the Supplementary Material Section S2). When the number of cases was 900, the percent bias ofφ was 4 to 8% in five out of six configurations and 11% in the last configuration. It was 0.3 to 4% in five out of six configurations and 9.7% in the last configuration when the number of cases was increased to 4500. The empirical standard error ofφ decreased by about 60% when the number of cases were increased from 900 to 4500.
Results of Illustrative Example
We used colorectal cancer (adenocarcinoma) and its subtyping biomarker, LINE-1 methylation (with continuous unitless values) [17], as a disease biomarker example to illustrate the method. We utilized data from ongoing large prospective cohort studies, namely the Nurses' Health Study (NHS) [18,19] and Health Professionals Follow-up Study (HPFS) [20,21]. The main exposure was cumulative average alcohol intake (0, ≤15, >15 g/day). Detailed descriptions of the study population, assessment of main exposure and covariates, ascertainment of colorectal cancer cases, and quantification of LINE-1 levels are described in Supplementary Material Section S3. The age-standardized characteristics of participants in the two cohorts are summarized in Table S2 (Supplementary Material).
Shown in Figure 1 and Figure S1 (Supplementary Material) are the curves of the hazard ratios (HRs) representing the association between alcohol intake and incidence of colorectal cancer subtype as a function of continuous LINE-1 methylation level. These curves were constructed by plotting exp g φ , Z over the LINE-1 marker values (Z) within the plausible range (25 to 85). The number of knots considered were K = 2, 3, 4. The knots were evenly spaced over the LINE-1 distribution. Figure 1 and Figure S1 were drawn based on the results using the combined cohort, HPFS alone, and NHS alone. We considered two models: the main model, with stratification factors only, and the full model, which adjusted for additional covariates as described in the Methods section. Since the inclusion of additional covariates in the full model had little impact on the set of estimated coefficients φ, we simply utilized the estimation results from the main model hereafter.
Results of Illustrative Example
We used colorectal cancer (adenocarcinoma) and its subtyping biomarker, LINE-1 methylation (with continuous unitless values) [17], as a disease biomarker example to illustrate the method. We utilized data from ongoing large prospective cohort studies, namely the Nurses' Health Study (NHS) [18,19] and Health Professionals Follow-up Study (HPFS) [20,21]. The main exposure was cumulative average alcohol intake (0, ≤15, >15 g/day). Detailed descriptions of the study population, assessment of main exposure and covariates, ascertainment of colorectal cancer cases, and quantification of LINE-1 levels are described in Supplementary Material Section S3. The age-standardized characteristics of participants in the two cohorts are summarized in Table S2 (Supplementary Material).
Shown in Figures 1 and S1 (Supplementary Material) are the curves of the hazard ratios (HRs) representing the association between alcohol intake and incidence of colorectal cancer subtype as a function of continuous LINE-1 methylation level. These curves were constructed by plotting { , } over the LINE-1 marker values ( ) within the plausible range (25 to 85). The number of knots considered were = 2, 3, 4. The knots were evenly spaced over the LINE-1 distribution. Figures 1 and S1 were drawn based on the results using the combined cohort, HPFS alone, and NHS alone. We considered two models: the main model, with stratification factors only, and the full model, which adjusted for additional covariates as described in the Methods section. Since the inclusion of additional covariates in the full model had little impact on the set of estimated coefficients , we simply utilized the estimation results from the main model hereafter. Table 1 and Table S3 (Supplementary Material) present p-values from testing the following null hypotheses for the same choices of knot numbers and cohort settings as in Figure 1 and Figure S1: (i) the intercept and all the coefficients in g(φ, Z) are zero (the overall test); (ii) all the coefficients in g(φ, Z) except the intercept are zero (test for heterogeneity); (iii) all the coefficients of the nonlinear terms in g(φ, Z) are zero (test for nonlinearity). For the NHS cohort and the combined cohort, the linear model (K = 2) had the smallest BIC and AIC, and for the HPFS cohort, the linear model had the smallest BIC and the model with K = 3 had the smallest AIC. For the comparison between >15 g/day intake and 0 g/day based on the models with K = 2, 3, as shown in Table 1, there were significant associations between alcohol and cancer risk in the HPFS cohort (overall test p < 0.001) and the combined cohort (overall test p < 0.001), but there was insufficient statistical evidence to establish such an association in the NHS cohort. There was insufficient statistical evidence to establish a difference in the comparison of ≤15 g/day intake versus 0 g/day in the NHS, HPFS, or the combined cohort (Table S3). Furthermore, in the comparison of >15 g/day versus 0 g/day in the combined cohort, the heterogeneity tests were statistically significant (p < 0.001) under K = 2, 3, and the alcohol effect changed with the LINE-1 level linearly (nonlinear test p = 0.54 for K = 3). 10) for the choices of knot numbers and data settings considered in Figure 1. As shown for the HPFS and combined cohorts in Figure 1 and Table 2, the alcohol-cancer association (for >15 g/day vs. 0 g/day) tended to decrease with increasing LINE-1 methylation level, as seen from the two g(φ, Z) functions with K = 2, 3 as selected by AIC and BIC.
Discussion
In this paper, we have presented a Cox proportional hazards regression model method to fully utilize a continuous biomarker measure for disease subtyping. This statistical method can examine subtype heterogeneity of diseases in the exposure-disease association with more comprehensive and versatile utilization of continuous marker measurements. The ability of this method to potentially reveal more complicated patterns in subtype heterogeneity can help us gain deeper insights into etiologies in molecular epidemiological research and provide further evidence in the development of personalized precision medicine.
Statistical methods for investigating disease subtype heterogeneity for categorical and ordinal subtypes have been studied previously under several common study designs [9]. However, a concern may be raised about defining discrete subtypes based on categorization of biomarker values when there is little or no evidence supporting biomarker cut-point values that are often arbitrarily determined. In addition, the categorization of a continuous measure of a biomarker can lead to loss of information from the biological and statistical perspectives. The proposed method is less prone to these problems and has the potential to reveal more detailed and granular subtype heterogeneity than established approaches using categorical and ordinal subtypes.
Many biological phenomena and related biomarkers (including expressions of genes and proteins) are continuous in nature [6]. LINE-1 methylation level (i.e., the percentage of the amount of C nucleotides divided by the sum of the amounts of C and T nucleotides at CpG sites), which we used in the illustrative example, is a surrogate marker for genomewide DNA methylation and widely distributed in colorectal cancer tissue from 20 to 90% [22,23]. Currently, it remains unclear how to set the best cut-points for defining subtypes based on quantitative LINE-1 methylation levels. Accordingly, the proposed method can be applied to this biomarker without using arbitrarily cut-points. Another example for continuous tissue biomarkers is immune cell infiltrates in tumor tissue. Ample evidence supports the biological importance of the immune system in cancer [24][25][26][27]. Tumors exhibit considerably heterogeneous phenotypes according to types and quantities of immune cell infiltrates in tumor tissue [28,29], and higher immune cell infiltrates in cancer have often been associated with better cancer survival [26,[30][31][32]. Related to immune cells, microbial species are often quantitatively measured in biospecimens including tumor and normal tissue in population studies [33,34]. Readouts of quantitative microbial assays are continuous in nature without prior knowledge on any biological cut-points (or threshold effect). Categorizations of such variables are often used [35][36][37][38]. However, simple categorizations may lose biological information. It is evident that standardized definitions of tumor subtypes based on immune cell infiltrates or tissue microbiota have not been developed. There is a clear need to analyze tumor biomarker data in a way that exploits the underlying continuous nature of the biomarker.
The real-world application of this method in the two large prospective cohort studies has demonstrated its capability to depict the trend of the exposure effect across continuous molecular marker levels in contrast to use of solely categorical subtypes [10]. Further, this method allows for the flexible modeling of the heterogeneous effect of exposure on the disease of interest across biomarker levels, using models ranging from linear functions, to functions of any hypothesized form, to a case-by-case understanding of the disease.
A user-friendly R program that implements this method is publicly available (https: //www.hsph.harvard.edu/molin-wang/software/, accessed on 31 March 2022). This R function fits a Cox regression model for either incidence analysis or post-diagnosis survival analysis, where the model can include one or more exposure variables, a set of confounders (optional), and one or more stratification variables (optional). Left truncation and timevarying covariates, which are common in cohort data analyses, can be handled by putting the data in counting process form [39] before applying our R function. In the counting process data structure, a new data record is created for each questionnaire cycle at which a participant was at risk, with covariates set to their values at the time the questionnaire was returned. Furthermore, in addition to AIC and BIC, the cross validation approach [40] could also be used to choose the number of knots in the restricted cubic spline approach. The proposed method can be easily applied to studies of various diseases and risk factors and is accessible to researchers with limited experience with time-to-event data analysis.
In this article, we follow the nomenclature guideline for gene products using the Human Genome Organization (HUGO) Gene Nomenclature Committee (HGNC) standards, recommended by the expert panel [41].
Conclusions
To summarize, we have presented a Cox proportional hazards regression model for analyzing heterogeneous exposure-disease associations across disease subtypes defined by continuous biomarker measures. This method is helpful in decreasing bias caused by arbitrary subtype categorization and in increasing statistical power, as well as flexibility of assumptions about the pattern of pathologic heterogeneity. The utilization of continuous marker data without categorization for investigating subtype heterogeneity will advance our understanding of etiological heterogeneity and possibly contribute to precision medicine. Figure S1: Heterogeneous Effect of Cumulative Categorical Alcohol Intake (≤15 g/day Versus <0 g/day) on Continuous Subtypes of Colorectal Cancer; the 3 × 3 plot panel illustrates the combination of three choices of the knot number in g(φ,Z) and three cohort settings. | 4,443.6 | 2022-04-01T00:00:00.000 | [
"Biology"
] |
Development of Criteria for Analysis of Point-contact Sensor Characteristics in Complex Gas Media
The electric conductivity of point-contact multistructured sensors in a complex medium of the human breath gas has been studied. Considering a large number of parameters which characterize response curves of point-contact sensors, we proved the possibility of using a statistical procedure to assess the reproducibility of sensor operation. To select sensors with similar parameters from a studied set of sensors, the method of cluster analysis was employed. As a result, we first propose a criterion for selection of uniform sensors from sample sets based on sensing arrays, each containing over 200 point-contact sensing elements. We demonstrate the effectiveness of the proposed approach for the selection of uniform sensors in experiments with breath gas exhaled by a volunteer. In this case, pairs of random elements from the formed cluster show a good reproducibility of their sensor images. The selected elements are thus proved to be uniform samples which can be used to study complex gas media, for example, in clinical practice to develop methods of noninvasive diagnosis based
Introduction
An obligatory part of the process of samples creation and their further utilization in Yanson point-contact spectroscopy is selection of suitable ones among a set of prepared point contacts using the accepted quality criteria [1]. Point contacts with reproducible parameters can be produced by using appropriate procedures. These samples are suitable for both investigations of their own properties and study of various physical effects [2]. Such an approach proved to be a very good one at low temperatures (i.e. at liquid helium temperatures). The discovery of the point-contact gas-sensitive effect [3] has raised a number of new problems which have to be solved in order to enable further progress of the method of Yanson point-contact spectroscopy. One of them is development of new criteria and procedures which could help to effectively characterize point-contact sensitive elements at room temperatures. The fact is that working at room temperatures one is unable to use the criteria [1] any time for the targeted selection of produced point-contact nanosensors. Therefore, there is a vital need for further development of this important procedure in the technology of Yanson point-contact spectroscopy and its application in the area of point-contact gas-sensitive effect at room temperatures.
Another point is that there is also an additional difficulty in sensor investigations of complex gas media. It is connected with the probability of gas mixture state varying with time due to a possible interaction of gaseous components to each other [4]. This problem is of particular importance in the area of human breath gas investigations. The breath gas is a multi-component gas mixture which contains more than 600 volatile organic compounds with a low concentration of active substances (a few ppm or ppb and lower). Many of the breath gas components are markers of certain states of a human organism. This circumstance determines the importance of breath research for medical diagnosis [5,6]. But interaction of breath components with each other can lead to variations in the gas mixture state at different time periods, i.e. to changes in the breath profile. New compounds resulted from interaction of gas mixture components are able, in turn, to become a catalyst for subsequent chemical reactions inside the gas mixture, thereby having an additional influence on the breath profile.
Another variable in the system "breath gas -sensitive element" can be a point contact itself. It can be conditioned by a set of factors such as technology imperfections at the initial stage of sensitive elements development, variations in structure parameters of various point contacts, change in purity of the material in the contact area, etc. Thus, applying the sensor technique to study a breath gas, one should be confident that the obtained variations in a breath profile or changes in the electric conductivity of a sample are due to metabolic processes in the human organism and not to some peculiarities of the transducer behavior. Taking into account the above statements, the aim of this work is to study electric conductivity of point-contact sensors in human breath gas media and statistically analyze the obtained results to find criteria for selection of sensors with reproducible characteristics.
The presented paper consists of four sections. After the introductory section, the second section presents information about the material used for point-contact sensors creation as well as data concerning the advanced method of sensitive samples preparation and the procedure of data registration. The third section describes the results of measurements of electric conductivity of point-contact multistructured sensors in the complex medium of the human breath gas, discusses the obtained data, exhibits the prerequisites to utilization of cluster analysis for evaluation of the degree of similarity of sensor operation among the prepared point-contact samples, and demonstrates the cluster analysis results followed by a selection of uniform sensors from the sample sets based on point-contact sensing arrays. At the end, several conclusions are drawn in section four.
Materials and Methods
Creation of supersensitive point-contact transducers for analysis of complex gas media is of great importance for technology development in the emergent area of nanosensor investigations [7,8]. In this view, we designed laboratory samples of point-contact nanosensors based on an organic conductor. To obtain gas-sensitive elements we used a compound of the known organic conductor 7,7,8, with N-alkylquinolinium cation (N-Alk-iso-Qn). The organic conductor [N-C 4 H 9 -iso-Qn](TCNQ) 2 was synthesized at the laboratory of the Chemistry department of V. Karazin Kharkov National University. The procedure of synthesis is described in detail in [9]. It should be noted that purity of the initial substances plays an important role in the process of synthesis and greatly influences the compound parameters. Presence of impurities can lead to irreproducibility of composition and parameters of the synthesized compound. Furthermore, in most cases organic substances degrade during their storage under laboratory conditions. Therefore they need to be purified in order to obtain compounds with predicted characteristics. Cleaning of the TCNQ initial substance was performed using the equipment designed at B.Verkin Institute for Low Temperature Physics & Engineering. The technology of the process is described in [10].
Samples were produced on a dielectric glass-cloth-base laminate substrate with an area of 5x10 mm 2 . The substrate was covered with a copper foil which served as current-feeding electrodes. During the samples preparation, a part of the foil with dimensions of 0.15x5 mm 2 was removed from the substrate to form an interelectrode gap. A point-contact sample of TCNQ compound was manufactured in this gap as a plane mesostructure. To get a gas-sensitive substance we used a saturated solution of the [N-C 4 H 9 -iso-Qn](TCNQ) 2 salt in an organic solvent with a high vapor elasticity. That provided for a fast evaporation of the solvent at normal (room) conditions. Sensitive elements were produced using an original electrochemical technology which allowed formation of the mesoscopic point-contact multistructure. The multistructure consists of stable point-contact elements which are built between edges or side surfaces of the needle crystals of TCNQ compound ( Figure 1) similarly to the Chubov displacement technique in Yanson point-contact spectroscopy [1,2]. Each of the point-contact elements is a single point-contact sensor. The created multistructure presents an original version of the point-contact sensor array which realizes the point-contact gas-sensitive effect discovered by our research group [3,11]. The point-contact sensor array can include up to 200-300 point contacts or more. Using original preparative and technological approaches, we succeeded in producing multistructure samples of active-type sensors. These samples contain an energy source which makes it possible to record variations in sensor electric conductivity upon gas action without using an external source of current. The energy source of the sensor is formed at the contact interface of copper and the organic conductor during the electrochemical synthesis of the multistructure.
The conventional concept of electrochemical processes at the interphase boundary served as the prerequisite for the creation of active-type sensors. The simplest two-electrode electrochemical circuit includes two interphase boundaries arising in the area of contact of the electrolyte with the solid phase of each electrode. The nature of charge carriers is changed at an interphase boundary when current flows in the circuit. This process is connected with the generation of Development of Criteria for Analysis of Point-contact Sensor Characteristics in Complex Gas Media products of electrochemical transformations. The latter are accompanied by a number of attendant effects connected with the inhibition of transport processes in the reaction zone as well as of phase transformations. These transformations are provided with energy of an external electric field. The electrode polarization, i.e. electrode potential bias, is a qualitative expression of the energy consumption. Under certain conditions, the products synthesized during the process of electrolysis can be fixed on the surface of the solid phase of the electrode. In this case, the nature of the electrode gets transformed. This change can be detected by comparing the stationary potentials of the electrode without current before and after electrolysis. Any difference in the values means that new exchange processes are in progress at the interphase boundary, i.e. a new electrode is being created. An electrode potential difference which is stable in time without current is indicative of the presence of an accumulated electric energy. Samples of active-type point-contact sensors designed in such a way are able to operate autonomously without an external source of current. This finding will contribute to the miniaturization of sensor devices developed on the basis of point-contact transducers.
Human breath gas was used in our research as a many-component gas mixture. As it was mentioned above, breath is a composite mixture of gases, which, moreover, have various endogen origins and are secreted in the respiratory passages, gastro-esophagus canal and the mouth cavity [12]. Recently, the problem of exhaled gas analysis has attracted much attention because concentrations of many breath components are dependent on the metabolic state. This property makes it possible to use breath gases as markers of certain states of the human organism and develop noninvasive methods for diagnosis of various diseases. This makes the study of sensor electric conductivity in the human breath media very important. The obtained results will be of high value for the development of portable diagnostic devices.
We studied and tested 49 samples of point-contact sensors based on the conductive compound [N-C 4 H 9 -iso Qn](TCNQ) 2 . Variations in their electric conductivity were measured upon the action of a composite gas mixture (human breath gas). The measuring circuit included multimeters Keithley 2000 and Keithley 2100 (USA), and a high-precision model resistors C2-29B-0.125-1M of 1 MΩ (NPO "ERKON", Russia) which were connected in series with the sample (Figure 2a). The gas action on point-contact sensors caused an increase or decrease in its conductivity and, accordingly, growth or drop of current in the sample circuit, giving rise to a change in voltage bias across the model resistor. The conductivity of the sample and its resistance were calculated from registered values of voltage drop and current flowing through the sample using a resistor with a high percentage accuracy. The procedure of data registration and processing is described in [13]. The conductivity measurements were performed indirectly because of the rather high values of electrical resistivity of the sensor samples which are almost below the sensitivity threshold of standard multimeters.
Along with the application of standard multimetes to our electrical measurements we carried out investigations of sensors' electric conductivity upon action of a complex gas mixture by means of original portable electronic device designed by our research group (Figure 2b). This device was created for amplification and measurement of the response signal of a sensor interacting with gas media. The device can be used in future as a model for development of a personal portable tool in the noninvasive diagnostics technologies. The measurement circuit of the device contains a model resistor with a high percentage accuracy which operates similarly to the model resistors shown in Figure 2a.
To register the sensor response signal U(t) (U -voltage, ttime) a multistructure specimen was put into the holder (Figure 2b) connected to the device. One of the basic parts of the holder is a replaceable expendable mouthpiece. The mouthpiece serves as a cell for interaction of the point-contact multictructure with the breath gas. The device can operate in a two-channel mode. In this case, a holder with two plugs is used. These plugs serve to connect the device and two sensors, which can be put into plugs and studied simultaneously under identical conditions. The process of response signal registration consisted of the following stages. On the request of the researcher, the volunteer put the mouthpiece with an integrated sensor into the mouth, kept it for one minute and then took it out, again, on the request of the researcher. The signal measured by the device was transmitted automatically to the computer. Registration and processing of the results were performed using original software created at B. Verkin ILTPE. The two measurement approaches yielded reproducible results.
Results and Discussion
Investigation of the point-contact sensor matrices [4] made it possible to reveal an essential distinction of their properties from parameters of conventional chemical film and nanostructured sensors. Usually, conventional chemical sensors demonstrate response curve upon gas action which consists of a single monotonous extremum. Such dependence can contain nothing more than information about the presence or absence of the substance under study and, in some cases, about its concentration. Thus, conventional sensors operate in the regime of an alarm function only. A point-contact multistructure provides a considerably larger volume of data on the studied gas mixture. Point-contact nanosensors are characterized by a complex response curve which contains spectral information on both the composition of the medium and interaction of its components [14]. This allows point-contact sensors to be used to perform an efficient analysis of the breath gas and develop noninvasive methods for diagnosis of the human organism states [8,15]. Figure 3 shows time dependences of point-contact sensor voltage drop resulted from interaction of sensors with gas mixture. The curves are registered by two sensors analyzing the breath gas of two volunteers. The left segment of a curve (within the range of 0-60 s) characterizes the time t 1 of sensor exposure to the gas exhaled by a volunteer. The right segment is formed after the sensitive element is placed in the ambient air, where the exhaled gas does not interact with the sensor, and corresponds to relaxation of the sensor to its initial state (relaxation time t 2 ). As we can see from Figure 3, the response curves have a nonmonotonic structure which reflects the individual profile of the exhaled gas mixture (see also [4,14]). Response curves of point-contact sensors are characterized by a set of parameters which can not only be used to analyze the exhaled gas, but also to provide the criteria to evaluate the reproducibility of the sensor data. The following parameters of a sensor response curve can be considered for this purpose [16]: absolute value of the exposure maximum; absolute value of the relaxation maximum; ordinate of the final segment of the exposure phase (signal value at the end of exposure); ratio of the height of the relaxation maximum above the signal value at the end of exposure to the absolute height of the exposure maximum; ratio of the height of the relaxation maximum above the signal value at the beginning of relaxation to the absolute height of the relaxation maximum; slope of the initial segment of the exposure phase; slope of the initial segment of the relaxation phase; slope of the final segment of the relaxation phase; time of exposure; time of relaxation.
It can be seen from Figure 3 that response curves may have a number of pronounced distinctions. These include, among others, different intensities of point-contact spectra of the exhaled gas profiles, a peculiar character of nonmonotonic features for each curve and different relaxation times. One of the main problems a researcher faces when developing a new method for diagnosing states of the human organism is to ensure reproducibility of properties and parameters of the newly developed sensor samples. Finding a solution to the problem will allow the researcher to be sure that the above-mentioned distinctive features of a response curve characterize the exhaled gas only and not possible variations in properties of the sensor device. In this case, the response curve parameters will accurately reflect the individual profile of the gas exhaled by a specific person and guarantee a high selectivity and specificity of diagnosis of human organism states. To select samples with similar parameters out of a set of sensors under study we employed a cluster analysis technique. In order to minimize the influence of a possible change in the exhaled gas composition on sensor parameters, we measured our response curves over a short period of time following the measuring technique used for developing methods of medical diagnosis [16]. Response of the sensors to breath gas action was registered after an overnight fast of the subjects in one-minute exposure mode. Reproducibility of results was controlled by recording of a several consequent responses to the action of breath gas.
Cluster analysis is one of the most efficient tools to perform a systematic scientific search [17]. Since the subject of investigation, a complex sensor image of the metabolic profile of a person, has a pronounced quantitative multiparameter character, the choice of this analytical tool can be regarded as fully justified. Cluster analysis is one of the mathematical methods used to structure complex data collections consisting of separate elements. Uniform parameters, which characterize each element, are compared with each other through a certain mathematical procedure. On the basis of a quantitative expression of the parameter comparison a conclusion is made on the degree of "similarity" of the elements considered. The maximum "similarity" within the studied selection allows some elements to be grouped in clusters. Inside these clusters there can be observed some regulated "similarity" of elements. At the same time, the clusters must differ from each other. Of course, clusterization is a formal procedure based on some criteria proposed by a mathematical model. A change in the qualitative interpretation of the category of "similarity" significantly affects the cluster structure of the studied selection of elements. In the extreme case of a very rough estimation of "similarity", the whole selection can be considered as one cluster, while a very fine estimation may yield no clusters at all. Apart from the degree of "similarity", the cluster structure of a selection is also determined by how uniform the quantitative indicators of that degree are for the series of elements studied. The role of the degree of uniformity can be fulfilled by an estimated value of dispersion of the quantitative indicators of "similarity". If the mathematical model considers dispersion as one of the conditions for clusterization, structures can be formed even if the elements are not similar enough.
Cluster analysis is now one of the most efficient tools to process large amounts of data; it is employed wherever computing engineering is used. In this paper, to quantitatively determine the degree of similarity of elements we mapped them into a multidimensional Euclidean space. It was assumed that every characteristic parameter of the sensor image of a metabolic profile corresponds to a certain dimension. To illustrate a cluster analysis procedure, let's consider a simple example of two sensor samples which can be characterized by two (n=2) parameters: 1 P and 2 P . For a better visualization, sample numbers i (in our case i=1, 2) are shown as superscripts for each parameter: ( ) 1 i P and ( ) 2 i P . As it follows from mathematical statistics [18], to perform our analysis we should reduce the non-uniformly scaled random quantities X, which represent our sensor parameters, to standardized ones by scaling the initial data set. To do this, we need to replace all the random quantities ( 1 P and 2 P ) with their reduced analogs U characterized by a zero mathematical expectation and a unit dispersion: M(U)=0; D(U)=1.
A reduced random quantity is defined as follows: where ( ) M X is the mathematical expectation of our random quantity Х, and ( ) X σ is its dispersion. Thus, all our cluster analysis calculations will be made for a two-dimensional Euclidean space of reduced random quantities 1 U and 2 U related to our selected parameters 1 P and 2 P through the following equations: where: At the final stage of the calculation, we only have to find the Euclidean distance between these two elements in the two-dimensional space using the well-known expression for distances in Euclidean spaces: (1) (2) 2 (1) In the general case, this approach allows one to find a generalized distance between any two elements as a square root of the sum of squared differences of their uniform parameters: where d ij is the Euclidean distance between the i-th and j-th elements, ( ) i k U and ( ) j k U are the values of the k-th random variable for the i-th and j-th elements, respectively.
In our case, the elements of the analyzed selection are sensor images of metabolic profiles (the time drift of the exhaled gas composition can be neglected).
The volume N of the array of such distances is, naturally, equal to the total number of all possible pairs of the considered elements and thus given by: On the basis of the data obtained, we constructed a minimum-length tree two-dimensional projection for the plane of the most significant characteristic parameters. The size of the resulted clusters, defined as a relative number of elements in the structure, provides a qualitative assessment of the level of the technology used. It is evident that the bigger cluster we can individualize in the considered array of elements, the more parameter-wise similar the sensor elements are and, therefore, the higher the level of technological perfection of their manufacturing is.
It should be noted that even in the case of one sensor the temporal change in state and composition of the gas exhaled by a person can result in obtaining different parameters of the response signal at different stages of the experiment. If the number of samples used is large, the influence exerted by the above factor becomes even more important. This may cause additional difficulty in selecting similar samples on the basis of all the above-mentioned parameters, that is in performing a cluster analysis in a multidimensional Euclidean space with the maximum number possible of sensing element properties taken into consideration. In this case, understated estimates are possible in the process of cluster formation and selection of samples with similar parameters as a result of the variability of experimental conditions. We tried to simplify the problem by unifying the experimental conditions in accordance with the aforementioned measuring technique [16] used to develop methods of medical diagnosis, аs well as by taking for calculation a limited number of characteristic parameters which have already been proved to be important markers of certain states of the human organism [8,14]. Figure 4 shows distribution of sensor samples in the three-dimensional space of exposure maximum, relaxation maximum and relaxation time. As we can see, the pattern of points (i.e. sensors) in this space is rather close. Taking into account the natural drift of the parameters determining the experimental conditions, we can conclude that the technological level of the process of manufacturing the point-contact sensors was rather high. At the same time, further improvement of technology requires the level of technological perfection to be evaluated quantitatively. The high demands on the quality of diagnosis methods also imply some additional criteria for selecting sensor samples with the most similar parameters. It is these requirements which are fully met by employing cluster analysis. By using three main characteristics -exposure maximum, relaxation maximum, and relaxation time -we found Euclidean distances between all pairs of the selection, which are necessary to choose the most similar sensors. The sensor samples shown as blue spheres in Figure 4 form a cluster with a limited Euclidean distance 0.27 which consists of 11 samples.
Our calculations demonstrate that the cluster analysis results correlate well with the distribution of the manufactured sensors in the space of real parameters. The ratio of the size of the largest cluster to the volume of the analyzed selection for a given limited Euclidean distance can serve as a quantitative assessment of the degree of technological perfection (the so-called criterial factor, CF, of technological perfection). It is obvious that in an extreme case a set of manufactured sensors can be a monocluster selection with a unit ratio and a zero limited Euclidean distance. Of course, such a singular localization in the space of characteristic parameters is virtually unattainable. For our object, which is a set of sensing elements, CF 0.27 = 0.225. To assess the effectiveness of the approach, we took two random samples from the cluster and measured their characteristics in the same gas medium, which allows us to make a portable device which was earlier developed by us (see above). The samples were placed simultaneously into the holders of a two-channel apparatus measuring their response signals, and electrical conductivity of the sensors was registered in one and the same medium of gas exhaled by a volunteer. The results of the measurements are presented in Figure 5. It turned out that the sensor images of the selected elements display a good convergence within the experimental error. The small differences in the intensities of the curves shown in Figure 5 can be due to variation in power of the internal source of electric energy which is formed in active-type sensing elements, as well as to possible deviations of the current flow regime for different samples from the regime which is optimal for the manifestation of the point-contact gas-sensitive effect. As we know from Yanson point-contact spectrosopy [1,2], deviations from the ballistic regime of current flow result in a decrease in the point-contact spectrum intensity. Our technology of synthesis of a point-contact multistructure based on TCNQ compounds does not exclude the possibility of some of the sensing matrix point-contacts having an imperfect crystalline structure. This may lead to a deviation from the optimal conditions for current flow and transfer of energy to atoms adsorbed on the surface of the contact upon placing it in a gas medium. A lower effectiveness of the energy aspect of the process of interaction of charge carriers in the contact with atoms of the gas medium can, in turn, lead to a lower contribution of the above processes to the intensity of the sensor response signal, leaving at the same time unchanged the number and position of the features (maxima) in point-contact spectra of the exhaled gas profile. As we can see from Fig. 5, the fine structure of point-contact spectra of the exhaled gas profile remains the same for different samples. It is of special importance that we can observe a complete reproducibility of the sensor response signals as far as relaxation time t 2 , which is important for medical diagnosis [8,14], is concerned. From the viewpoint of Yanson point-contact spectroscopy, this comes without surprise, since the sensor recovery (relaxation) time t 2 describes an integrated energy value of exhaled gas adsorption ingredients in patients and is thus an analog of the energy length of the point-contact spectrum of the electron-phonon interaction.
Similar results were obtained for other pairs of elements from the cluster. This allows us to consider the cluster isolated for the limited Euclidean distance 0.27 as a relevant set of products which can be believed to be uniform and used as sensing elements to study the composition of complex gas media, for example, in clinical practice for a noninvasive diagnosis based on analyzing breath gas exhaled by a patient.
Conclusions
In this paper, we first propose a criterion for selection of uniform sensors from sample sets based on sensing arrays, each containing over 200 point-contact sensing elements. We studied electric conductivity of point-contact sensors in a complex multicomponent medium of gas exhaled by a person. Considering a large number of parameters which characterize response curves of point-contact sensors, we proved the possibility of using a statistical procedure to assess the reproducibility of sensor operation. To select sensors with similar parameters from a studied set of sensors, the method of cluster analysis was employed. Using three main characteristics of point-contact sensor response signal which are applied to develop methods of noninvasive medical diagnosis -exposure maximum, relaxation maximum, and relaxation time -we found Euclidean distances for all pairs of the selection and isolated а cluster with the limited Euclidean distance 0.27 consisting of 11 samples. We demonstrated the effectiveness of the proposed approach for the selection of uniform sensors in experiments with breath gas exhaled by a volunteer. In this case, pairs of random elements from the formed cluster showed a good reproducibility of their sensor images. The selected elements were thus proved to be uniform samples which can be used to study complex gas media, for example, in clinical practice to develop methods of noninvasive diagnosis based on analyzing gas exhaled by a patient. | 6,791.4 | 2016-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Characterisation of Particulate Matter Emitted from Cofiring of Lignite and Agricultural Residues in a Fixed-Bed Combustor
This study is focused on the emission of fixed bed combustor batch operated. Real-time analyser ELPI (electrical low-pressure impactor) system was used to size-segregated particulate matter emission ranging from 40 nm to 10 μm. The results show that total number concentration were 3.4 × 103, 1.6 × 104, and 1.5 × 105 particles/cm3 · kgfuel, while total mass of particles were 12.2, 8.0, and 6.5 mg/Nm3 · kgfuel for combustion of lignite, rice husk and bagasse, respectively. But it can be noticed that cofiring released more particulate matter. Meanwhile it was found that the effect of ratio of over-fired air to total air supply is more pronounced, since decrease in this ratio, the amount of particles are decreased significantly. For particle size distribution, it can be observed that submicron-sized particles dominate and the most prevailing size is in the range: 50 nm <D p < 100 nm, for lignite and agricultural residues. However, during cofiring of fuel mixture at 70% rice husk mass concentration, it is found that there are two major fractions of particle size; 40 nm <D p < 70 nm and 0.2 μm <D p < 0.5 μm. The analysis of particle morphology showed that the isolate shape of submicron particle produced during lignite combustion is characterised by different geometries such as round, capsule, rod, flake-like, whereas the spherical shape is obtained with combustion of rice husk.
Introduction
Airborne particulate matter (PM) is one of the major pollutants affecting negatively the atmospheric environment, combustion system, and human health. For its impact on atmospheric environment, it is known that sub-micron-sized particle (e.g., 0.1-1 µm) whether in form of solid or droplet plays a role to decrease visibility [1]. In problematic of combustion system [2], there were reports that serious corrosion problems were found in the cooler part of the flue gas path. From SEM-EDS analysis, it indicated that the corroded tube was covered with oxide layer having rich of Fe, K, Cl, Si, and S, which these elements mostly contain in submicron particle.
The major fuel for energy production in Thailand is lignite; however, its amount is limited in a long term. As agricultural countries, Thailand produces large amount of agricultural residues such as rice husk, and bagasse [3]. In the light of these, the energy production by co-firing lignite/residues becomes a promising option. PM emission during cocombustion of coal/biomass/wastes has broadly been investigated [4][5][6][7][8][9][10], but they are mostly processed by densification and burnt as pellet or briquette. Meanwhile, in Thailand, study of emission from cocombustion of domestic lignite, biomass, and waste has been investigated [11] but they focused only on gaseous emission and combustion efficiency associated with combustion condition regardless to the measurement of particulate matter. In fact, the characteristics of particulate emitted either from combustion of Thai lignite, rice husk, and bagasse or from cofiring of Thai lignite/rice husk have not been investigated in Thailand up to now.
This study is, therefore, focused on the PM emission from lab-scale fixed bed combustor batch operated. The point of study includes total number/mass concentration of PM, and determination of particle morphology. The effect of the fuel mixture and the ratio of overfired air to total air supply (OFA/TA) on PM characteristics has also been addressed.
Fuel Preparation and Properties.
In this study, domestic lignite and two agricultural residues; rice husk and bagasse have been selected. Their physical are depicted by Figure 1. Lignite was supplied by the Electricity Utility in Thailand and was crushed and sieved to 3-5 mm in diameter range. Rice husk and bagasse were received from rice mills and sugar cane factory, respectively, and used in as-received characters, as shown in Figure 1. Since bagasse physical is inhomogeneous in size/shape (e.g., short-long line, thinner-thicker shape, or powder portion) including low density (60 kg/m 3 ) in comparison with lignite, making difficult to well mixing with lignite, so combustion of bagasse alone is present in this paper. However, rice husk seemed better to mix with lignite; hence cofiring of lignite and rice husk can be tested.
For cofiring of lignite and rice husk, they were mixed together before loading to reactor. The fuel mixtures of lignite and rice husk are 30/70 and 60/40 by mass concentration. Burning 100% of lignite, rice husk and bagasse are performed as baseline data. Fuel properties are shown in Table 1.
Experimental Rig.
The experiments were conducted in a lab-scale fixed bed combustion system which is a vertical cylindrical chamber of 120 mm internal diameter, 2680 mm height, insulating with 45 mm of refractory material, and 20 mm of rock wool. The grate is located at the bottom. Eleven thermocouples (types K chromel-alumel) were used to measure the temperature along the reactor including combustion zone and freeboard. Air supply was divided into two parts, named as overfired and underfired air. Underfired air was fed beneath the grate while overfired air was put at 840 mm height above the grate. The ratio of overfired air to total air (OFA/TA) was increased from 0 to 0.3. Figure 2 shows the schematics of experimental setup.
Particulate Sampling and Analysis.
Real-time analyser: ELPI (electrical low-pressure impactor) was used to sizesegregated PM emission ranging from 40 nm to 10 µm. Sampling probe was inserted in fixed-bed reactor located at 2680 mm above the grate. Twenty-five millimetre of diameter of Thirteen-Teflon filters without greasing was used to sampling particulate per one case. Particle sized larger than 10 µm was trapped at 13th stage of ELPI and only particle sized below 10 µm was allowed through size-segregation 1st-12th stages of ELPI. The sampling rate was fixed constantly at 10 L/min entire the sampling time. Number of particle is a function of measured current and mass concentration of particle was calculated based on 1 g/cm 3 of particle density. Total number and total mass of particle were obtained by integration of value in each stage (i.e., 1st-12th).
SEM (scanning electron microscopy) model: JSM-6301F was used to study the particle morphology. The analysis of particle morphology covered the significant mode of particulate, especially submicron particle, nucleation mode (D p ; 70-100 nm), or 2nd stage of filter, accumulation mode (D p ; 0.2-0.31 µm) or 4th stage including supermicron particle (D p ; 3-5 µm) or 10th stage.
Total Number/Mass Concentration of Particle Emitted from Combustion.
The results are shown in Table 2. The total number of particle emitted from combustion of lignite, rice husk, and bagasse are 3.4 × 10 3 , 1.6 × 10 4 , and 1.51 × 10 5 particles/cm 3 · kg fuel , respectively, while total mass of particles are 12.2, 8.0, and 6.5 mg/Nm 3 · kg fuel . These results indicate roughly that combustion of low bulk density fuel may be one of the causes to generate higher the emitted particle (compared to lignite). However, comparison with 1.8 × 10 13 , 1 × 10 13 , 1.7 × 10 13 particle/kg released from combustion in self-built burning stove of wheat straw, corn straw, and rice straw, respectively [13]. It seems that the low of density fuel (i.e., rice husk or bagasse) may not be a priority concerned with high emission of PM but the combustion technology or operating condition seems more importance.
Other interesting is that bagasse and rice husk have higher volatile yield than coal, therefore, the main combustion process is marked by devolatilisation rate of fuel and homogeneous (gas-phase) reaction dominated, which later favours particle formation via gas-to-solid pathway (e.g., condensation). According to this phenomenon, this could be observed from the reverse relationships between particle number and particle mass concentration. For instance, most prevailing size of particle of bagasse combustion is at d p 70 nm of 80% cumulative of total number of particle. Because this high content of submicron particles is less significant to contribute the overall mass loading, thus low mass concentration does. In addition, low mass of emitted particle also indicates the lower particle density of residues.
Total Number/Mass Concentration of Particle Emitted from Cofiring of Lignite and Rice Husk.
Cofiring of lignite and rice husk was performed under various mass fraction and ratio of overfired air to total air and results are shown in Table 3.
It can be seen that cofired lignite and rice husk result in increase of both particle number and mass concentration compared to burning of either lignite or rice husk. This synergy effect could be from the difference in fuel properties and physical which needed further investigation and analysis. While mass fraction concentration has affected to PM emission, an increase in lignite mass concentration leads to decrease in PM emission. However, total number/mass concentration of particle is decreased dramatically at overfired air to total air ratio of 0.1. The result of particle number of fuel mixture (8.7 × 10 3 ) is in between those of lignite and rice husk (3.4 × 10 3 and 1.6 × 10 4 ) but mass concentration is much lower. This could be said that PM emission at this condition probably is very fine particle.
Particle Size Distribution (PSD) of Combustion.
As from the results, it was found that the most prevailing particle size was in the range of 50 nm to 100 nm for combustion of lignite and bagasse. It was accounted to 60 and 80% of total particle for lignite and bagasse, respectively. Meanwhile, for rice husk combustion, it was obviously seen that there were two groups of particle size range; 50-100 nm and 0.5-1.0 µm. This could be inferred that ultrafine or fresh particle was collided and agglomerated to form fine particle.
Particle Size Distribution (PSD) of Cofiring of Lignite and
Rice Husk. The results from cofiring lignite with rice husk show the same effect as burning of either lignite or agricultural residues. The major fraction of particle size is 40-70 nm but the number of particle is higher. However, the increase in rice husk mass fraction, 40 to 70%, leads to release larger particle size. At 70% rice husk mass fraction, there are 3.5. Particle Morphology. SEM was used to investigate particle morphology. Particle shape derived from combustion of lignite, rice husk, and cofiring of lignite/rice husk are illustrated by Figures 3(a)-3(d). It can be seen from Figure 3(a) that the isolate shape of submicron particle produced during lignite combustion is characterised by different geometries such as round, capsule, rod, flake-like, whereas the spherical shape is obtained from rice husk combustion (see Figure 3(d)).
For cofiring mode, Figure 3(c) (left and middle) shows that co-firing of high mass fraction of rice husk (70%) enables to modify structure of submicron particle from "smallroundly shaped" to "large-amorphously shaped," in comparison to rice husk burning case, which finally results in increasing of the average diameter of particle.
Conclusion
Characterisation of particulate matter emitted from firing and cofiring of lignite and agricultural residues, rice husk and bagasse, has been investigated in fixed-bed combustor batch operated. Parameters concerned in this study are comprised of total number/total mass concentration and particle morphology. It can be summarised the results as follows.
(2) In cofiring of lignite and rice husk, the results show synergy effect which released particulate matter is higher than burning either lignite and rice husk. The increase in rice husk mass fraction tends to increase the amount of particle. Nevertheless, it was found that the effect of ratio of overfired air to total air supply is more pronounced, since decrease in this ratio, from 0.3 to 0.1, the amount of particles are decreased significantly.
(3) During cofiring fuel mixture at 70% of rice husk mass fraction, it is found that there are two major fraction of particle size; 40 < D p < 70 nm to 0.2 < D p < 0.5 µm. This indicates the possibility of agglomeration of ultrafine particle increasing the average diameter of particle.
(4) The analysis of particle morphology shows that the isolate shape of submicron particle produced during lignite combustion is characterised by different geometries such as round, capsule, rod, flake-like, whereas the spherical shape is obtained with rice husk combustion. | 2,825.2 | 2012-05-02T00:00:00.000 | [
"Engineering"
] |
“Pure-Polyhex” π -Networks: Topo-Combinatorics
: Structural possibilities are considered for what arguably is the most general class of connected “pure-polyhex” π -networks (of carbon atoms). These are viewed as hexagonal-network coverings ( i.e. , a tiling by hexagons) of a connected locally Euclidean surface S possibly with holes which can be simple cycles of sizes other than 6 . The surface S can curve around to connect to itself in different ways, e.g ., with handles of different sorts. This then includes ordinary benzenoids, coronoids, carbon nanotubes, bucky-tori, carbon nano-cones, carbon nano-belts, certain fullerenes & fulleroids, various benzenoid polymers, a great diversity of defected (disclinational or dislocational) graphene flakes, and many other novel pure- polyhexes. A topological classification is made, and several combinatorial conditions on chemical sub -structure counts are identified. These counts include that of “combinatorial curvature”, such as is related to curvature stresses, as also relate to the Gaussian curvatures of the embedding surface.
INTRODUCTION
ONJUGATED π-networks have been much studied, since from before the time of Kekulé, with recent intense excitement upon the discovery of fullerenes, of carbon nano-tubes, and of grapheneic structures. For the purpose of constructing nano-devices much interest has developed in additional novel structures: decorated nanotubes, branched nano-tubes, carbon nano-belts, buckycones, negatively curved structures, and more. Typically these species may be viewed as conjugated π-networks based upon a polyhex-tiled surface with various defect rings of other sizes punched in the surface. The classical benzenoids can be considered as polyhex species covering a surface topologically equivalent to a disk. Coronoids are pure-polyhex species tiling a surface topologically equivalent to a punctured disk. The carbon nano-tubes (or buckytubes) are purely polyhex in the bulk of the surface which appears like a long cylinder, while something also occurs at the ends, such as an opening at each end (to make the surface overall topologically equivalent to an open-ended cylinder, or equivalently again to a punctured disk). If both ends are suitably closed off, a fullerene again can be obtained, though there are other possibilities with smaller sized rings, especially if the smaller sized rings abutt to one another -in which case a "fulleroid" may result. But there are many more possibilities for polyhex-covered surfacesboth finite & infinite.
The implications on substructural counts (say of atoms, edges, rings, & boundary features) have already been studied for several special cases. Gutman [1] & others [2][3][4][5] have emphasized such combinatoric aspects for the particular case of a pure polyhex with a surface topologically equivalent to a disk, with Dias exhibiting [2,3] results in terms of a "formula periodic table of benzenoids". And further there have been comparable studies of what occurs when one [6,7] or a few [8][9][10][11][12][13][14][15][16][17] other-size rings are allowed, all focused on the circumstance of a planar network with otherwise a single outer boundary. The case of coronoids (as benzenoids with a hole) also has been similarly studied, as well as multi-coronoids (albeit to a lesser extent) -reviewed in Ref. [18] The case of fullerenes is also extensively studied, [19][20][21] with the count of 12 pentagons being very well known. Harris [22] and Sadoc & Mosseri [23,24] have generally discussed polygon-tiled surfaces topologically equivalent to a sphere, with attention to the relation to Gaussian curvature -with "curvature" effects in fullerenes have also been relevant. [25,26] The circumstance of covering a torus has been considered quite separately, either purely [27][28][29] with hexagons, or with a few other-sized rings [30,31] mixed in to (partially) relieve stress. There are many works concerning Möbius arrangements involving a Möbius strip defined to contain all the π-orbital axes (instead of our strips containing all the conjugated-carbon σ-network)this alternative starts with Heilbronner [32] ). As to infinite structures, the hexagon-tiled nanotubes have been (comprehensively) addressed, [33] and then there is graphene & substructures cut from it, as well as its vacancy defects. [34] Here we pursue a general comprehensive investigation of possible surfaces and commensurate polyhex π-network structures thereon. That is, rather arbitrary topologies are allowed for the connected surface to be exactly covered by hexagons in a suitable fashion: every edge of the network is in either 1 or 2 hexagons, and every site is of degree 3 or 2. Upon covering by hexagons, the surface is said to be hexagonally tiled. Some modest degree of standard topology is used -e.g., as in [35][36][37][38] use is made of homeomorphism which is a mapping of one geometrical set to another through continuous neighborhood-preserving transformation. Thus benzenoids are viewed as: polyhexes on a (topological) "disk"; coronoids (& multi-coronoids), on punctured (& multi-punctured) "disks"; "poly-q-polyhexes" [6][7][8][9][10][11][12][13] (with nonadjacent q-gons) also as punctured "disks" (with punctures corresponding to nonhexagonal rings); tori, with or without punctures; polyhex bracelets (which are not coronoids or multi-coronoids, including both untwisted & Möbius possibilities); many extended (i.e. infinite) benzenoid or coronoid polymers; bucky-tubes; fullerenes (with isolated pentagons); highgenus negatively curved graphene surfaces [39,40] (with punctures for non-hexagonal rings); disclinationally or dislocationally defected [41] graphenes; etc. A collection of chemically oriented contributions (without restriction to conjugated-carbon networks) is found in Sauvage et al's monograph. [42] Here a major aim is to identify a collection of theorems and combinatorial results for different substructure counts -often very well-known for classical benzenoids, but here seeking wider limits and modifications to their applicability. Ultimately the topocombinatorics relates to geometric curvatures, complicit stresses, & realizable embeddings into ordinary space.
FUNDAMENTALS & INITIAL STRUCTURES
Here a π-network is viewed as a graph G, with any H atoms bonded to any of the carbon atoms deleted from this otherwise carbon network. The pairs of σ-bonded π-centers in the π-network are identified to edges of G. The restriction to trigonal (sp 2 ) π-centers limits the degree of each site to no more than 3. The whole network G is imagined to be "suitably" embedded in a connected smooth surface S, which in turn is embedded in 3-dimensional Euclidean space 3 E . Any boundaries of S correspond to edges (i.e., bonds) of G, and "rings" of G bound (nearplanar) disk-like regions (faces) of S. The surface S is to be locally Euclidean in that its points have open sets (of S) homeomorphic either to an open disk or to a boundary point as for a point of a disk. The graph G is to be embedded in S and to consist entirely of hexagons exactly covering S, such that every edge of G occurs in exactly 1 or 2 hexagons. That the associated surface S is to be a conjugated whole precludes three surfaces meeting at a seam (because conjugation entails at least approximately a mutual alignment of π-orbitals and this in turn implicates a smoothly varying normal to S, as would not occur at a seam). Moreover, the hexagons are typically imagined to be of comparable sizes, and the surface not to fluctuate notably below or even at the scale of the size of these faces. That is, radii of linear curvature of the surfaces are to be a somewhat greater than the bond lengths, and areas of faces on S are to be not too different from that for a regular polygon with edges of lengths similar to that of the faces. A boundary of the network (and of S) is identified such that every degree-2 vertex of G appears on the boundary, along with the edges (bonds) incident to these sites. Generally some vertices of degree 3 may be identified to the boundary -when there are 2 boundary edges incident thereto. See, e.g., the "coronoid-like" structure of Figure 1. That is, the surface is tiled by hexagons with no more than 3 hexagons meeting at any vertex, such that it takes exactly 3 to completely surround a vertex, whence such a graphical structure is termed a pure polyhex. These structures do not account for all conjugated π-networks -but it does recognize the propensity for hexagonal rings so as to encompass a large variety of possibilities, many of which are realized, and many more as yet unrealized. Besides benzenoids, it allows bucky-cones, bucky-tori, diverse defected grapheneic flakes, and more. It allows isolated-pentagon-rule fullerenes as 12-fold punctured spheresand it allows "fulleroids" with isolated rings of other sizes. This is a considerably more general than the usual definition of a polyhex π-network (of sp 2 carbons), so that questions naturally arise as to how different earlier results for more limited structures might generalize.
Some initial considerations are appropriate for a finite benzenoid B (= G), which is taken to be a region of the Euclidean plane homeomorphic to a disk and fully tiled by hexagonal faces (or rings). Often the hexagonal rings are required to be geometrically regular, in which case the benzenoid is a part of the honeycomb lattice net. A coronoid C (= G) is viewed as a benzenoid with a hole cut out of the center, such that the remaining region of S is fully tiled by hexagons, which again are often required to be geometrically regular. See Figure 1. If two or more holes are cut from a benzenoid then a multi-coronoid results, still with every edge in a hexagonal face.
For a single (benzenoid) boundary δ, procession (say) clockwise around δ (with the bulk of B on the right), gives a number of right turns & a number of left turns, with the excess of right turns over left turns being +6 to accomplish a full cycling of 6 ( / 3) 2 π π × = radians. Also note that each right turn corresponds to a degree-2 C atom, with an attached H atom, whereas each left turn (if any) corresponds to a degree-3 C atom, with no H atom. See Figure 1.
It might seem that the preceding argument depends on the regularity of the hexagonal faces, but its essence does not. To see this, consider the "helicenic" example B of Figure 2, and imagine B to be the result of 2 (potentially) regular-faced benzenoids B1 & B2 fused together at their bold-faced edges (as indicated in Figure 2). The clockwiseoriented circuits C1 & C2 around B1 & B2 each entail a total turn through an excess of 6 right 60° -turns over left turns. Furthermore the clockwise boundary circuit of B can be viewed as a "sum" of C1 & C2, where the common (oppositely oriented) edge is canceled, and the 4 right (60°) turns (of C1 & C2 at this edge) are traded for 2 left turns. Thus the total excess of right turns over left turns around C remains 6. Of course, even for the deformed B in Figure 2 there is a net turn of o 360 2 rad π = as one traverses the circuit (even though the hexagons are deformed). Note each right turn entails a degree-2 site, while each left turn entails a degree-3 site, allows a boundary correspondence amongst: net turning angle; excesses of degree-2 over degree-3 sites; and numbers of H atoms versus total boundary length. Such structures are here termed generalized benzenoids. The differences for the numbers of left & right turns still turns out to be 6 (even if the angles are not precisely 60° turns as in the first part of Figure 2.
For a coronoid, the turns around the inside boundary δin still satisfy a condition: procession with the near part of S on the right now means a counter-clockwise circuit with the number of left turns exceeding the number of right turns by +6. Again the right turns correspond to vertices of degree 2 while the left turns correspond to vertices of degree 3. Again see Figure 1.
The noted results extend fairly straight-forwardly to a multi-coronoid M (= G), which may be viewed as obtained from a benzenoid by deleting several regions to leave several holes, each made by deletion of 2 ≥ hexagons, while still requiring the remnant region to be fully tiled by hexagons. If the hexagons are geometrically regular, similar results apply. These results are conveniently made independent of the direction of cycling about a boundary circuit δ, if they are expressed in terms of the numbers This result (for benzenoids) is known -as in, [2][3][4]18] where in fact there are some further enumerative relations, which are here addressed later (in a generalized form). The statement of this theorem does not mention the relation to angular turns, but this lack mirrors these earlier references, and here we return to the turn-angle aspect in a more general context. This theorem provides a first example of the sort of combinatorial information here considered -and it also provides a fundamental reference circumstance for the more general structures G contemplated.
Let us note something concerning embeddings, in 2 . E The benzenoid B (coronene) in the first part of Figure 3 may be embedded in a second fashion as in the second part of this figure. We use a convention that given any hexagonal ring, the part of the empty part of this hex region is to be identified as part of our embedding surface, with the area of this hexagon surface finite. Thus in the last part of Figure 3, the region containing the dashed line is to be part of the "outer" surface hexagon, but should be finite -so that the drawing of the first part of Figure 3 is correct. (If instead we imagine the second drawing to occur on the surface of a (big) sphere, then this outside part of the surface is finite, and the result is the same as dealing with the standard coronene drawing in the first part of our figure.
FUNDAMENTAL "EULEROLOGY"
For our graph embeddings, a surface S is topologically characterized in terms of its Euler (or Euler-Poincaré) characteristic ( ) χ S . Indeed a quite general compact set [43,44] S in n E manifests such a characteristic ( ) χ S invariant under homeomorphic transformations of S. This general Euler-Poincaré characteristic has the value 0 for the empty set, the value 1 for a point, and satisfies Indeed this allows the (easy) determinations for rather general (compact & topologically closed) geometric sets R & Te.g., as emphasized by Hadwiger [42,43] and others. [26,45,46] Indeed one has: share only a single point 1 2 p L L = ∩ which is an end-point of each, then is also a line segment (possibly not straight) homeomorphically equivalent to L1 & L2, one has These results are well-known (in the general mathematical literature), with more examples in Ref. [26], but it seems of value to illustrate the use of the fundamental definition of (1), as proves of further use in the following. A surface 0 S may be "decorated" in various ways. A boundary-p-handle adds to an 0 S with boundary by joining a "disk" D such that D shares exactly 1 p + disjoint boundary line-segments with a like number of disjoint boundary line segment on 0 S , as indicated in Figure 4. In general there are (topologically) different ways to make the gluing at the boundary or boundaries (as in Figure 4) which can change the "orientability" of the resultant surface, it being said that a surface is orientable if a (miniscule) walker on one side of the surface cannot get to the other side at the same point without crossing a boundary.
A tube-handle is added to S by taking an open-ended cylinder with both ends attached at the boundaries of 2 holes punched into S, as illustrated in Figure 5. This can be viewed as punching two holes in the parent surface and gluing the open-ends of the cylinder to the two holes.
Theorem 3 -Let dec
S be a "decorated" locally Euclidean surface which is obtained by adding to finite 0 S numbers bdry--handle p n of boundary-p-handles, tube-h n of tube-handles, & holes n of holes. Then Proof: This proceeds using the basic definition, after the fashion of the proof of the preceding theorem. Given a surface 0 S a first consideration is to punch a hole in 0 S , thereby diminishing its χ -value by 1, as is seen upon considering the defining relation (1) along with with the two sharing just the circular boundary of T. That is, with each hole added χ is reduced by 1.
Next to add a p-handle, one considers 0 S and a "disk" D to share 1 p + disjoint line segment boundaries.
For general 1 p ≥ , this argument is but slightly modified, with 0 S D ∩ consisting of 1 p + disjoint line segments, for which one also readily sees that 0 ( ) 1 χ S D p ∩ = + , and addition of a p-handle is seen to diminish the χ -value by p.
Next, for a tube-handle one considers appending an open-ended cylinder to a surface 0 S , by first punching 2 holes in it and then taking an open-ended cylinder each of whose circular boundaries is to be glued to a corresponding boundary around a punched hole. The union of this doublepunched surface (with χ diminished by 2 from the unpunched surface S) with the open-ended cylinder adds one handle, and again using the defining relation (1) one finds the result with the added tube-handle has the same χ -value as the double-punched surface. That is, with the addition of each tube-handle, the χ -value diminishes by 2. S . The second case is of a Möbiustwisted 1-handle, and the last case has = 2 p . Figure 5. A doubly-punctured surfacial structure 0 S and then a tube-handle added in two different ways.
When S is a surface without boundary (i.e., is a surface closed in a geometric sense, which is to say a surface with no holes & no tube handles), this result is quite widely known, and then the number of tube-handles is termed the "genus" of S.
The Euler-Poincaré characteristic also relates to counts of further chemical substructures involving cellular embeddings, by which we mean a tiling covering S such that each tile is homeomorphic to a disk. Proof: This can be established by induction, with the addition of faces 1 by 1. To initiate the induction, note that the relation is obviously fulfilled when there is a single face around a single "disk". This gives ( ) 1 χ D = is a polygon (with 3 e v = ≥ ). Now if a face is added to 0 G covering 0 S to give G covering S, we imagine that the face (a "disk") D shares some of its boundary with 0 S . At this point the situation is viewed in terms of different cases depending on what D shares with 0 S : Case 1 -All D's boundary is shared with 0 S . Then a hole is being filled, to increase the χ -value by 1, while G0 & G are the same though we count 1 more face for its embedding, and granted the result for 0 G , it follows for G. Case 2 -Not all of the boundary of D is shared with 0 S . Then this shared boundary consists of a number, say p, of disjoint "line segments", and the addition of D to 0 S corresponds to the addition of a p-handle to 0 S and thereby obtain S, whence − . But at the same time to obtain G from 0 G , there are added alternating edges & sites between pairs of shared chains. With 1 p + shared chains between G & G0, there evidently are added also 1 p + unshared chain sequences of edges joining pairs of shared chains. See an example in Figure 6. Any one of these unshared chain sequences contains 1 more edge than vertex, so that the difference n e − diminishes by 1 p + in going from 0 S to S. But also f increases by 1 in going from 0 S to S. Thus n e f − + decreases by p. And the induction is completed. ⯀ Alternative proof: Let for a pure polyhex, The formula here is very well-known when S has no holes or p-handles (i.e., S is without boundary) -especially in the circumstance when S is homeomorphic to a sphere. Note that here it was not even needed that G be purepolyhex. If G is pure polyhex, and indeed is a benzenoid Figure 6. Example additions to form p-handles. The additions are imagined to be made by fusion at the boldface bonds to an otherwise connected parent surface 0 S . The first two cases here are for = 1 p -handles, and the third case is for a = 2 p -handle. In the first case when a 1-handle is being formed, the two 0 S -shared chains each contain a single edge, but the 2 unshared edge sequences shown in light-face involve internal vertices & not the end vertices: each sequence has 2 unshared edges & 1 unshared (internal) vertex for each sequence.
(with ( ) 1 χ S = ) such that the area of each regular hexagonal face 1 = , then the area occupied by G is 1, f e n = = − + as may be seen as a special case of Pick's theorem. [47,48] But this whole section is more general than the pure polyhex case -and is useful in the following. 3 ( ) 2 ( ) ( ) n G n δG n δG + + , which in turn is just
Often a pure polyhex structure associates to a surface with boundaries. But not always: Theorem 6 -Let G be a pure polyhex on a finite locally Euclidean surface S which has no boundary. If G is finite, then S is either a torus or a Klein bottle.
Proof: This has already been pointed out, [28] say by a relatively straight-forwardly approach, first utilizing theorems 4 and 5 to see that the surface S has ( ) 0 χ S = , and then looking at the mathematical literature to see just which boundaryless finite-area surfaces have ( ) 0.
The case of a Klein bottle, indicated in Figure 7 (and named after Felix Klein [49] ), necessarily entails self-intersections when embedded in 3-dimensional space, so that the associated polyhex structure is not chemically plausible. However, it may be pointed out that one could introduce a single large hole through which the tube is being pushed between the 4 th & 5 th panels of Figure 7. Also if one were to use a Klein bottle with suitably placed "interlaced" holes, then self-intersections can be avoided, as indicated in Figure 8. It should be pointed out that an actual chemical bond does not fit well through the center of a small (say hexagonal) ring, so that these holes in Figure 7 are to have boundaries of length at least 10 edges, and perhaps desirably somewhat more. The projective plane which again is nonorientable, may be obtained by adding a single cross-cap to a sphere as in Figure 9. Again the self-intersection can be avoided with the addition of suitably located holes, to make a structure which is in principal chemically realizable. Indeed a crosscap implies non. But also it turns out that the sphere with cross-cap must have holes if it is to be tiled by hexagons. True cross-caps (without modification by adding such holes) are not the focus here. Indeed now we go on to surfaces with at least 1 boundary. The different boundaries for finite-area S & G are crucial to the structures and may be characterized. For a connected component δ of the boundary, note that δ has no end, and so must be either infinite or a cycle from G. Now for our pure polyhex species every edge of δ occurs in a hexagonal ring of G, and the set of these hexagonal rings of G which share edges with δ form a subgraph δ G . Often δ G is a simple ordinary "bracelet" of fused hexagonal rings, but it need not be so -e.g., δ G could be a Möbius strip, possibly with a width of 1 hexagon, and possibly 2 ≥ hexagons width, or possibly of a width varying numbers of hexagons, or possibly δ G can be a Möbius strip with 1 or more holes cut in it. In any event δ G forms something which is broadly construable as a general bracelet, possibly with some number ( ) t δ of twists, associated with the manner in which its portion of surface δ S is embedded in 3 E . See, e.g., Figure 10. Formally given a strip δ S in 3 E , first imagine a short (compared to non-neighbor hexagon separations) normal to the strip, tracing out a curve for the head of the normal as its tail circumscribes around the strip, and take ( ) t δ to be the minimum number of uncrossings of the tail and head curves needed to disentangle the two curves. That is, an ordinary bracelet has ( ) 0 t δ = , a Möbius strip has ( ) 1 t δ = , and higher numbers of twists are conceivable. If δ G is of a single hexagon width, then whether ( ) t δ is odd or even is discernable, as this corresponds to whether there is 1 or 2 boundaries (comprised from the non-shared edges) of δ G . Of course, if δ G has an odd-parity ( ) t δ , then the strip δ S is nonorientable (in having only one side), whence also S is nonorientable. Presumably the greater the twist ( ) t δ , the greater the stress in δ G .
Lemma 7 -Let δ be a finite-length boundary of δ S & S coverable by a pure polyhex graph G. Then δ is a cycle. If δ G is untwisted (i.e., ( ) 0 t δ = ) , then δ S can be embedded in the plane all on one side of δ. If further G is finite, δ G is either a benzenoid or has at least 2 boundaries.
Proof: That δ G or the portion δ S of S which is exactly covered by δ G is untwisted means that δ S can be homeomorphically embedded in the Euclidean plane 2 E where the homeomorphism extends to the ambient space 3 E . A path center δ from hexagon center to adjacent hexagon center also is topologically circular and does not cross δ, since δ being a closed curve in the plane, the Jordan curve theorem applies to say that δ divides the plane into 2 regions, an inside one and an outside one, with consequently center δ on one side. The strip δ S then has a second boundary outside of center δ . ⯀ A further boundary characteristic concerns the mode of contact of δ to a ring in δ G , by which we mean a ring may make contact with 1, 2, or 3 successive strings of edges of δ, as illustrated in Figure 11. These different modes all occur with different benzenoids, but they can also occur on "internal" boundaries. In many cases δ G is a bracelet with a width of a single hexagon, but not every hexagon of δ G is incident to both the inner & outer boundaries, as indicated in Figure 12.
Beyond twisting there is also the possibility of knotting, by which we mean that we have a structure with some sort of handle such that it is embedded in Euclidean space 3 E in such a way that it cannot be disentangled by a homeomorphism which also so maps the embedding space 3 E as well. Neither this knotting nor the related topic of "linking" (involving graphically disconnected polyhex structures) are considered here. A subclass of such polyhexes of this sort is mentioned in Ref. [50] & more fully in Ref. [51] but is not pursued here.
The edges might be subcategorized, and counted. That is, let 22 2 ( ) ( ) e δ e δ + counts half edges attached to a degree-2 site, as also does 2 2 ( ) n δ . For the second equality, one counts δ's half edges attached to a degree-3 site -noting that though there are 3 half edges attached to a degree-3 site as counted by 3 ( ) n δ , only 2 of them are for each such site actually in δ. For the first (lower-bound) inequality of the third line, each edge of type (3,3) must have degree-3 sites on its ends each of which we can associate half of (as boundary sites) to that edge. For the second inequality of this third line (i.e., for the upper-bound to 33 2 ( ) e δ ), one notes that no degree-3 site on the boundary δ may be associated to more than 2 edges of type (3,3) on δ. The lower-bound inequalities of the fourth & fifth lines are trivial. For the second inequality of the fourth line (i.e., the upper bound for 23 2 ( ) e δ ), one notes that each edge of type (2,3) must be associated with one degree-2 & degree-3 site on its 2 ends, so that there must be at least as many sites of each of these degrees on δG as there are type (2,3) edges. For the 22 2 ( ) e δ upper bound in the fifth line, note that no degree-2 vertex may be associated to more than 2 incident edges of type (2,2). In the sixth (& last) line, for the lower bound (to 33 2 ( ) e G ), note that each interior site is of degree 3, so is attached to 3 half edges of type (3,3) occurring in the interior of G, and each degree-3 site in any boundary must be attached to 1 half edge also in the interior. Finally for the last inequality (the upper bound to 33 ( ) e G ), note that again each interior (degree-3) site is attached to 3 half edges, while each degree-3 site of G ∂ can also be incident to no more than 3 such edges. ⯀ This lemma may be used in conjunction with Milan Randić's [52] (renowned [53] ) connectivity index . That is, upon noting that for pure polyhex graphs , we have bounds: Corollary 9 -Let G be a finite-area pure polyhex graph. Then The circumstance for benzenoids has already been considered. [54]
CURVATURE CHARACTERIZATION OF PURE POLYHEXES
Here the untwisted case is now the focus. As seen for benzenoids & multi-coronoids the difference between the numbers 2 ( ) n δ & 3 ( ) n δ of degree-2 & degree-3 vertices in a boundary component δ was important, and so it remains. To understand the significance, consider simple holes representing a ring of a size other than 6. In this case there are no sites of degree-2, and the result that 2 3 ( ) ( ) 6 n δ n δ − = ± of theorem 1 clearly does not hold for this δ. For this case, continued circumscription of ever larger bracelets of hexagons around the initial δ G leads to a cone, [55] with the net Gaussian curvature in the region of the apex of the cone being {6 ( )} / 3 n δ π − -at least if the hexagonal rings in the surface of this cone are to remain not overly distorted. See, Figure 13. But this Gaussian curvature in the apex region is reflected [41,56] in circumscribing strips (for a more general untwisted δ G ) even well away from the apex. As a result it is natural to define as a combinatorial curvature associated to the boundary δ (or to the empty region adjoining δ). Note that for a boundary-δ hole in a coronoid (or multi-coronoid), one finds (via theorem 1) that ( ) 0 G κ δ = , which is to say that there is no "inherent" distortion by the sp 2 -network skeleton to pull away from the flat case, with net integrated Gaussian curvature for a disk extended into the hole-region being 0 = . Another way to say this, is to identify ( ) G κ δ to be the net combinatorial curvature of a planar graph fit into the hole region. Such a combinatorial curvature of a "filledin" conjugated network is a fundamental graphic quantity, which is believed [25,26] to rather generally closely associate to (geometric) Gaussian curvatures. Moreover, exactly this definition has been utilized [57] to characterize untwisted "cyclo-polyphenacenes" (such as our untwisted δ G is), and indeed G κ for such a strip has even been found to correlate with stresses from ab-initio quantum-chemical computations. [58] That the combinatorial curvature for the outer boundary δ of a benzenoid (or regular multi-coronoid) G turns out to be ( ) {6 6} / 3 4 G κ δ π π = + = is just saying that if such a (near planar) benzenoid were embedded on a sphere, it would preferably be very large sphere so that the hole region comprising the bulk of the sphere would manifest a net Gaussian curvature approaching that of the whole sphere (namely 4π ). The difference in 2 ( ) ( ) This is of special significance if one understands the association between combinatorial curvature & Gaussian curvature for the realized molecular geometry. A formula for the curvature rot,G ( ) κ δ can be given in rotations, if for the boundary graph δ G (which we recall is a strip of hexagonal rings) we define for each ring α a turn number ( ) t α = -1, 0, 1, 2, or 3, as in Figure 14. Here it should be noted that we do not necessarily concern ourselves with details of the embedding of S in 3 E . That is, even if a polyhex is embedded on a doubly twisted bracelet as in Figure 10, this so embedded S cannot be extended (retaining its embedding in 3 E ) to something which is homeomorphic to a sphere in 3 E , it still can be homeomorphically mapped into the 0-twist bracelet, which in turn is clearly homeomorphically embeddable on a sphere, so that the theorem still applies. This again provides a rationale as to why for multi-coronoids (as in theorem 1), the outer boundary gives , while the inner boundaries give it 0 = . Another way to characterize a hole uses a boundary circuit δ. Imagine that the hole is filled in by a disk hole D which shares the boundary δ, and further imagine that a graph hole G is embedded on the disk such that it includes the boundary sites & edges of G, such that a new edge attaches to each one of the degree-2 sites (properly locating H atoms) on δ possibly with new degree-3 sites internal to hole D so that hole G divides hole D into cells. Then this hole G determines the combinatorial curvature of the hole: Theorem 13: Let G be a pure polyhex species, with a finite untwisted circuit boundary δG , with a hole graph hole G .
Possible Types of Surfaces
The possible types of finite-area surfaces without boundary are already given in theorem 8, so let us now proceed on to the case that the surface S has a single boundary. Conjecture 14 -Let G be a finite pure polyhex on a surface S which has a single boundary δ. Then S is topologically a disk or a Möbius strip or generally an initially boundaryless surface with a hole. A hole would have 2 3 ( ) ( ) 6 ( ) n δ n δ χ S − = and integer curvature rot,G ( ) ( ). κ δ χ S = Proof Attempt: Note that it has already been established that topologically (i.e., up to homomorphism) there are only a select collection of finite-area surfaces with a single boundary, namely either a Möbius strip or a punctured finite-area surface or a disk. So let's proceed step by step, with a few cases.
Let us begin with the Möbius strip can be tiled with hexagons -an ordinary cyclic belt of hexagons may be cut (across the belt), twisted once, and then reconnected. Indeed there are very many ways to tile a Möbius strip with hexagons.
Next, we imagine that we start with a surface 0 S having no boundary, and then introduce a hole into it to obtain S.
G G n δ n δ − = corresponds to 0 S being a sphere and S being a disk, which is to say that G is (topologically) an S with 0 ( ) 1 χ S = , which means that it is a sphere with a cross-cap and S is a disk with a cross-cap. That such an S might support a hexagonal tiling is suggested in Figure 15, for a graph with 18 vertices, 18 + 6 edges, & 3+2 hexrings. This gives an Euler characteristic 18 24 5 1 χ = − + = , so that we might imagine this corresponds to our crosscapped case. Such constructions can start from larger annulenes, say of 30 sites.
Proceeding for the case where 2, S have 0 ( ) 1 χ S ≤ − , and these do not have hexagonal tilings. But puncturing such surfaces to give S can result in something that is hexagonally tilable. In Figure 16, we illustrate such a construction: first, starting from g tori, each of which is tilable by hexagons ( Figure 16); second, puncturing each by removal of a connected set of hexagons; and third, joining them by additional hexagons in such a way that a single boundary surface S results, which if the hole is filled in by surface (without worrying about tiling) yields a genusg surface 0 S with ( ) 2 2 The illustration in Figure 16 is just for 1,2,3 g = , but it is clearly extendable -and though the basic tori illustrated are all the same, they need not be. Indeed, any one of the tori can be replaced by a Klein bottle -just put the hole in the half of a Klein bottle which looks like half of a torus. Indeed, one may use a sphere with a cross cap, even though such a surface is not hexagonally tilable -the surface obtained in the second stage, where a hole is punched in the sphere (to give a disk with a cross-cap) is tilable by hexagons. Thus one finds the requisite examples of surfaces S with arbitrary negative (integer) ( ) χ S . ⯀ Chemo-physical studies of cases with a single boundary other than the classical benzenoids are somewhat limited, though there are examples. For instance, Möbius twisted polyacenes also have been studied, [59][60][61][62][63] as well as the more general case of a Möbius-twisted poly-phenacenes, still of 1-hexagon width. Indeed wider strips have also been studied. [64,65] The case of a single-boundary structures with ( ) 0 χ S < , seem not to have been contemplated comprehensively before. Our case of Figure 15 with 18 sites & ( ) 1 χ S = − surely seems to have excessive steric strain. But there are more plausible looking cases with the same topology (but more sites) -as in Figure 17, for a topologically equivalent structure with 42 sites. A more geometric view takes 2 hexagons a bit apart normal to a common axis, with 3 anthracene chains, each of which with ends fused to alternating sides of the 2 axial hexagons. Yet also, these anthracene interconnections are each Möbius twisted -the result has just a single boundary, reminiscent of an ordinary Möbius band. It looks a bit like an American football, excepting the 3 Möbius twists. The structure also looks similar to the "catacondensed chemical hexagonal complexes" of Anstöter et al. [66] Pure polyhex surfaces corresponding to ( ) 2 χ S ≤ − do not correspond to any hexagonally tilable closed surface 0 S obtained by filling in the hole with additional surfacebecause of conjecture 14, but also as seen in the present conditions on 2, , there are no suitable hexagonal tilings for the added surface filling in the hole (as this difference takes a different value than is required by theorem 1). That is, if the hole is filled in by additional πnetwork, it would need to involve a topological "defect" (making it other than a pure polyhex, such as addressed here).
Theorem 15 -Let G be an infinite pure polyhex on a surface S which has no boundary. Then S is 2 E or an infinite cylinder or a semi-infinite cylinder capped by a cross-cap.
Proof: Obviously the result is true for 2 E where G is just the honeycomb lattice. To deal with surfaces S which have some kind of "defect", we note that one could cut out a hexagon tiled region including the "defect" to obtain a surface 0 S as in the attempted proof for conjecture 14. Thus to determine such S, we proceed via an examination of the finite-area hexagon-tilable surfaces 0 S with a single boundary and check to see which can be extended via repeated circumscription of hexagons to an infinite graph. ⯀ The case of an infinite pure polyhex on a surface S with a single boundary is also of interest. If the boundary is finite, this evidently includes S as a punctured plane 2 E corresponding to graphene with some faces removed. But this choice for S also includes nano-cones and dislocationse.g., as described in. [22,41,58] Still with a finite boundary, there is a possibility of a infinite cylinder (or tube) either with an open end or with a hole. For the infinite cylinder with a hole, the hole may be made by simply removing some hexagonal faces, but also there is a possibility of a hole like a dislocation, in which case the tube is of different "types" on each side of the dislocation, and also there is the possibility that it be like a disclination of negative curvature, in which case it opens up like a funnel. If both G & its boundary are infinite, then S can evidently be a half-plane.
Here it might be commented that the case that S is a half-plane involves rather different sorts of possibilities which might be distinguished via a refinement of the notion of homeomorphism. Let ( , ) d such that there exists a finite D ∈ R for which ( , ) ( , ) d x y d φx φy D ′ − ≤ for all points , x y S ∈ . Now, for example, all finite benzenoids are boundedly homeomorphic, because we did not specify a value for D except to say that it was finite. But maybe this puts different homeomorphic surfaces into different classes, such as is considered below.
Next imagine that from the regular honeycomb network (i.e., graphene) in 2 E one obtains sectors via cuts from the center of a central hexagon straight outward through centers of surrounding hexagons. Further define an m-sector to consist of m such sectors with the i th joined to the ( 1) i + th just as in the honeycomb network, [ 1] i m ∈ − , and 1,2,3,4,... m = . The "degenerate" 0sector is defined to be a semi-infinite strip of hexagons. Each of these m-sectors is imagined to consist of very nearly regular hexagons with the surface S into which it is embedded having vanishingly small curvatures. , one sees that the boundedly homeomorphic condition implies that equally spaced points (say the graph vertices) on the two boundary segments directed away from the central apex of m S would Figure 17. In the first row a redrawing of the structure of Figure 15. In the second row a structure with the same topology but with 42 sites rather than just 18. Also it has 11 hexagons, 3 of which are "Möbius twisted" need to end up being nearly equally spaced (to within a distance D) in the mapping to 3 S . Indeed consider similar rays of points directed away from the central apex nearly along each of the additional "boundaries" of the component 1-sectors combined together to form m S . Evidently if the points within any a first boundary ray (say on the "left") and the next one in (at 60° from this "left" boundary) are to satisfy the boundedly homeomorphic condition this second ray of points would under the mapping φ end up being radially directed outward from the apex in , m S ′ and ultimately at large distances ( D >> ) from the apex be directed nearly radially outward from the apex of m S ′ & at ultimately at very nearly an angle of 60° from the "left" boundary ray of . m S ′ One could continue until finally one obtains a ray that does not match the boundedly homeomorphic condition to the "right" boundary of . m S ′ That is, the different m-sectors are not boundedly homeomorphic to one another. ⯀ A further question is whether there are any other polyhex tilable surfaces with a single infinite boundary. All these are imagined to be simply homeomorphic to the half-planeand preclude an infinite-length Möbius strip, which is not thought to make "sense". Let us comment on different pure polyhexes which are nevertheless boundedly homeomorphic. Two such examples are indicated in Figure 18, each of which is boundedly homeomorphic to 3 S . In the first "stepped" example of Figure 18, one can imagine fusing a couple 0-sectors to a 3-sector, and there is a homeomorphism which takes the i th boundary ring of this first example to the i th boundary ring of 3 S , and similarly on into the interior. Thence one sees that boundaries that such a "notched" boundary involving the addition of say d 0sectors each with the ends of the 0-sectors close by ends up being boundedly homeomorphic with D d . In the second translationally symmetric example in Figure 18, there are additional hexagons periodically attached to 3 S .
In such an example, one can imagine a homeomorphism φ in which each additional ring α along with the rings to which α is attached are mapped to these corresponding attachment rings in 3 S , and otherwise the other rings are mapped to corresponding rings in 3 S . Thence such a φ describes a bounded homeomorphism with D roughly the size of such an attachment region. We speculate that the different boundedly homeomorphism classes for all surfaces simply homeomorphic to the half-plane might correspond to "pie-shaped" sections of the honeycomb net with different angles ϑ from a central apex.
PROGNOSES & CONCLUSIONS
It is seen that there is a diverse class of "pure polyhex" molecular π-network structures, for possible use in various nano-devices. For these structures there are non-trivial combinatorial structure-counting relations & conditions related to the topological & geometrical features of these structures. These relations extend standard results for the special case of benzenoids, as well as some fullerenes, and a few other special cases previously studied. Here unified and extended interpretations & consequences are found, particularly in terms of combinatorial curvatures, which are believed to correlate with geometric Gaussian curvatures, and consequent geometric realizations. The results here provide further support that combinatorial curvature is a fundamental concept -helping to characterize, and understand, various possible structures. Steps are taken toward the identification of all possible types of topological surfaces which are hexagonally tilable, most especially for the case that there are either 0 or 1 boundaries to the surface. The case of no boundaries, gives rise to just a finite number of topological possibilities: (infinite) graphene, infinite nanotubes, & bucky-tori -and the nanotubes and nano-tori can be knotted. Our demand of embedding of the polyhex into a 2-dimensional manifold S which in turn is embedded into Euclidean 3-space excludes the possibility of polyhex-tiled Klein bottles (as the related S intersects itself). The case of a single boundary has been attempted to be treated in a somewhat comprehensive way, though even here leading to an infinite number of topological cases -aside from the further possibility of knotting of the boundary in Euclidean 3-space.
There has been much work with special cases of two boundaries, for instance, the case of coronoids has long been extensively contemplated (as reviewed elsewhere [18] ). Also different bracelet-like structures have been addressed, e.g, with work on cyclo-polyacenes going back some time, [32] and many more recent studies, represented by the Refs. [67][68][69][70] Most such cases considered entail holes with 0 combinatorial curvature, though there has been consideration [56,57] with this Figure 18. Indications of two infinite boundaries. The first has a single region where there is a step up in what is otherwise a "straight" zig-zag boundary. The second boundary is translationally symmetric, and can be viewed as periodically adding additional rings to the 3-sector.
curvature being non-zero. For 2 or more boundaries, the topological possibilities seem to become truly bewildering. Nevertheless some of our results still apply.
A further question concerns schemes by which to encode boundaries δ, such as we have found to be so important in understanding the possible purely polyhex structures. Recalling that vertex degrees u d ( 2 = or 3), , a boundary code ( ) δ c for δ could be defined as the sequence of values ( ) 2 u u c δ d ≡ − for the different vertices u when one proceeds along δ with the bulk of the polyhex on the right. Of course there are different starting points, and one might choose a starting point lexicographically: the sequence of digits of ( ) δ c is to be a smallest such binary number. One might have entertained also choosing the direction around δ to minimize this number (just as has been proposed [71] for benzenoids, and benzenoid polymers) -but the choice to keep the bulk of the polyhex (i.e., δ G ) on the right helps ascertain relative orientations of any different boundaries. Now ∑ is our net turn number. But really the boundary code is more to specify the shapes of the boundaries -and could be a subject of future consideration.
In dealing with nano-structures, one often wishes to deal with infinite polyhexes, say as a reference. Some of our theorems require the polyhex structure to be finite, but theorems 3, 6, & 7 do not (though some of these are only about finite boundaries -e.g., of holes in a buckytube, or graphene strip, or even a full sheet of graphene).
Another point concerns structures of a more general class, say which does not have any hexagonal faces (such as polyenes) or even if they do have hexagons, edges not in a hexagon are allowed (such as biphenyl). The given condition of local Euclideanicity would presumably be relaxed, to allow "strings" to be attached to our current sort of surface. This could be a subject of future development. A sort of theoretical development somewhat parallel to what we have done here might be made, deleting these strings (of edges not in any hexagon) and investigating the remnant pure polyhex network. Some (and perhaps much) of the commentary on combinatorial curvature would remain intact, but the identification of degree-2 sites with H atoms would need to be modified (to allow either H atoms or "strings" to be there. This also is left for later. Though a diverse range of structures are here illuminated, it seems that there remains much more to do, as concerns geometric structure, but especially as concerns electronic structure. Even general results for the last item involving the simple Hückel model (involving adjacency-matrix eigenspectra) and resonance theory (involving perfect or nearperfect matchings) should be of interest. The strength and stability of the bonds formed in these polyhex species, indicates a high potential for use in nano-science. We see the study of pure polyhexes as but barely begun, though there has been extensive work on numerous subclasses, by numerous numbers of researchers.
Here we would like to mention three prominent such researchers (who are chemical graph theoreticians): Dr. Prof. Milan Randić, Dr. Ed Kirby, and Prof. Mircea Diudea (each of whom at least one of us count as close friends). We dedicate this work to them. Notably there many more chemical graph theoreticians long active in this area: N. Trinajstić, A. T. Balaban, I. Gutman, H. Hosoya, S. J. Cyvin, J. R. Dias, W. C. Herndon, as well as several close colleagues, and undoubtedly several more whom we have not mentioned. Even further this general area (of benzenoids & beyond) has received much attention, with our current effort being to open up the field a bit more in a general mathematical framework. | 12,402.6 | 2021-01-01T00:00:00.000 | [
"Mathematics"
] |
Preparation of polyaniline/graphene coated wearable thermoelectric fabric using ultrasonic-assisted dip-coating method
The use of thermoelectric fabrics for powering wearable devices is expected to become widespread soon. A thermoelectric fabric was prepared by coating nanocomposite of polyaniline/graphene nanosheets (PANI/GNS) on a fabric. Four samples of the fabric containing different wt% of GNS (0.5, 2.5, 5, and 10) were prepared. To characterize the samples, Fourier transform infrared (FTIR) spectra, attenuated total reflectance-Fourier transform infrared (AT-FTIR) spectra, field-emission scanning electron microscopy (FE-SEM), electrical conductivity and Seebeck coefficient measurements were used. The electrical conductivity increased from 0.0188 to 0.277 S cm−1 (from 0.5 to 10 wt% of the GNS in PANI/GNS nanocomposite). The maximum coefficient of Seebeck was 18 µV K−1 with 2.5 wt% GNS at 338 °C. The power factor improvement was from 2.047 to 3.084 μW m−1 K−2 (0.5–2.5 wt% GNS).
Introduction
Considering the population growth and significant increases in welfare of most populated countries, the energy crisis has become an ever more crucial phenomenon. There are several solutions to this crisis such as solar cells, biomass, wind, and so on [1]. Another solution involves recycling of waste energies and inventing energy-harvesting technologies that can harvest various forms of energy from the environment dynamically and convert it to electricity. Recently, piezoelectric and thermoelectric materials have been used in this regard. For instance, as the Internet of Things (IoT) becomes more popular, the need for advance sensors will also become more and more crucial [2][3][4]. Most of the waste energy is basically the thermal energy from factories, transportation vehicles and residential units. Thermoelectric materials can convert heat to electricity directly. This phenomenon was first observed by Seebeck in 1821 [5], and is characterized by the formula ZT = S 2 σT/κ, where S, σ, κ and T, stand for Seebeck coefficient, electrical conductivity, thermal conductivity, and absolute temperature, respectively [6]. Another method to describe thermoelectric property is the power factor (PF = S 2 σ) [7].
As in semiconductors, the thermoelectric materials are also divided into two types; n-type and p-type. In the n-type thermoelectrics, the current generation results as the electrons move to the cold side, while in the p-type, the current generation takes place as the holes move to the cold side. Among the most important inorganic materials that have ever been used in thermoelectrics are Bi 2 Te 3 , PbTe, GeTe, and Sb 2 Te 3 [8][9][10][11][12][13]. However, the use of these materials has its own limitations such as high cost and low flexibility as well as the complexity of the production methods [14][15][16][17][18].
The aforementioned cases focused on construction of thermoelectric composites, but recently, researchers have been studying the topic of wearable thermoelectric materials. In 2015, Du et al. [26] coated PEDOT:PSS on polyester fabric and in 2017, Yong Du et al. [27] coated PEDOT:PSS and 15 wt% graphite on polyester fabric achieving 0.025 μW m −1 K −2 power factor.
In this research work, we fabricated PANI/GNS coated polyester/linen fabric and studied its characteristics. Four samples of the fabric containing different wt% of GNS (0.5, 2.5, 5, and 10) were prepared. The materials and methods used in this work are scalable and can easily be commercialized. Besides the advantage of affordable materials and methods, the results also showed that the Seebeck values for the produced samples are comparable with those of other works reported in the literature.
Materials and methods
Synthesis grade aniline monomer double distilled under vacuum and stored at 0 °C from Sigma-Aldrich, ammonium peroxydisulfate (APS) from Sigma-Aldrich, GNS from Daejeon, South Korea, hydrochloric acid (HCl) and acetone from Merck, deionized (DI) water, commercial polyester and yarn fabric were used in this study.
Polymerization
Initially GNS was dispersed in 1 M HCl by ultrasonic horn for 30 min at room temperature. Then aniline monomer was added to the HCl/GNS suspension and was stirred for 1 h under nitrogen atmosphere at 0-4 °C. APS was dissolved in 1 M HCl and was added to the HCl/GNS/aniline suspension slowly (to prevent flash polymerization) and was continuously stirred for 6 h in nitrogen atmosphere at 0-4 °C. Finally, it was kept at 0-4 °C for 24 h without any movement to complete polymerization. The completely polymerized product was centrifuged for 15 min to obtain pure PANI/ GNS composite nanoparticles and eliminate the undesirable aniline oligomer. Then acetone was added to the composite nanoparticles and centrifuged for 15 min again, and the extra diluent was removed. This was repeated twice. Finally, a suspension was made by dispersing PANI/GNS composite nanoparticles (PANI/GNS suspension) in acetone by stirring. Schematic of nanocomposite synthesis process and schematic of PANI/GNS composite nanoparticles synthesis are shown in Fig. 1a and b, respectively.
Preparation of PANI/GNS coated fabric
The as-received fabric was cut into several strips (20 mm × 10 mm). All strips were washed in DI water to remove any contamination. For coating, the fabric strips were dropped in PANI/GNS suspension and sonicated for 1 h. Then the coated fabric strips were removed from the PANI/GNS suspension and dried in an oven at 60-70 °C for 15 min. These steps were repeated twice for achieving the final product (thermoelectric fabric nanocomposite).
Characterization
The surface morphology of the thermoelectric fabric samples was studied by field emission scanning electron microscopy (FE-SEM) model TEScan-Mira III. To verify the desired synthesis of nanocomposites, Fouriertransform infrared spectroscopy (FTIR) was used. Attenuated total reflection (ATR) test was used to demonstrate the presence of nanocomposite on the fabric. A standard four-probe method was used for the electrical conductivity measurements. A Keithley 487 picoammeter/voltage source was used for the measurements. Seebeck coefficient of the samples was measured by an automatic apparatus described comprehensively in [28,29].
Results and discussion
Weight measurement revealed that the percentage of PANI/GNS on the fabric surface was proportionate in all four samples containing GNS. The FTIR spectra of PANI and PANI/GNS are shown in Fig. 2. In the region 400-4000 cm −1 , there are five main peaks at wavelengths of 802, 1145, 1303, 1508, and 1597 cm −1 indicating good agreement with those reported in other works [16,17]. The peaks appearing in the 1500-1600 cm −1 range correspond to the C-N bond of quinoid ring (1597 cm −1 ) and benzenoid ring (1508 cm −1 ) vibrations in the emeraldine salt. The intensity ratio of these two peaks quinone/benzene for synthesized PANI is less than unity. The peaks observed at 1303 and 802 cm −1 are related to the C-N bond of secondary aromatic amines and the aromatic C-H flexion graft [13]. The peak appearing at 1145 cm −1 is related to the protonation of nitrogen atoms in the imine ring of quinones [13,17], and is known as an electron-like band.
There are significant differences between the PANI/GNS and PANI spectra. Interestingly enough, however, there is a little difference in the peaks of those containing different wt% GNS. In the PANI/GNS spectra, the ratio of the quinone/benzene intensity is more than unity, indicating that PANI is richer in this case than in the quinone rings. In other words, the interaction of PANI-graphene increased the quinone ring structure [19]. The position of peaks of quinone and benzene also changed. Another obvious difference in the N-H bond is in the 3402 cm −1 . This peak is strong in the nanocomposites, while very weak in PANI. The reason for the strength of this peak is not well known, but the interaction between PANI and GNS may seem to lead to chain transfer [20]. The intensity and position of other peaks in the PANI/GNS nanocomposites have also changed as compared to PANI. For instance, the intensity of 1145 cm −1 peak has increased, indicating that the interaction between PANI and GNS has facilitated the transfer process and increased the degree of electronic stability in the PANI chain, which ultimately resulted in increased electrical conductivity in the PANI/GNS [13].
To determine the presence of nanocomposite on the fabric, AT-FTIR spectrum was taken of the 2.5 wt% GNS sample. The result is shown in Fig. 3 and in Table 1.
FE-SEM images of the specimens were taken at different magnifications. Examples include impregnated fabric samples by PANI/GNS containing 0.5, 2.5, 5, 10 wt% GNS. The surface morphology and particle size on the fabrics containing varying wt% GNS are shown in Figs. 4 and Fig. 2 FTIR spectra of PANI and PANI/GNS nanocomposites: a 0.5 wt% GNS, b 0 wt% GNS, c 2.5 wt% GNS, d 5 wt% GNS, e 10 wt% GNS 5, and it can be said that with increasing wt% GNS, the particle size distribution improved, but only up to 5 wt%, then a composite film formed on the fabric. By examining Fig. 5, it is obvious that all composites are in the nano size range.
Electrical conductivity
The result of electrical conductivity of PANI/GNS coated fabric are shown as a function of GNS loading in Fig. 6.
As shown in Fig. 6, with increment in GNS content, electrical conductivity increased. The increase in electrical conductivity was more pronounced up to 2.5 wt% GNS. This improvement in the composite electrical conductivity can be attributed to the high electrical conductivity of GNS; their high carrier movability, as well as their uniform distribution throughout the PANI matrix [27,38,39]. In other words, GNS acted as bridge helping in carrier transport by π-π interactions and thus enhanced carrier movability, leading to increased electrical conductivity on the fabric surface. The optimal enhancement in electrical conductivity occurred at 2.5 wt% GNS, beyond which GNS may interrupt the carrier transportation in the PANI matrix chains [38].
Seebeck coefficient
The Seebeck coefficient, shown in Fig. 7, was obtained by measuring the difference in voltage produced per unit of temperature difference created on both sides of the coated fabric samples. As shown in Fig. 7a, the Seebeck coefficient increased very sharply at around 2.5 wt% GNS followed by downward trend. The improvement of Seebeck coefficient could be illustrated in the sense of the energy filtering of carriers by inducing crystallization that may occur during PANI/GNS nanocomposite preparation [40,41]. To estimate Seebeck coefficient for a p-type semiconductor, the figure of merit equation is used: where n is carrier concentration, q is charge, r is scattering parameter, m* is effective mass, k B is Boltzmann constant, T is temperature, and h is Planck's constant. Presented results in literature also shows that Seebeck coefficient decreases by increasing carrier concentration, usually [42]. The increase and decrease trends in Seebeck coefficient have also been attributed to PANI-type transition from n-to p-type in the presence of GNS for around 1 wt% [40]. A study has noted that electrical conductivity and Seebeck coefficient cannot increase simultaneously due to narrow energy transport level ( E T ) and Fermi level ( E F ). In this study, however, both the Seebeck coefficient and electrical conductivity of PANI/ GNS increased simultaneously, indicating that a conventional model based on the band theory or the electron-phonon scattering cannot explain the conduction mechanism [43]. The simultaneous increase in both properties has been reported in other studies too [39,44,45]. The reason might be due to the electronic-structure of the polymers [43]. In other words, the molecular ordering of polymer chains affect their electronic-structure, causing an increase in charge carrier mobility, and leading to simultaneous increase in both Seebeck coefficient and electrical conductivity [46].
As noted above, Seebeck coefficient, according to Eq. 1, is function of temperature. In Fig. 7b, the Seebeck coefficient is shown for two samples (2.5 and 5 wt% GNS) that showed the highest Seebeck coefficient and electrical conductivity among the four samples. Figure 7b illustrates the significance of temperature on Seebeck coefficient; the Seebeck coefficient for 2.5 wt% GNS sample increased from 12 to 18 µV K −1 (50%) in the temperature range of 303-338 K, while for the 5 wt% GNS sample increased from 4.5 to 13.2 µV K −1 (193%) in the same temperature range. The upward progression of the Seebeck coefficient with temperature for the PANI/GNS composite of p-type has been reported in another study, too [20,47].
Power factor
The effects of GNS loading on the power factor of the PANI/GNS nanocomposite fabric are presented in Fig. 8. It is obvious that the power factor was very low up to around 1.5 wt% GNS, which is due to very low conductivity in the samples. Within the range of 1.5-2.5 wt% GNS, the power factor very sharply increased and reached its maximum value. Considering that 2.5 wt% GNS provided the maximum power factor, it was used as the appropriate wt% GNS for coating the fabric strips and preparing the nanocomposite materials.
Conclusions
Thermoelectric flexible fabric was prepared by applying a thin layer of PANI/GNS on the surface of a woven fabric by ultrasonic assisted dipping and slurry aqueous polymerization processes. The Seebeck coefficient results revealed that PANI/GNS induced thermoelectric properties to the flexible fabric up to 193%, and over 190% enhancement in, the power factor, making this a potential fabric for powering wearable devices and waste energy recycling. Fig. 6 Electrical conductivity of coated fabric as a function of GNS loading measured at room temperature (300 K) Fig. 7 a Seebeck coefficient as a function of wt% GNS measured at average temperature (338 K). b Seebeck coefficient for 2.5 and 5 wt% GNS samples as a function of average temperature Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/. Fig. 8 Power factor as a function of wt% GNS measured at average temperature (338 K) | 3,305.4 | 2020-10-04T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Impact of Early Conventional Treatment on Adult Bone and Joints in a Murine Model of X-Linked Hypophosphatemia
X-linked hypophosphatemia (XLH) is the most common form of genetic rickets. Mainly diagnosed during childhood because of growth retardation and deformities of the lower limbs, the disease affects adults with early enthesopathies and joint structural damage that significantly alter patient quality of life. The conventional treatment, based on phosphorus supplementation and active vitamin D analogs, is commonly administered from early childhood to the end of growth; unfortunately, it does not allow complete recovery from skeletal damage. Despite adequate treatment during childhood, bone and joint complications occur in adults and become a dominant feature in the natural history of the disease. Our previous data showed that the Hyp mouse is a relevant model of XLH for studying early enthesophytes and joint structural damage. Here, we studied the effect of conventional treatment on the development of bone and joint alterations in this mouse model during growth and young adulthood. Mice were supplemented with oral phosphorus and calcitriol injections, following two timelines: (i) from weaning to 3 months of age and (ii) from 2 to 3 months to evaluate the effects of treatment on the development of early enthesophytes and joint alterations, and on changes in bone and joint deformities already present, respectively. We showed that early conventional treatment improved bone microarchitecture, and partially prevented bone and joint complications, but with no noticeable improvement in enthesophytes. In contrast, later administration had limited efficacy in ameliorating bone and joint alterations. Despite the improvement in bone microarchitecture, the conventional treatment, early or late, had no effect on osteoid accumulation. Our data underline the usefulness of the Hyp murine model for preclinical studies on skeletal and extraskeletal lesions. Although the early conventional treatment is important for the improvement of bone microarchitecture, the persistence of osteomalacia implies seeking new therapeutic strategies, in particular anti-FGF23 approach, in order to optimize the treatment of XLH.
X-linked hypophosphatemia (XLH) is the most common form of genetic rickets. Mainly diagnosed during childhood because of growth retardation and deformities of the lower limbs, the disease affects adults with early enthesopathies and joint structural damage that significantly alter patient quality of life. The conventional treatment, based on phosphorus supplementation and active vitamin D analogs, is commonly administered from early childhood to the end of growth; unfortunately, it does not allow complete recovery from skeletal damage. Despite adequate treatment during childhood, bone and joint complications occur in adults and become a dominant feature in the natural history of the disease. Our previous data showed that the Hyp mouse is a relevant model of XLH for studying early enthesophytes and joint structural damage. Here, we studied the effect of conventional treatment on the development of bone and joint alterations in this mouse model during growth and young adulthood. Mice were supplemented with oral phosphorus and calcitriol injections, following two timelines: (i) from weaning to 3 months of age and (ii) from 2 to 3 months to evaluate the effects of treatment on the development of early enthesophytes and joint alterations, and on changes in bone and joint deformities already present, respectively. We showed that early conventional treatment improved bone microarchitecture, and partially prevented bone and joint complications, but with no noticeable improvement in enthesophytes. In contrast, later administration had limited efficacy in ameliorating bone and joint alterations. Despite the improvement in bone microarchitecture, the conventional treatment, early or late, had no effect on osteoid accumulation. Our data underline the usefulness of the Hyp murine model for preclinical studies on skeletal and extraskeletal lesions. Although the early INTRODUCTION X-linked hypophosphatemia (XLH) is the most common form of genetic rickets. This rare disease is caused by inactivating mutations in the phosphate-regulating endopeptidase homolog X-linked (PHEX) gene, characterized by chronic hypophosphatemia. Impaired function of PHEX leads to elevated levels of phosphaturic fibroblast growth factor 23 (FGF23) resulting in renal phosphate-wasting hypophosphatemia and low levels of calcitriol [1,25(OH) 2 D 3 ] via the inhibition of 1α-hydroxylase and the activation of 24-hydroxylase (Kinoshita and Fukumoto, 2018).
Clinically, children with XLH are characterized by progressive skeletal deformities (leg bowing, waddling gait, poor growth, and disproportional short stature), dental abscesses, and craniosynostosis. Adult patients present various symptoms of osteomalacia such as bone pain, insufficiency fractures, and myopathy. In addition, adults may develop hearing loss, odontomalacia, mineralizing enthesopathy, and osteoarthropathy (Linglart et al., 2014). In adult patients with XLH, the aforementioned manifestations significantly reduce quality of life (Che et al., 2016;Steele et al., 2020). We showed that Hyp mice, a murine model of XLH, developed early osteoarticular lesions and the severity of these lesions gradually increased over 12 months, demonstrating the relevance of this murine model for osteoarticular preclinical studies (Faraji-Bellee et al., 2020).
Current medical treatment of XLH consists of oral active vitamin D [calcitriol or 1α-(OH)D 3 ] and multiple daily doses of phosphate supplements. To optimize the final outcomes (recovery of rickets, normalization of elevated alkaline phosphatase (ALP) levels, growth improvement, restoration of leg deformities, and dental mineralization), treatment should be started as soon as the diagnosis of XLH is made. Supplementation is commonly prescribed from early childhood to the end of growth but is also essential in certain periods of adult life such as pregnancy or breastfeeding, before planned surgical interventions, and in all symptomatic patients with XLH (recurrent dental abscesses, fractures, etc.) (Linglart et al., 2014). Nonetheless, despite this treatment during growth, musculoskeletal symptoms due to enthesopathies and osteoarthritis remain the major manifestations in the clinical progression of XLH. Further, there is a paucity of data on the effects of conventional treatment (i) if started early, on the abnormal bone phenotype in XLH, beyond the traditional goals, and (ii) if started early, on prevention of and/or recovery from osteoarticular manifestations of XLH such as osteoarthritis and enthesopathies. The main limitations of the studies on the effect of conventional treatment in XLH performed so far are small sample sizes and their retrospective cross-sectional observational design that does not take into account the age when the treatment is started.
Therefore, we designed a prospective study in a murine model of XLH (Hyp mice) aiming to evaluate whether, if started early in life, conventional treatment is capable of preventing and/or ameliorating the skeletal and extraskeletal manifestations of hypophosphatemia.
Mice
The Hyp mouse model B6.Cg-Phex Hyp/J was used in this study. Heterozygous breeding was carried out and tail snips were collected for genotyping. DNA was extracted from the snips using DNeasy Blood and Tissue Kit (Qiagen, France) and the genotype was determined by PCR using primers for the Phex gene. Wildtype (WT) and Hyp littermate male mice were used in experimental procedures. All experiments were performed with a protocol approved by the Animal Care Committee of the Université de Paris (project agreement 20-008, APAFiS #27827 N • 202001171429974). Animals were maintained in accordance with the ethical protocol approved by the Animal Care Committee of French Veterinary Services (DPP Haut de Seine, France: agreement number D9204901). All mice were housed under standard conditions of temperature (23 ± 2 • C) with a 12:12 h light-dark cycle and unlimited access to water and standard pelleted food (1.20% calcium and 0.83% phosphorus, rodent diet 3800PMS10, Provimi Kliba, Kaiseraugst, Switzerland).
Two groups of treated Hyp mice were carried out in this study (n = 6 mice per group): (1) to study the effect of long-term treatment on skeletal/extraskeletal manifestations if started early during growth, Hyp mice received the conventional treatment during the whole study period from the juvenile stage starting at weaning, which occurs at 3 weeks (W), to the beginning of the mature adult stage which occurs at 3 months (M) of life and (2) Hyp mice which received the conventional treatment from M2 (corresponding to the end of juvenile stage) to M3 to study the effect of the conventional treatment on skeletal/extraskeletal manifestations if started later during growth. Both treated groups of Hyp mice were compared to control WT and Hyp mice, which were not given conventional treatment (n = 6 per group).
The conventional treatment consisted of intraperitoneal injections of calcitriol 175 pg/g every other day [1,25(OH)2D3, Cayman Laboratory] and phosphate supplementation (phosphate-enriched water 1.93 g phosphate element per liter of beverage, Phosphoneuros). Doses of calcitriol were adjusted once a week according to the animals' weight.
In vivo Study
The Growth Parameters Growth parameters (body weight and total length) were measured once a week. Precision scales and a graduated ruler were used for weight and length measurements, respectively. Additionally, the length of the rachis was measured on X-ray micro-computed tomography (Micro-CT) images.
X-Ray Micro-computed Tomography Analysis
Wildtype and Hyp mice were scanned at W3, M2, and M3 using a high-resolution X-ray micro-CT system (Quantum FX Caliper, Life Sciences, Perkin Elmer, Waltham, MA, United States) hosted by the PIV Platform (UR2496, Montrouge, France). Standard acquisition settings were applied (setting the voltage at 90 kV and intensity at 160 mA), and scans were performed with a field of view alternatively focused on the right paw (scan time of 180 s and voxel size of 20 µm 3 ), focused on the hip (120 s and 50 µm 3 ), or covering the full body (36 s and 236 µm 3 ). Micro-CT datasets were analyzed using the built-in multiplanar reconstruction tool, Osirix 5.8 (Pixmeo, Switzerland), to obtain time series of images aligned anatomically for each region of each animal.
Axial and coronal images of the sacroiliac and hip joints, sagittal images of the spine, and axial and sagittal images of the hind paw were reconstructed. The following were evaluated: hip osteoarthritis (defined as the presence of osteophytes on joint margins, narrowing of the joint space or altered shape of the bone ends); enthesopathies (defined as new bone formation at enthesis sites) on the iliac bone, spine or paw; erosion of the sacroiliac joint and periarticular calcification. The analysis was focused on these areas because they are the most frequent sites of structural involvement in adults with XLH. Erosion of the sacroiliac joints was assessed following Faraji-Bellee et al. (2020) protocol, which was developed by rheumatologists specialized in the field of rare bone diseases and bone inflammatory diseases. The reader was blind to the status of the mouse (Hyp vs WT) but was aware of the different analysis time points (W3, M2, M3). A semi-quantitative score was established, ranging from 0 (normal) to 3 (most severe feature assessed) for sacroiliac erosions (see Supplementary Table 1).
The angle of dorsolumbar kyphosis of mice was defined for each mouse at W3, M2, and M3. Using sagittal images of mice spines from full-body CT scans, endplate orientations of thoracic and lumbar vertebrae have been marked using ImageJ (Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, MD, United States 1 1997-2016). The apical thoracic vertebrae of the rachis were identified and the angle of kyphosis defined by the means of (1) the tangent to the lower vertebral endplate of the fourth lower vertebra and (2) the tangent of the upper vertebral endplate of the fourth upper vertebra, from the apical vertebra. A script in MATLAB (MATLAB R2012b, The MathWorks Inc., Natick, MA, United States, 2000) was used to measure these angles at each age in each of the mice. 1 https://imagej.nih.gov/ij/ The trabecular bone was analyzed at the distal metaphysis of the femur. The following parameters were used: bone volume/total volume (BV/TV) ratio, trabecular number (TbN), trabecular separation (TbSp), trabecular thickness (TbTh), and trabecular pattern factor (TbPf) (Bouxsein et al., 2010).
Alkaline phosphatase was used to reveal the layer of osteogenic cells by incubating the sections with naphthol ASTR phosphate (Sigma-Aldrich) and diazonium fast blue RR salt (Sigma-Aldrich) for 30 min at 37 • C (pH 9) in the presence of MgCl 2 .
Immunohistochemistry
Sections embedded in methyl methacrylate were deplasticized in methyl glycol acetate. After rehydration in a graded ethanol series to pure distilled water, non-specific peroxidases were blocked for 15 min with ortho-periodic acid and background activity was blocked at room temperature using 5% bovine serum albumin (BSA). Sections were then incubated in a humid atmosphere for 12 h at room temperature in a dark chamber with primary antibody against sclerostin (SOST) (R&D Systems, Minneapolis, MN, United States) diluted at 5 µg/mL. Sections were washed and then incubated with polyclonal antigoat immunoglobulin (Dakocytomation) diluted at 1/200 for 1 h at room temperature in a dark chamber. Peroxidase activity was detected using diaminobenzidine (DAB) substrate (Sigma-Aldrich). Control incubations to assess non-specific staining consisted of the same procedure except that the primary antibody was replaced by non-immune serum.
Statistical Analysis
Statistical analysis was carried out and graphs plotted with GraphPad Prism for Windows, version 7.0. The distribution of variables was tested with Kolmogorov-Smirnov test. The results are expressed as the mean ± SD for continuous variables and comparisons being performed using ANOVA. Data are expressed as median with interquartile range in Figures 4B,C and mean ± SD in Figure 4D using Mann-Whitney and Student's t-test for statistical analysis, respectively. P-values of less than 0.05 were considered significant.
Effect of Conventional Treatment on Growth and Dorsolumbar Kyphosis
Seeking to understand the effect of the conventional treatment on parameters such as growth and dorsolumbar kyphosis, we analyzed changes in spine length and changes in dorsolumbar kyphosis (expressed in degrees of curvature) by group (Figure 1).
Early Start of Conventional Treatment
At baseline, Hyp mice had significantly shorter spine lengths than WT animals ( Figure 1B). After starting the treatment, Hyp mice in the early treatment group showed much larger gains in length than WT or untreated Hyp mice (Figure 1B). On the other hand, micro-CT analysis showed that at M3, WT mice were significantly longer than treated Hyp mice, and there were no significant length differences between treated and untreated Hyp mice ( Figure 1B).
Regarding dorsolumbar kyphosis, spinal kyphosis increased between baseline and the end of the study in untreated Hyp mice whereas it decreased over time in the WT group. No significant difference in spine curvature was observed at M3, i.e., at the end of the study, between the Hyp mice treated early and WT groups ( Figure 1B).
Late Start of Conventional Treatment
Hyp mice did not show a significant increase in length after initiating the treatment at M2, and no differences in spine length were seen between treated and untreated Hyp mice at M3 (Figure 1C). Stature growth was not statistically enhanced by conventional treatment when initiated as an adult. Nevertheless, at M3, Hyp mice with treatment showed a gain in length comparable to that in WT animals ( Figure 1C).
Regarding dorsolumbar kyphosis, though not statistically significant, spinal kyphosis tended to decrease in Hyp mice after initiating treatment, following the same pattern as in the WT group. In contrast, in untreated Hyp mice, kyphosis increased over time (Figure 1C).
Hyp mice with late treatment showed a spine length and curvature comparable to Hyp mice with early treatment (Figure 1D).
Effect of Conventional Treatment on Bone Microarchitecture
Early Start of Conventional Treatment 3D reconstructed images demonstrated that treated Hyp mice had bone microstructure very similar to that in the WT group by the end of study (M3) (Figure 2A). Regarding the parameters of bone quantity, both BV/TV and TbN were significantly higher (p < 0.05) by M2 in Hyp mice with early treatment than in untreated animals, and this difference was maintained at M3 ( Figure 2B). There were no differences in TbTh between WT and Hyp mice (treated or not) at any point in the follow-up (Supplementary Figure 1A). Regarding the trabecular connectivity (expressed as TbPf), there were significantly more connections between bone trabecula (p < 0.05) in treated than untreated Hyp mice, though this difference was only observed at M3 (Supplementary Figure 1A).
Late Start of Conventional Treatment
3D reconstructed images demonstrated a slightly greater bone mass in treated than untreated Hyp mice, although the bone microstructure in treated Hyp mice was far from that in the WT group (Figure 2A). The study of bone microarchitecture showed that there were no statistically significant differences in BV/TV, TbN, TbTh, or TbPf between treated and untreated Hyp mice by the end of study (M3) (Figure 2C and Supplementary Figure 1B).
Compared to Hyp mice in the early treatment group, Hyp mice treated late showed lower BV/TV and TbN at M3, although the statistical significance was not reached (Figure 2D). There were no differences in TbTh and TbPf between Hyp mice with early or late treatment groups (Supplementary Figure 1C).
Effect of Conventional Treatment on Bone and Joint Structural Damages
Bone and Joint Alterations in the Axial Skeleton (Sacroiliac Joint)
Early start of conventional treatment
Micro-CT images, performed at baseline (W3), showed alterations in the sacroiliac joint in Hyp mice, in comparison to WT (Figure 3A). Two out of six untreated Hyp mice and two out of six Hyp mice in the early treatment group already displayed a high sacroiliac joint score for erosion at baseline (score between 2 and 2.5 out of 3 which means <25% and ≥25% to <50% of the articular surface area affected, respectively) (Supplementary Table 2). At M2 and M3, multiple erosions and an irregular and blurred appearance of the cortical margins were noted in untreated Hyp mice, whereas Hyp mice started on the conventional treatment early showed fewer erosions and a more regular appearance of the sacroiliac joint, compared to that observed in WT animals ( Figure 3A). These results were confirmed by the sacroiliac joint scores for erosion at M3 which were lower in Hyp mice with early treatment compared to untreated Hyp mice. At M3, three of six Hyp mice with early treatment had a mean score of 0, whereas none of untreated Hyp mice had such a mean score (Figures 3B-D
Late start of conventional treatment
Multiple erosions and irregular and blurred cortical margins of sacroiliac joints were noticed on micro-CT either at M2 or M3 in untreated Hyp mice, in comparison to WT animals. Hyp mice given treatment, even when initiated late, showed a slight trend of amelioration of these alterations present at M2 before treatment, nonetheless, there were no differences in sacroiliac joint score for erosion between untreated Hyp mice and Hyp mice with late treatment (Figures 3A,C and Supplementary Table 3).
Compared to Hyp mice in the early treatment group, Hyp mice treated late had a significant higher score at M3 (Figure 4D and Supplementary Tables 2, 3).
Effect of conventional treatment on bone markers
To confirm the micro-CT results and study the pathophysiological mechanism, we performed histological analyses of sacroiliac joints of the 3-month-old mice (Figures 4, 5).
Von Kossa and Masson's Trichrome staining of the sacroiliac joint confirmed altered mineralization with accumulation of osteoid in untreated Hyp mice, compared to WT mice. The accumulation of osteoid was still evident in Hyp mice with early or late treatment (Figure 4). As expected, Hyp mice showed strong ALP staining (a marker of ALP activity), especially at the periphery of the osteoid. In contrast, in untreated Hyp mice, TRAP staining indicated that osteoclasts were present but concentrated in large "clusters" in the peripheral zone of sacroiliac joint. Both early and late start of conventional treatment in Hyp mice hardly modified ALP/TRAP activity (Figure 5).
We further studied the expression of sclerostin as a marker of differentiated osteocytes and bone turnover. Immunohistochemistry showed sclerostin expression in osteocytes of subchondral bone in WT mice (Supplementary Figure 2). Interestingly, untreated Hyp mice and Hyp mice treated late showed only faint sclerostin expression. In contrast, in the Hyp mice given early treatment, sclerostin expression was somewhat higher, and this finding is suggestive of improved regulation of bone turnover by terminally differentiated osteocytes.
Peripheral Enthesophytes (Calcaneus) Early start of conventional treatment
Micro-CT images of hind paws showed similar features in 3-month-old Hyp mice of both treated and untreated groups (Supplementary Figure 3). Histological sections revealed mineralizing fibrochondrocytes expanding into both Achilles tendon and plantar fascia ligament insertions of calcaneal tuberosity in Hyp mice (Figure 6).
Late start of conventional treatment
The multiple calcaneal enthesophytes present in untreated Hyp mice were also seen in Hyp mice on conventional treatment started late (Figure 6).
Effect of conventional treatment on bone markers
Alkaline phosphatase and TRAP staining in the area of the enthesophytes at the insertion of the Achilles tendon and plantar fascia ligament was similar in Hyp mice that received treatment, either early or late, and in both cases more than that observed in WT mice (Figure 6). That is, these alterations persisted in Hyp mice regardless of early conventional treatment. Neither early nor late conventional treatment seems able to prevent excessive bone mineralization at the Achilles tendon (enthesopathies) and restore ALP and TRAP activity (Figure 6).
In WT mice, sclerostin labeling was not observed in the Achilles tendon. In contrast, labeling was observed in both untreated and treated (early or late) Hyp mice at the insertion of the tendon, corresponding to the area where enthesopathies develop (Supplementary Figure 2). In Hyp mice on conventional treatment started early, there is improvement in extraskeletal manifestations at M2 and M3, but without complete restoration ad integrum. No improvement in extraskeletal manifestations is noticeable in Hyp mice on conventional treatment started late. Sacroiliac score evaluated at Micro-CT scans at M3 (B) in WT mice, untreated Hyp mice, and Hyp mice on conventional treatment started early, (C) in WT mice, untreated Hyp mice, and Hyp mice on conventional treatment started late, and (D) in Hyp mice on conventional treatment started early compared to Hyp mice on conventional treatment started late. W3: 3 weeks; M2: 2 months; M3: 3 months; *p < 0.05; **p < 0.01.
DISCUSSION
The main focus of conventional treatment (phosphate supplements and active vitamin D) is growth restoration, and the impact of this type of treatment on XLH manifestations has been poorly investigated after the end of growth. In this context, we studied its effect on the main manifestations of XLH, in particular, skeletal features. Our study is the first to demonstrate that conventional treatment of XLH started early significantly improves bone microarchitecture and sacroiliac joint lesions but has little effect on enthesopathies assessed at the calcaneus. Empirically, it is assumed that current medical treatment of XLH should be started as early as possible to optimize final clinical outcomes in children (Linglart et al., 2014;Haffner et al., 2019). Nonetheless, this has yet to be proven in prospective studies. We have demonstrated that early conventional treatment has a significantly positive effect on impaired skeletal development. Micro-CT analysis showed that Hyp mice with early treatment showed a progressive increase in parameters of bone quantity (BV/TV, TbN) and structure (TbPf) resulting in a bone microarchitecture similar to WT mice. Interestingly, patients with XLH show compromised trabecular microarchitecture despite conventional treatment received since childhood (Cheung et al., 2013;Shanbhogue et al., 2015;Colares Neto et al., 2017). The major limitation of these studies is their retrospective design without taking into account age at the start or the duration of conventional treatment.
Importantly, our histological analyses showed a persistence of osteoid accumulation in treated Hyp mice by Von Kossa staining. Regardless of improvement of bone microarchitecture, neither early nor late treatment cured osteomalacia. However, restoration of sclerostin expression at the sacroiliac joint in Hyp mice in the early treatment group suggested that terminally differentiated osteocytes may regulate bone turn over, even if ALP and TRAP activities presented similar features in treated and untreated Hyp groups.
The effect of conventional treatment on skeletal manifestations depends on the bone involved. A significant reduction in the sacroiliac alterations was demonstrated in Hyp mice treated early, compared to Hyp mice treated late. Regarding the peripheral skeleton, conventional treatment, both early and late, has little effect on enthesopathies. Hyp mice in both treatment groups had persistent heel enthesophytes at the end of the study. The ALP expression in Achilles tendon in treated Hyp mice ultimately confirmed the expansion of mineralizing fibrochondrocytes in ligaments. Interestingly, we noticed sclerostin expression by fibrochondrocytes at the tendon of Hyp mice treated early. These results are concordant with Karaplis et al. (2012) findings and also the clinical study by Connor et al. (2015), which also demonstrated the limited effect of conventional treatment on enthesopathies in patients with XLH. From a pathophysiological point of view, hyperplastic fibrocartilaginous chondrocytes in the tendon are independent of improved mineralization of the bony part at the calcaneus. In this condition, newly mineralized bone is likely not strong enough to prevent the expansion of mineralizing fibrocartilaginous chondrocytes within the enthesis at the points of mechanical strain, a process that may be a compensatory mechanism to respond to biomechanical properties of poorly mineralized bone.
Nowadays, the mechanism of early osteoarthritis and enthesopathies in XLH is poorly understood. The degenerative osteoarthropathy principally depends on the accumulation of unmineralized immature bone (Liang et al., 2011;Wei and Bai, 2016). Additionally, high levels of FGF23 may promote the Wnt/ß-catenin pathway in chondrocytes which activates the genes responsible for increased chondrocyte differentiation and osteoarthritis progression. Consequently, low expression of inhibitors of the Wnt/ß-catenin pathway such as sclerostin is usually associated with the promotion of cartilage degradation (Meo Burt et al., 2018). Indeed, the results of our study confirm that accumulation of osteoid in the subchondral bone and lack of sclerostin expression are associated with the appearance and progression of numerous erosions at the sacroiliac joint in non-treated Hyp mice. On the contrary, sclerostin expression was restored in Hyp mice treated early, suggesting a link with the reduction of sacroiliac bone erosion in this group compared to Hyp mice with late treatment.
Early conventional treatment did not completely restore spine length in Hyp mice. Conventional treatment significantly increased the gain in length in Hyp mice treated early or late, though it did not have a significant impact on final length. In fact, we found no significant differences in length as assessed by micro-CT between non-treated and treated Hyp mice (early or late) at the end of the experiment. These results are in line with other studies performed in children with XLH. Specifically, healing active rickets promotes growth and after 2 years of successful treatment growth velocity is restored to its maximal potential in the majority of patients; however, 25-40% of patients with well-controlled XLH show linear growth failure despite optimized treatment (Linglart et al., 2014). It may be explained by several reasons. First, linear growth depends on bone mineralization and also on the cartilage growth plate FIGURE 5 | ALP and TRAP enzyme histochemistry of sacroiliac joint of 3-month-old WT mice, untreated Hyp mice, and Hyp mice on conventional treatment started early or late. The TRAP reaction stained for osteoclast cell activity and the osteogenic layer stained with ALP reaction indicate that both early and late start of conventional treatment in Hyp mice hardly changed ALP/TRAP activity. ALP, alkaline phosphatase; TRAP, tartrate-resistant alkaline phosphatase. (Fuente et al., 2018). Partial growth restoration might result from a poor effect of the conventional treatment on growth plate maturation. Second, the conventional treatment, being a supplementary treatment, does not have influence on other players involved in XLH pathogenesis such as high levels of FGF23 (Carpenter et al., 2010;Zhukouskaya et al., 2020) or accumulation in the extracellular matrix of other proteins or peptides (osteopontin, ASARM peptides, etc.) (Salmon et al., 2013(Salmon et al., , 2014Coyac et al., 2018).
The major strength of our study lies in its clinical implications. This is the first preclinical research demonstrating the beneficial effects of early conventional treatment in a well-designed prospective study, the findings underlining the importance of this treatment in the management of XLH.
Our results were confirmed by several different methods, from clinical manifestations to radiological and histological analysis. Overall, the data provided highlight the usefulness of the Hyp murine model for preclinical studies on skeletal and osteoarticular lesions.
CONCLUSION
We have shown that conventional treatment given since an early stage improves bone microarchitecture and prevents joint erosions, though it does not have a notable effect on the formation of enthesophytes. Despite the improvement of bone microarchitecture, the persistence of osteomalacia FIGURE 6 | Histological analysis of the calcaneus of 3-month-old WT mice, untreated Hyp mice, and Hyp mice on conventional treatment started early or late. Masson's Trichrome and Von Kossa staining of undecalcified sections of calcaneal area confirmed the cellular expansion of mineralizing fibrochondrocytes into Achilles tendon and plantar fascia ligament insertions (red arrows) of the calcaneal tuberosity in Hyp mice. ALP and TRAP staining in the area of the enthesophytes at the insertion of the Achilles tendon and plantar fascia ligament was similar in Hyp mice that received treatment, either early or late, and in both cases (red arrows) more than that observed in WT mice. ALP, alkaline phosphatase; TRAP, tartrate-resistant alkaline phosphatase.
implies seeking new therapeutic strategies. Further studies are needed to compare these outcomes with the potential benefits of new therapies such as anti-FGF23 to improve the treatment of XLH.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
All experiments were performed with a protocol approved by the Animal Care Committee of the Université de Paris (project agreement 20-008, APAFiS #27827 N • 202001171429974). Animals were maintained in accordance with the ethical protocol approved by the Animal Care Committee of French Veterinary Services (DPP Haut de Seine, France: agreement number D9204901). | 6,829.2 | 2021-02-18T00:00:00.000 | [
"Biology"
] |
Long-ranged Interaction Forces and Real Spaces Related to Them Including Anisotropic Cases
This paper is aimed to find a connection between i-dimensional spaces (i=0,..., ‘n’) and the long-range j-dimensional attractive forces (j=0,..., ‘m’) creating these spaces. The connection is fundamental and unrelated to any processes going in the spaces being studied. A theorem is formulated and strictly proved showing in which cases the long-ranged attractive forces can form real spaces of different dimensions ( i=0,...,n). The existence of the attraction between masses is defined by divergence of the vector of interaction between masses. Weak anisotropic real spaces are studied by rotating an ellipsoid for (3 ζ)D-cases when its eccentricity ε<<1. Such spaces cannot be in equilibrium, the time of their existence is substantially limited. The greater is anisotropy, the shorter is the lifetime of such substance. The latter cannot be in equilibrium, the time of their existence is substantially limited.
Introduction
As is well known from affine geometry (see, e.g., [1]), there are spaces with the admissible systems of orthogonal coordinates having a common origin, an identical unit volume and orientation. Such is our real isotropic 3d-space. Why? Because our real space could be created only owing to long-ranged attractive forces, e.g., by the forces of gravitation. An empty space, i.e., the space without any matter, can have any dimension -from zero up to 'n'. The mathematical space is empty.
The main goal of this article is to find connection between the real spaces of different dimensions i = 0,…,n and the long-range attractive forces that create these spaces and have its dimension j = 0,…,m. By the dimension of long-range attraction forces F is meant a value of the exponent j in the denominator of the formula , where m 1 and m 2 are the interacting masses (kg); k is a coefficient; r the distance between these masses (m); j(j)=1,2,3,…,m(m).
Using the result obtained for the real spaces with an integer dimension, the author is studying two specific cases connected with weakly anisotropic real spaces whose dimension differs from the integer number by a very small quantity.
Our problem should not be confused with problem that P. Ehrenfest was solving 100 years ago (see, e.g., [2 -5]). He made attempt to link the dimension of space with fundamental laws of physics but he did not concern the problems connected with the creature of spaces under influence of long-ranged interaction forces of a definite kind.
We should note the following as well. This article is only the first step in studying the spaces of any dimensions. The time will come, and the results obtained in this article will be united with results which were obtained by other researchers, e.g., by A. G. Horvath [6].
Real Spaces and Forces Creating them
Call any space containing matter a real space. Any real space has to contain sources of long-range interactions, viz. an attraction between masses in our case. The existence of these sources is defined by a divergence of the vector a of interaction between masses. For example, for our 3-D real space the divergence of a will have the following form if a is only a function of the coordinate ρ (in a spherical coordinate system): Long-ranged Interaction Forces and Real Spaces Related to Them Including Anisotropic Cases where index "3" in (1) indicates that the above formulae refer to 3D-space; dS is the surface element of a spherical surface; the unit vector perpendicular to dS; V the volume; F the interaction force between masses m 1 and 2; k the constant of gravitation (kg -1 • m 3 • s -2 ); E the vector of gravitation field intensity (m • s -2 ), as 2 1 m = If , then we can write down (1) as Since in spherical coordinates we have after integration (3): If we use another formula instead of F=-km 1 m 2 r/r 3 , e.g.,
F=-km
then we shall have which means that div 3 a depends on "r', and, in turn, the law of energy conservation is broken. Indeed, if (see (1)), then and div 3 a = 0. This means that the gravitation source at this point of space is unobserved. An analogous picture takes place at other points of our space. In fact, it means as well that there is no real space, since the interaction (5) cannot maintain its existence. Now take such a law instead of (5): then instead (6) we have and div 3 a if V and, consequently, r tends to zero. Hence, our space collapses into a point, i.e., we obtain a black hole.
Here we should make an important remark. As seen, we use the sphere of dimension 3, which means that we study an isotropic space. If the space investigated had a fractional dimension, e.g., 2.9, then we had to take for our investigations not a sphere but an ellipsoid. Consequently, we cannot use an expression for the element of the ellipsoid surface as , since in this case ρ should be a function of the angles φ and θ. Now we can put a question: how will things be going for spaces of other dimensions -from zero to n? To answer this question, first of all write down expressions for volumes and surfaces of different ranks. We begin to study spaces whose dimensions i 3, i.e., 0,1,2.
If i=2, then we consider a circumference and a space inside it (we call this space a flat sphere). Therefore we have: S F=-km 1 m 2 r/r 2 ; a E (F/m 2 ) -r here the coefficient k in kg -1 • m 2 • s -2 units. If , then we can write down (9) as div 2 a .
Since in polar coordinates , we have after integration in (11): If we have another formula instead of F=-km 1 m 2 r/r 2 , e.g., then we shall have Hence, div 2 a depends on "r', it means, in turn, that the law of energy conservation is broken. Indeed, if (see (9)), then and div 2 a = 0. Thus, the gravitation source at this point of space is unobserved. An analogous picture takes place at other points of this space.
In fact, it means as well that there is no real 2D-space, since the interaction (13) cannot maintain its existence. Now take such a law instead of (10): then instead of (14) we have and div 3 a , if , i.e., the flat sphere collapses into point and we have black hole but in 2D-space.
Below the Greek letters φ and θ will be replaced by the Greek letter with index m=1,2,… , since we shall study spaces with i If i =1, a straight-line segment will be as if an analogue of the above flat sphere and a pair of points will be as if analogue of the above circumference bounding the above flat one. In this case we have: here the coefficient k is in kg -1 • m • s -2 units. At last, if i=0, then the space is a point here and its div 0 a , i.e. we have uncertainty.
Now we shall study the spaces having the dimensions from i=4 up to i =n.
If i=4, then we have [7]: If , then we can write down (19) as Since in spherical coordinates then we have after integration (21): If we have another formula instead of F (4) =-km 1 m 2 R (4) / , e.g., then we shall have div 4 A (4) .
Hence, div 4 A (4) depends on "R", it means, in turn, that the law of energy conservation is broken. Indeed, if (see (19), then and div 4 A (4) = 0. The analogous picture takes place at other points of our space.
In fact, it means as well that there is no real space, since the interaction (23) cannot maintain its existence. Now take such a law instead of (23): then instead of (24) we have and div 4 A (4) if and, consequently, R (4) tends to zero. It means that our space collapses into a point, i.e., we obtain a black hole. In principle, we get a similar picture for the cases i=5 -n. Show it for the case i=n.
where index "n" in formulae (27 -28) indicates referring them to nD-space, is the element of nD-surface, the component of the unit vector perpendicular to each point of this (n -1)D-surface, the nD-volume, F (n) the interaction force between of masses 1 and 2 in nD-space, k the constant of gravitation in nD-space (kg -1 • m n • s -2 ), E the vector of gravitation field intensity (m • s -2 ) at m 2 =1 the gamma function.
If
, then we can write down (27) as Long-ranged Interaction Forces and Real Spaces Related to Them Including Anisotropic Cases div n A (n) .
Since in spherical coordinates we have after integration (29): If we have another formula instead of F (n) =km 1 m 2 /R n-1 , e.g., Hence div n A (n) depends on it means, in turn, that the law of energy conservation is broken. Indeed, if (see (27), then and div n A (n) = 0, which means that the gravitation source at this point of space is not observed. An analogous picture takes place at other points of this space.
In fact, it means as well that there is no real space, since the interaction (31) cannot maintain its existence. Now take such a law instead of (31): then instead of (24) we have and div n A (n) if and, consequently, R (n) tends to zero. It means that our space collapses into a point, i.e., we obtain a black hole. Now we can assume that vacuum is a nD-space where the interaction law between masses has the rank n+1. There are fluctuations of the number n+1 in the interaction law and the rank of the interaction may become less than the space dimension. As a result, there occurs the Big Bang.
There occurs an ejection of substance to an empty space after the Big Bang, and the matter begins to convert a mathematical space into a space of dimension n. The number n depends on the law of mass interactions for this matter. If it is the law (2), then we obtain our three dimensional space.
The Formulation of the Theorem on Spaces and Forces
We can generalize the above-mentioned to a theorem, viz., the long-raged interaction forces of dimensions i =0,1, 2,…,n can form real isotropic Euclidean spaces if and only if, when the dimensions j of these spaces equal j=i+1. Then we can affirm, using the method of mathematical induction, that the long-raged interaction forces of the dimensions i =n+1 can form a real isotropic Euclidean space of the rank j=i+1=n+2.
This is a theorem, which we call "the theorem on spaces and long-ranged interaction forces forming the former" or, more shortly, "the theorem on spaces and forces forming them".
Based on this theorem we can suppose that real spaces, where the theorem is valid, can be in equilibrium. In case this theorem does not hold the spaces fail to exist, at any rate collapsing into a black hole or an empty space. In this connection it will be interesting to study weak anisotropy (3 ζ)D-spaces, ε<<1being oblate and prolate ellipsoids where ε is an eccentricity of ellipse of revolution.
The Case of the Weak Isotropic (3-Ζ)D-Space, Ζ=Ζ(Φ)<<1; Oblate Ellipsoid with the Eccentricity Ε<<1
To study the weak anisotropic space, we use the expression (3) having transformed it as follows: where the quantity ζ depends on an angle φ between the radius of the spheroid ρ and its major axis, i.e., instead of the sphere limiting 3-D isotropic Euclidean space we use an oblate ellipsoid limiting a weakly anisotropic space of fractional dimension. Then the expression dS for the element of the ellipsoid surface will significantly differ from the analogous expression for the sphere surface element. In the latter case, this element is representable as a differential of the ellipse arc dl e multiplied by a differential of the circle arc dl c formed by rotation of one or another point on the ellipse arc around its major axis. Then we have: , where (37) is the equation of ellipse in polar coordinates, when its origin is at the center of the ellipse, r=ρ is the radius of the ellipse, b its minor axis, ε the eccentricity of the ellipse, φ the angle between the radius r and the axis X in Cartesian coordinates aligned with major axis of coordinates around which it rotates, θ the angle of the ellipse rotation around of axis X.
Taking into account of (37), we obtain from (36): The final part of the expression (39) was obtained providing if the eccentricity ε is small enough, i.e., much less than unity. If the quantity ε=0, then we obtain an expression for the element of a spherical surface. The integration over the angle φ here is clockwise, since this angle is also read in the same way. Now we should obtain an expression for the interaction of masses in the anisotropic space studied (in the oblate ellipsoid), when the ellipse has the eccentricity ε. With this aim we should return to the expression (2). We write down it for the studied case as F= -km 1 m 2 r/r 3-ζ ; a E (F/m 2 ) -r (40)
To obtain the divergence of the quantity a, we use the expression (3), allowing for (38 -41) and taking into account that the vector product a • n = a^n at any point of the surface of the ellipsoid because of a very small eccentricity of the figure of rotation and, hence, the cosines of the angle between these vectors being close to unity.
Here and below, in lnb and logb the quantity b is dimensionless. As a result, we have for the divergence a; div 3-ζ a= ,.
then the expression (44) transforms in the expression (4), as , all terms in the right side of (44), except the first one, are tending to infinity, since , but the quantities b are only tending to zero. Absolutely another picture takes place, when the quantity ε does not tend to zero but is taken equal to 0.01. In this case, we obtain in the right side of (44) an indeterminate form, which is very difficult to evaluate, if possible at all. However, this difficulty we can overcome, if instead of another expression is used, where is the volume of the oblate ellipsoid having the small axis m. Then the right side of (44) converges quickly, and in this volume there can be all free atoms of Mendeleev table each taken separately and the most of molecules containing these atoms, i.e., an almost complete set of matter components, whose mass defines of the existence of gravitation.
The quantity • м. It is very small. In turn, the major and minor axes of the ellipse are almost equal to each other. Then the interaction (40) can form and maintain a space whose dimension is very close to 3D, but this space cannot exist arbitrarily long. Early or late, the space transforms to an empty space with all circumstances following from that.
Case of Weakly Anisotropy (3+Ζ)D-Space, Ε<<1; Prolate Ellipsoid of Rotation
Replace the oblate ellipsoid in (40) by a prolate ellipsoid of revolution. Then we again obtain a new weak anisotropic space, but now it will have other configuration and properties. This space is also anisotropic, but it should be tending, in the course of time, to the black hole and not to an empty space. In this case, the dependence (35) takes the form: To obtain formulae similar to the dependences (36) -(38), but for the prolate ellipsoid of revolution, we should, in particular, turn the ellipsoid, studied in the previous section, by ninety degrees anticlockwise, after that we should rotate the upper arc of the ellipse around the horizontal axis X of Cartesian coordinates. Then the formula for the element of surface of this geometrical object takes form: , where the differential of the ellipse arc dl e is again multiplied by the differential of circle arc formed by the rotation of one or another point on the arc of the ellipse around the axis X of the Cartesian coordinates, dϑ the differential of the angle of the rotation arc around this axis. Then the dependences (37) and (38) The main difference (51) from (42) is in the exponent of the magnitude b in the expression (51) having a diametrically opposite sign as compared with the exponent in (42). Consequently, div 3+ζ a will tend to infinity as . As to the integrand as a whole, so it expands without difficulties into a series which converges very quickly at ε<<1. It makes no sense to represent here a final expression using (51), since its principal difference from the expression (42) is in the sign of the exponent of the magnitude b, which defines the tending of diva to zero or to infinity.
As in the previous case, the interaction (50) can keep a space created by it, however, not infinitely long. Finally, this space should be converted into a black hole, with the ensuing consequences.
Discussion
The results obtained and, first of all, the above theorem permits one to make a proposal concerning an evolution of the real spaces created by a certain cataclysm, e.g., by the Big Bang. These spaces are Euclidean or almost Euclidean ones; they can have different dimensions and be isotropic as well as anisotropic. They can exist infinitely long if the above theorem is kept or a limited time if the theorem is broken. As to curved spaces, the author of this article does not deny their possible existence but a theory of their own is required. Here it is clear that in the case of curved real spaces there should exist a connection between their dimensions and the dimensions of the long-ranged interactions created these spaces.
How will evolve the real Euclidean and almost Euclidean spaces (isotropic and anisotropic) created by the certain long-ranged interactions? Different cases, which are possible here, will be studied below.
The first case. The space is isotropic, and the theorem is valid.
In this case, the substance, having got an initial impulse because of a cataclysm, generates an expanding space, e.g., three-dimensional one. During the expansion of space, there occurs a transformation of an empty space in the three-dimensional one. Later on, the non-equilibrium space can come to an equilibrium state remaining there arbitrarily long.
The second case. The space is weak anisotropic, and the theorem is broken.
In this case, the space cannot be in equilibrium, it will be weakly non-equilibrium. At some instant, the expansion will be replaced by the compression owing to the long ranged forces of interaction. Further evolution depends on the law of interaction between the masses in collapsing spaces. For example, if the space is close to 3D-space and the exponent in the law of interaction somewhat less than two, then the space will be finally converted into an empty space. If the exponent is, in contrary, somewhat greater than two, then the collapsing space will be converted into a black hole.
The space where we live, can be considered a three-dimensional Euclidean one, showing a few patches of curved space volumes containing masses of substance of higher density. Evidently, "the theorem on spaces and forces" is invalid for them.
Conclusions
1. Formulated and proved is a theorem according to which real Euclidean spaces can be formed by long-ranged interaction forces. These spaces are in equilibrium if and only if the integer dimensions of the space i and those of interacting forces j are connected by the relation i = j + 1. 2. It is shown that weakly curved anisotropic spaces cannot be in equilibrium and cannot exist arbitrarily long. Early and late, they have either to transform to an empty space or to collapse into a black hole. | 4,718 | 2019-11-01T00:00:00.000 | [
"Mathematics"
] |
Physical Mechanisms Controlling the Offshore Propagation of Convection in the Tropics: 2. Influence of Topography
A set of idealized convection‐permitting simulations is performed to investigate the influence of topography on the physical mechanisms responsible for the nocturnal offshore propagation of convection around tropical islands. All simulations have an idealized island in the middle of a long channel oceanic domain, with constant sea surface temperature and without rotation. To diagnose the impact of topography, we compare a flat island simulation with two simulations with mountain ranges of different shapes. The topography over the island has a strong impact on the diurnal cycle of convection as clouds tend to remain all day over the highest topography. This weakens the diurnal cycle and the land breeze front and triggers a comparatively less frequent long‐distance offshore propagation of convection. As in the flat simulation, the distance of offshore propagation is particularly sensitive to humidity and temperature at the top of the boundary layer. A shallow circulation that is asymmetric with respect to the island influences the boundary layer top humidity and can favor propagation on one side of the island or the other. These results mimic cloud and precipitation patterns observed prior to the Madden‐Julian Oscillation propagation over the Maritime Continent. The shape of the topography does not seem to influence the offshore propagation of convection significantly except for mountain‐valley breezes that reinforce the land breeze and the establishment of the asymmetric shallow circulation.
Introduction
Part I of this paper (Coppin & Bellon, 2019) describes the physical mechanisms controlling the offshore propagation of convection around an idealized flat tropical island. In this simulation, a sea breeze systematically develops in the morning, followed by convection in mid afternoon, early evening, and an offshore propagation of convection over the surrounding ocean at night. This diurnal cycle of island convection is realistic (Mori et al., 2004;J.-H. Qian, 2008;Yang & Slingo, 2001); in particular, the timing of convection triggering over the island is well simulated, while many models tend to simulate it too early (Neale & Slingo, 2003;Peatman et al., 2015;J.-H. Qian, 2008).
In Part I, we identify two main phenomena in the control of nocturnal offshore propagation of convection over the ocean for a flat island. A land breeze propagates at an average speed of 3-4 m/s, similar to values found in observations (Mori et al., 2011;Yokoi et al., 2017) and other high-resolution modeling studies (Hassim et al., 2016;Love et al., 2011;Vincent & Lane, 2016). This propagation speed depends on the large-scale wind speed, mostly modulated by the presence of convection triggered earlier and further offshore by gravity waves when the environmental conditions are favorable. In our model, two gravity wave modes stand out: the first and second baroclinic modes, propagating at 30 and 20 m/s, respectively. The first baroclinic mode is reminiscent of gravity waves generated by the diurnal heating over islands found in Love et al. (2011). The second baroclinic mode resembles the gravity waves triggering offshore convection in several modeling studies (Hassim et al., 2016;Mapes et al., 2003;Vincent & Lane, 2016), even though it is slightly faster in our model.
Topography also plays a crucial role in controlling the diurnal cycle of precipitation over tropical islands. Early work showed that precipitation is enhanced by upslope winds over islands with a mountain range (J.-H. Qian, 2008;Yang & Slingo, 2001). Depending on the direction of the wind, topography can enhance convection and its offshore propagation on one side of the mountain range and suppress it on the other side (Ichikawa & Yasunari, 2007, 2008Qian et al., 2013). This blocking effect of the mountain range is also found to strengthen the sea breeze (T. Qian et al., 2012). In that study, the sea breeze is modeled as the response to an oscillating heat source over a flat land or an inland plateau. When topography is added, the partial blocking of the sea breeze traps cool air at the base of the plateau near the end of the heating cycle. This air is cooled further during the night and generates a stronger cold pool that leads to a stronger land breeze as well as a faster propagation. Experiments show that the strength of the land breeze increases with the terrain height, at least for moderate heights. Elevated terrain also pushes the diurnal heating upward into the stratified layers of the surrounding atmosphere, generating gravity waves that help trigger offshore convection (Mapes et al., 2003). When topography is more realistic than an idealized elevated plateau, additional effects occur. During daytime, sea breezes converge in valleys, enhancing and focusing convection over the mountains (J.-H. Qian, 2008). At night, downslope winds also converge in valleys and reinforce the land breeze (Vincent & Lane, 2016).
Local processes associated with the diurnal cycle can interact or compete with variability at larger scales. The arrival of the Madden-Julian Oscillation (MJO; R. A. Madden & Julian, 1971;Julian, 1972, 1994;Zhang, 2005) over the Maritime Continent is a good example of such interaction. One or two phases before the main envelope of the MJO reaches the Maritime Continent; the diurnal cycle over the island is enhanced, followed by an enhancement of the diurnal cycle over the surrounding oceans in regions of offshore-propagating convection (Peatman et al., 2014). Later on during the active phase of the MJO over the Maritime Continent, the MJO propagation can be hindered by a strong diurnal cycle over islands because it competes with large-scale convection over the ocean for moisture supply: the moisture convergence over islands due to sea breezes dries the oceanic regions (Hagos et al., 2016). Hence, understanding how topography affects the offshore propagation of convection may be crucial to better simulate the propagation or blocking of the MJO over the Maritime Continent.
The aim of this study is to analyze how topography impacts the nocturnal offshore propagation around an idealized tropical island. Section 2 presents the simulations with topography and the setup used. Section 3 analyzes the similarities and differences observed between the simulations with topography and the simulation with a flat island from Part I. Section 4 focuses on the changes resulting from different shapes of mountain ranges.
Methods
Similar to Part I, we run the mesoscale nonhydrostatic atmospheric model Meso-NH version 5.3.1 (Lafore et al., 1998;Lac et al., 2018) coupled with the SURFEX model (Masson et al., 2013) over land, with the same Radiative Convective Equilibrium setup: long doubly periodic channel, 2,048 km × 128 km in x and y directions, respectively, with 47 stretched vertical levels and a top at 25 km, an island of 128 km× 128 km in the middle of the domain, no Coriolis force, and no large-scale wind forcing. Similar to Part I, even though we do not impose a wind forcing, an overturning large-scale circulation develops due to the maintenance of convection over and around the island and low-level winds associated with this circulation converge over the island. We call these winds large-scale winds hereafter.
Two different mountain shapes are tested and illustrated in Figure 1: one with a ridge (simulation Ridge) and one with a peak and a pass (simulation Peak). We will compare these simulations to the simulation without topography analyzed in Part I (simulation Flat).
More specifically, in simulation Ridge, the topography varies only in direction x and has a Gaussian shape: where H=600 m is the altitude of the ridge, x 0 correspond to the center of the island, and the standard deviation σ x =15. The maximum altitude of the ridge is chosen to be representative of mountain plateau found over Sumatra, Java, and Borneo (even though the peaks on all these islands are much higher).
Topography in simulation Peak is a combination of the same Gaussian distribution in the x direction and a sinus in the y direction, which creates a valley, a pass at 400 m, and a peak at 800 m: with L y =128 km the width of the channel.
With these definitions, both mountains have the same average altitude for each point in the x direction. At the crest, this altitude corresponds to a Froude number below 1, indicating that these mountain ranges do not block the large-scale flow associated with the lower branch of the overturning large-scale circulation, which we wanted to avoid. Contrasting the two simulations will shed some light on whether mountainvalley winds reinforce convection and the land breeze and see if the geography of the island influences the wind fields themselves. Both simulations are performed for 250 days, and they reach a stationary state after about 40 days.
In most figures showing composites (Figures 2,4,(7)(8)(9)(10)(11), variables are composited a function of the distance |x−x 0 | to the center of the island, including both sides of the island, as in Part I, to increase sampling. If not specified otherwise, the composites are averages over the last 200 days of simulation and over direction y.
Comparison With Flat Island Simulation
In this section, we investigate the effect of island mountain ranges by analyzing the similarities and differences between simulation Flat and both simulations with topography.
Global Composites
Both simulations with topography have the same sea breeze in the morning to early afternoon followed by convection over the island in early evening and an offshore propagation of convection later at night similar to simulation Flat ( Figure 2).
Journal of Advances in Modeling Earth Systems
Topography decreases the maximum distance of propagation in both simulations, defined as the point furthest from the coast where precipitation exceeds 0.33 mm/hr at least once in 24 hr. As in Part I, both sides of the island are considered independent to increase the sampling size. The mean distance of propagation is 112 and 100 km for simulations Ridge and Peak, respectively (compared to 141 km for simulation Flat). Both simulations with topography have a lot more instances of convection staying over the island or very close to it all day long and fewer cases of very long propagation ( Figure 3).
The shorter propagation of convection in the simulations with topography is associated with a much weaker offshore land breeze than in the flat simulation (1 vs. 3 m/s in Figures 2b and 2c). The same budget analysis for wind and temperature as we presented in Part I for simulation Flat (see Figure 7 in Part I) indicates that this is nonetheless the signature of a land breeze (not shown). This difference in the land breeze speed does not result from the large-scale overturning circulation which is almost identical at 200 km from the coast in all three simulations ( Figure 2), even though the daily profile of wind shows a slightly stronger onshore wind within 200 km from the coast in simulations with topography ( Figure 4). Figure 4 generally indicates that the mean circulation is stronger in the vicinity of the island (within 100 km from the coast) with more inflow in the boundary layer and more outflow in the lower troposphere (2-4 km) and the upper troposphere (9-12 km). Convection is also more localized above the mountain range in simulations Ridge and Peak. The temperature anomalies (red contours) show that the atmosphere is slightly more stable in these simulations except above the mountain range. Even though they also have a shallow circulation between the top of the boundary layer and 4 km, this circulation is weaker in simulations Peak and Ridge away from the island, the latter having an even more reduced circulation. Both simulations present a small drying of the lower troposphere far from the coast and a moister boundary layer, which can be interpreted as a decrease in shallow and midlevel convection. Again, this pattern is more intense in Ridge than in Peak, with some modulation due to advection by the moist outflows from the island.
The main difference between simulations with topography and simulation Flat is the persistence of precipitation over the island at night and in the morning ( Figure 2). This difference is clearly visible when we compare averages over the whole island ( Figure 5a).
The increased precipitation is associated with a larger cloud amount (contours in Figure 6), especially over the high topography. Figure 6 also shows the effect of topography on surface temperature. The average height of the simulations with topography is 172 m, which results in an approximately 1.3 K lower surface temperature averaged over the island. The decrease of temperature with height is particularly visible over the topography (Figure 6). Near the coast where the height is the same in the three simulations, the temperature behaves similarly, with a diurnal warming up to 302 K. But on average over the island, the islands with Journal of Advances in Modeling Earth Systems topography are 0.6 K colder throughout the night and up to 2 K colder at midday and in the early afternoon ( Figure 5b). The stronger diurnal warming in simulation Flat originates from both the difference in averaged height (1.3 K) and its reduced cloud shadowing of the surface (0.7 K). At night, the temperature difference is smaller than 1.3 K, probably because clouds' greenhouse effect decreases the energy loss of the surface in the simulations with topography.
The decreased warming during the day leads to a weaker sea breeze at the coast ( Figure 2) and a smaller convective enhancement over the island (Figure 5a) even though precipitation is still larger on average with topography. The precipitation rate at 18:00 and 21:00 is roughly similar in all three simulations. But it is much more concentrated over the topography in simulations Ridge and Peak, and it persists through the night there.
To investigate whether this pattern occurs everyday and whether this influences the nocturnal offshore propagation, we investigate the difference between short and long propagation in the next section, similar to the analysis in Part I.
Difference Between Short and Long Propagation
We classify nocturnal propagation events based on the distribution of maximum distance of propagation from the coast (Figure 3) into quartiles as we did for simulation Flat in Part I. For the first quartile, nocturnal propagation does not extend further than 50 km from the coast in simulation Peak (Figure 7a) but, for the last quartile, we see long propagation as in simulation Flat, as well as the same convection pattern developing 100-150 km away from the coast around 22:00, similar to simulation Flat (see Part I): convection triggered far from the coast still occurs in the simulations with topography, even though it is less frequent and appears as a weaker signal in the composite (Figures 2b and 2c). For this quartile, there is little or no precipitation at night over the island (Figure 7b). Looking at the first and last quartiles for simulation Peak shows that gravity waves trigger offshore convection for long propagation (Figure 8b). (Figure 8 in Part I). The same gravity waves associated with the first and second baroclinic modes propagate at 30 and 20 m/s, respectively. Their cold phase triggers offshore convection (green line associated with convection in Figure 8b). Contrary to simulation Flat, no gravity wave is found for short propagation (Figure 8a).
Zoomed composites of the island and nearby ocean for long propagation days in all three simulations confirm that there is no convection in the morning over the island, even over topography, when long The island is on the left. The tendency, pressure, advection, shallow convection, turbulence, radiation, rain evaporation, and phase changes components are represented by the black, blue, red, orange, pink, yellow, purple, and green lines. Note that the abscissa starts at the coast and goes up to 400 km to be able to show the gravity wave.
propagation occurs (Figure 9). Because the gravity waves necessary for offshore convection are triggered by the diurnal warming over land, they are only present for days when the cloud cover over the island is small in the morning. No gravity wave is emitted for short propagation probably because the heating increase in the afternoon is strongly reduced when convection stays over the island all day long. According to Figure 3, these days with short propagation and no fast gravity wave emitted in the afternoon are more frequent and explain why, on average, the diurnal hovmoller for simulations Ridge and Peak has convection all day long over the topography (Figures 2b and 2c).
Large-Scale Control of the Maximum Distance of Propagation
Similar to what happens in simulation Flat, the environmental conditions over the ocean, and more specifically the conditions at the top of the boundary layer, control the maximum distance of offshore propagation. For long propagation, once the top of the boundary layer is sufficiently deep, the cold phase of the fast gravity waves forced by the heating over land during the afternoon triggers convection far from the coast in both simulations with topography, as it did in simulation Flat (not shown). This allows convection to gradually propagate away from the island.
The picture is noticeably different when we consider propagation close to the coast (Figure 10). In order to compare with simulation Flat (Figure 11 first row from Part I), we only consider the first day of propagation between 0 and 80 km after days with propagation reaching further than 80 km from the coast; in particular, we discard cases where convection stays over the island on the same day or the day before. In simulation Peak (second row), the warm and dry anomaly advected at the top of the boundary layer prevents the development of convection far from the coast, which is reminiscent of what happens in simulation Flat. In simulation Ridge (first row), the advection of this dry anomaly is delayed and the anomaly itself is much smaller. These differences between simulations Ridge and Peak are investigated further in the next section.
Influence of the Shape of Topography
Both simulations with topography are more similar with each other in terms of convection over the island and offshore propagation of convection than either of them with simulation Flat, even though simulations Flat and Peak share some characteristics. In this section, we focus on the differences created by the shape of the island topography.
Mountain-Valley Breeze
Over the island, most changes in precipitation and wind patterns should result from the change in topography from a ridge to a peak and a pass. The first major difference occurs at 12:00 when the sea breeze starts to propagate over the island (Figure 9). Before convection appears over the mountains, upslope wind develops in both simulations with topography. But this upslope wind is doubled along the slopes of the peak (top and bottom of the domain, third row). At 15:00, the wind starts to diverge at the pass in simulation Peak while convection develops over the mountains in both simulations. At 18:00, a wind minimum appears at 40 km where the sea breeze and an outflow coming from the pass converge. This outflow is not present in simulation Ridge. At 21:00, convection is mainly localized over the highest topography, that is, along the ridge in simulation Ridge and over the peak in simulation Peak with a clear local minimum over the pass. The highest precipitation values found in simulation Peak over the topography are probably the signature of an ascent reinforced by the valley and may explain why precipitation is always larger over the island in simulation Peak relative to simulation Ridge (Figure 5a).
Wind going down the slopes of the peak converge in the pass, which results in an increased land breeze at the outflow of the pass. The land breeze is also stronger (1 m/s) on average in simulation Peak compared to simulation Ridge but is still 1 m/s weaker than the land breeze in simulation Flat. At 00:00 and 03:00, a slightly faster wind is still visible in the outflow from the valley in simulation Peak.
Journal of Advances in Modeling Earth Systems
Thus, the valley effect seems to reinforce the land breeze, especially in the outflow from the valley. But this reinforcement is not sufficient to exceed the speed of the land breeze in simulation Flat, probably because the stronger convergence near the center of the island in that case (see the wind 25 km from the center of the island at 15:00 in Figures 10a and 10c) leads to a larger convective enhancement, which in turn generates a stronger density current.
Impact on Large-Scale Control for Short Propagation
The main difference in distance of propagation between both simulations with topography occurs for intermediate distances of propagation, with simulation Peak having a higher proportion around 80 km similar to simulation Flat, while simulation Ridge have a smoother transition shifted toward longer propagation.
In the previous section, we mention that simulations Flat and Peak also have a very similar advection of warm and dry air prior to short propagation (Figure 10 from Part I and Figure 10). To investigate where this dry and warm anomaly originates from, we repeat the budget analysis done in Part I for the humidity at the top of the boundary layer. It shows that, in both simulations with topography, the advective term causes the drying (Figure 11). But this term and the tendency are larger in simulation Peak (bottom row) where the advection of the dry and warm anomaly also tends to inhibit shallow convection, which becomes much weaker than in simulation Ridge. This explains why the positive anomaly of humidity remains much larger for simulation Ridge in Figure 10 (first row). As in simulation Flat, the advection is mostly horizontal within 200 km from the coast (not shown).
To investigate how the shape of the topography can generate these different behaviors, we focus on the same days of short propagation as in Figures 10 and 11. But we look at both sides of the island to see how symmetric the circulation is relative to the island in all three simulations. To understand what is driving the advection of dry air observed as early as 2 days before in Figure 10, we focus on the symmetric and asymmetric parts of the circulation 3 days before the first short propagation (Figure 12). Because we select the distance of propagation from both sides of the island, all the days selected are arranged so that the short propagation happens on the right side of the island. The symmetric part is defined as the average of both sides of the island. The asymmetric part is calculated as the difference between the original composite and the symmetric part.
In all three cases, the symmetric component is much larger than the asymmetric component indicating a strong forcing of the circulation by the island. The three simulations also have roughly similar wind and humidity patterns. On the other hand, the asymmetric component is much larger in simulations Flat and Peak than in simulation Ridge. In both cases, the wind and humidity patterns are very similar and highlight stronger convection around 200 km on the right side of the island, that is, where the dry anomaly will be advected later. An eastward wind associated with less convection on the left side of the island and the stronger convection on the right side is visible in the boundary layer around the island and is not stopped by the topography in simulation Peak. A westward anomaly between 1.5 and 3 km advects drier and warmer air from the right side of the domain. This lateral advection is mainly due to the shallow circulation generated by the asymmetric offshore convection. This shallow circulation also forces part of this anomaly to subside between 1 and 2 km as seen in Figures 10e-10h. There, it is advected toward the coast by the symmetric wind component and shuts down shallow convection on its way (Figures 11c and 11d).
Because this shallow circulation resulting from the asymmetric development of convection on one side of the island is totally absent from simulation Ridge (Figure 12e), the dry and warm anomaly located at an altitude of 3 km, 400 km from the coast 3 days before the short propagation does not subside (Figures 10a-10d). Thus, the onshore wind at the top of the boundary layer does not advect an anomaly as dry as in simulations Peak and Flat.
Even though simulations Ridge and Peak behave very similarly in many respects, their mountain shapes generate some key differences for days with short propagation. The mountain ridge prevents the development of a strong asymmetric convective pattern, permitted by the pass in simulation Peak. In addition to preventing the mean flow from crossing the mountain range, the main effect of the ridge seems to force
Journal of Advances in Modeling Earth Systems
convection always over the mountain range, generating an upward transport over the island and a more symmetric circulation.
In simulation Peak (as well as simulation Flat), the asymmetry is responsible for the advection of a much drier and warmer anomaly on top of the boundary layer. This anomaly forces convection to stay close to the coast longer than in simulation Ridge where it can be quickly eroded and allows for longer propagation much sooner.
Discussion and Conclusion
In this paper, we investigate how topography affects the nocturnal offshore propagation of convection around an idealized tropical island and how the shape of the island topography (idealized mountain ranges represented by a ridge or a peak with a pass) modulates it in the Radiative Convective Equilibrium version of the mesoscale nonhydrostatic atmospheric model Meso-NH.
Adding a mountain range on the island significantly decreases the surface temperature above the island. It also affects the pattern of convection and its offshore propagation relative to a simulation with a flat Island (simulation Flat): • On average, convection stays over the highest topography at the center of the island at night in both simulations with topography, resulting in an increased cloud cover and precipitation at night and in the morning. The presence of these clouds reduces the average surface temperature difference between simulation Flat and both simulations with topography, probably due to the greenhouse effect of clouds. During the day, they have an opposite effect as they reduce the diurnal warming over land. • The average propagation speed of the land breeze is much smaller because of the weaker land breeze circulation and the competing effect of convection maintained over the mountains at night. However, this speed increases on days when convection is not maintained over the island at night. The weaker land breeze is also responsible for a reduced maximum distance of propagation, with convection anchored 25% of the days on the island or within 50 km of the coast. • For long-propagation days, the same gravity waves as in simulation Flat are found. The first and second baroclinic modes propagating, respectively, at 30 and 20 m/s trigger offshore convection, while the higher-order modes play a role in reinforcing existing convection prior to the land breeze front. Because clouds stay over the island all day during short propagation days, these gravity waves are only seen for long propagation.
The two simulations with topography are very similar, indicating that the shape of the mountain has little impact on the land-sea breeze circulation. However, two distinct features have a notable impact on convection and its propagation: • The valley seems to reinforce convection as the highest precipitation value occurs over the peak and on average in simulation Peak, as was emphasized by J.-H. Qian (2008). But it does not change the timing of precipitation. This valley also generates land-valley breezes visible in the outflow of the valley. These winds reinforce the offshore wind relative to simulation Ridge. But this reinforcement, also mentioned by Vincent and Lane (2016), does not compensate the decrease in land breeze speed caused by convection staying over the topography later in the evening if not all night long. We do not see any katabatic wind. But in our cases where convection clings to the island at night, these winds would most likely be compensated by the weaker land breeze. • The ridge, even though its height does not correspond to a Froude number above one, acts as a much stronger barrier than the peak and pass. Momentum reaching the island below 600 m is transported vertically and forces the large-scale circulation to be much more symmetric in simulation Ridge. In simulation Peak, the lower pass allows the flow to propagate more easily across the island. This creates situations when offshore convection is much stronger on one side of the island while suppressed on the other side. The stronger offshore convection generates a local circulation that advects drier and warmer air between 1.5 and 3 km from the sides of the domain. As this anomaly subsides, it is advected onshore and above the boundary layer by the symmetric component of the large-scale circulation. Along its way, it suppresses shallow convection and forces convection to stay close to the coast. The larger and more stable anomaly takes longer to be removed by shallow convection than the weaker anomaly advected in simulation Ridge. In the latter, shallow convection rapidly deepens the boundary layer and reduces convective inhibition, allowing convection to gradually propagate further away from the coast every day. This explains why the distribution of distance of propagation is much smoother for simulation Ridge.
Simulations Peak and Flat have the same distribution of distance of propagation and the same warm and dry anomaly much closer to the coast for the short propagation cases. In fact, this anomaly is even stronger and close to the coast for simulation Flat because there is no mountain range to act as a barrier on the mean flow, leading to a larger asymmetry. This shows that a small change in topography can significantly affect the offshore propagation of convection near the island.
The results presented in this study confirm those from Part I and might explain the relationship between convection over islands and the propagation of the MJO over the Maritime Continent. The gradual enhancement of the diurnal cycle over islands followed by an enhancement over the surrounding oceans prior to the MJO envelope could very well work in the same way as the gradual offshore propagation of convection that we observe in our simulations and create an environment favorable for the MJO envelope to propagate over the oceans. Similarities with Phases 4 and 5 of the MJO when the envelope is over the Maritime Continent are less obvious, partly because these phases are characterized by sustained convection over the ocean, a usually weaker diurnal cycle over the island and a strong large-scale forcing, which we do not study in our simulations without large-scale wind forcing. It is nonetheless noteworthy that, in our simulations, days with short propagation have clouds over the topography all day long and a weaker land-sea breeze circulation. The oceanic moisture supply being higher during such days than in cases with strong diurnal cycle over land, this might relate to the weakened barrier effect of the islands for the propagation of the MJO when their diurnal cycle is weakened. Finally, it is possible that dry and warm air advected during the last phases of the MJO suppresses the diurnal cycle over both ocean and island, the same way as the dry and warm anomaly in the lower troposphere forces convection to stay near the coast and reduces the diurnal cycle over land.
Several steps are still needed to bridge the gap between our ideal experiments and reality. Considering how a simple change of topography can modify the offshore propagation of convection in our simulations, one possibility would be to study the effect of topography for different heights as several islands of the Maritime Continent have mountain ranges higher than what we modeled, as well as more realistic island topography. Another extension of this work would be to assess the impact of the mean flow on the propagation mechanisms. | 7,417.2 | 2019-10-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Using Nonperforming Loan Ratios to Compute Loan Default Rates With Evidence From European Banking Sectors
This research is the first attempt to calibrate default rates of loan portfolios using raw data on nonperforming loans and some additional information on the maturity structure of the loan portfolios. We applied a simple model of loan quality, controlling for loan maturities and dynamics of loan supply. Results for nine national aggregate indices of nonperforming housing loans in the Czech Republic, Greece, Ireland, Hungary, Latvia, Poland, Portugal, Romania, and Spain revealed strong differences in the dynamics of calibrated default probabilities between countries. Calibrated default rates were correlated with macroeconomic factors, but the linkages depended on the markets investigated. JEL classification: C15, C22, G21, G31
Introduction
This paper suggests a simple method to derive default rates (DR) of loan portfolios from the time series of nonperforming loans (NPL).The NPL ratio (i.e., the ratio of NPLs to total loans in the portfolio) is a standard measure of loan quality widely used in research analyzing performance of banking sectors and their customers (e.g., Meeker and Gray, 1987;Lízal and Svejnar, 2002;Hasan and Wall, 2004;Podpiera, 2006;Mendoza and Terrones, 2008;Aman and Miyazaki, 2009;Festić et al., 2009;Čihák and Schaeck, 2010;Whalen, 2010;Jin et al., 2011).The well-known problem of this measure is the mechanistic dependence of its values on the rate of growth of the loan portfolio, which often forbids cross-sectional and inter-temporal comparisons (Tornell and Westermann, 2002, p. 22;Coricelli et al., 2006).Another drawback of the NPL rate as a measure of loan quality is that it is a backwardlooking variable (i.e., it evaluates the historical performance of the loan portfolio).NPLs often remain in portfolios for several quarters or even years, affecting the NPL ratio but having no effect on the current financial performance of credit institutions.In contrast, most analysts are interested in the present standing of the portfolio and the present performance of the debtors.The DR is one alternative variable that describes the current performance of loans and does not automatically depend on the dynamics of the loan.
The DR quantifies the rate at which borrowers default on the amount of funds they owe to the bank in a given (e.g., most recent) period.The DR is often weighted with the values of analyzed loans so that defaults of large loans weigh more in this measure.The historical DR may be used to predict further credit risk, and this is known as the probability of default (PD).Therefore, DRs are often used to predict values of loan portfolios.One important problem is the lack of publicly available data on the PDs and DRs of loan portfolios, while the time series of NPL ratios at the aggregate country level are published by international central banks and financial supervisory authorities.
The literature bears witness to just a few attempts to construct a DR of some loan portfolios using information contained in the NPL ratio.Pinto and Vivan (2013) constructed the implied NPL ratio, a new measure that controlled the changes in the portfolio growth rate during its average term to maturity.This measure took into account differences in the default distribution across time and the amount of time a past due loan remained in the balance sheet.Serwa (2013) derived another measure of loan quality: the adjusted NPL rate.This measure was similar to the DR because it was also robust to changes in the rate of growth of the loan portfolio.
This paper contributes to the literature by deriving the DR from the information contained in the NPL ratio.Moreover, we applied this measure in the analyses of stability in the aggregate loan portfolios in nine European banking sectors.We applied a simple method to calibrate the term structure of the loan portfolios proposed by Serwa (2013) and derived the time series of DRs from historical data of NPL rates in the Czech Republic, Greece, Ireland, Hungary, Latvia, Poland, Portugal, Romania, and Spain.We found that the derived PDs for some countries provided valuable information about the state of the banking sector.However, for other countries we found the need for a better calibration of the method.
We focused here on the aggregate portfolios of housing loans because these loans are relatively uniform in comparison to other types of banking loans.The housing loans are long-term contracts, and something such as a bad quality mortgage can remain in the loan portfolio for a long period of time and have an influence the NPL ratio.Analyzing the aggregate DRs of housing loans provided valuable information about the average financial performance of households involved in mortgages.
The next section describes the method to calibrate the DRs, given minimum information on the term structure of the loan portfolio.Section 2 includes empirical results of estimated DRs for nine country-aggregates of housing loans, and the final section involves our conclusions.
1 Term-structure of loans and default rates Our description of the model of a loan portfolio strictly followed Serwa (2013).Let n be the maximum maturity of a loan contract in a given portfolio.The loan portfolio X t at time t consists of aggregated loan cohorts x i,j,t supplied to borrowers i periods ago and maturing in j periods, where i = 0, 1, . . ., n − 1 and j = 1, 2, . . ., n, respectively.The NPLs are a part of this portfolio and the aggregated bad-quality (non-performing) loan cohorts are denoted as b i,j,t .The tranches of good-quality loans g i,j,t are computed as g i,j,t = x i,j,t − b i,j,t .
The NPL ratio is the share of bad loans in the portfolio: The following simplifying assumptions were introduced in order utilize the derivation of DRs.The important characteristics of loan contracts do not change while loans are present in the analyzed portfolio.Loans are only removed from the portfolio after reaching maturity.Good-quality loans pay interest while the NPLs do not.The interest paid on loans in each period is not included in the portfolio, and all new loans added to the portfolio at time t are of good quality.The good-quality loans are repaid by borrowers in equal tranches between each period, g i,j,t /j , and the bad-quality loans are not repaid until maturity.Calculations assuming other repayment schedules do not change our general results, the above assumptions are the same as in Serwa (2013).
The recursive formula for the value of NPLs is as follows: where DR i−1,j+1,t−1 is the average default rate between time t−1 and t for the loans that have belonged to the portfolio since i−1 periods and have been expected to mature in j +1 periods at time t − 1.Because we aimed to construct a single index of DR, we computed an average DR for all cohorts of loans.To simplify computations, we assumed that DR i,j,t = DR t was the same for all maturities and cohorts at time t.Now, the aggregate value of all NPLs is equal to the following: Also, the aggregate value of good-quality loans is equal to the following: Deriving DR t from (3) and ( 4) was complicated by the fact that data for neither j=1 g i,j,t−1 , or n−1 j=1 g 0,j,t are publicly available.One way to overcome the problem of unobservable cohorts of the portfolio's loans was to calibrate the term structure using information about the maximum available maturity of loans, the growth rate of loans, the NPL ratio, and the distribution of new loans entering the portfolio in each period.Then, the rate of default can be recursively computed from equation (3) as follows: where bi,j,t−1 and gi,j,t−1 are the approximated values of true b i,j,t−1 and g i,j,t−1 , respectively, derived from equations (3) and (4) for the time t − 1.The initial values of b i,j,0 and g i,j,0 as well as the new loans g 0,j,t should be either known or assumed.In the latter case, Serwa (2013) proposed to use some artificial distributions of initial loans in the portfolio and to perform robustness analysis afterwards.
We considered two distributions of new loans with respect to their maturity.The first approach assumed that all new loans were equally distributed across maturities.In this approach, the value of new one-year loans was equal to the value of ten-year loans.The second approach controlled for when new long-term loans were more frequent than new short-term loans and the distribution of new loans was triangular.The one-period loans were n times less frequent than the n-period loans, and the values of loans with respective maturities were all calibrated with the initial growth rate of a total loan portfolio and the initial ratio of the NPLs.The two distributions described here were used in our empirical analysis.
When calibrating the distribution of good and bad loans in the portfolio to match the initial ratio of NPLs, we recursively calculated the long-term levels of b i,j,t and g i,j,t for t approaching infinity assuming a steady state growth rate of the whole portfolio, a constant distribution of new loans with respect to maturity, and a constant rate of default of loans.In practice, 500 recursive computations of b i,j,t and g i,j,t from equations ( 3) and (4) lead to stable steady state values of these variables for all i and j independently of the initial values (cf., Serwa, 2013).
Default rates of housing loans in nine European economies
We looked at nine European banking sectors, and we were able to obtain information on NPL ratios, the growth rates of loans, and the values of maximum available (but realistic) maturity for all loans in each sector.We chose to focus on the aggregate portfolios of housing loans in each country, because these loans have long maturities and are more homogenous than other types of loans.The data on NPL rates came from the central banks in each respective country.Unemployment rates, wage growth rates, and market interest rates (usually the short-term interbank rates in respective markets) were obtained from the International Financial Statistics database of the International Monetary Fund.
Figures 1-9 present fluctuations of the NPL ratios in time and the intermediate changes in the business cycles of respective economies.For most of the countries, we observed rising unemployment and slowing income growth during the global financial crisis of 2008-2009.In the calm periods wage growth was higher and the unemployment rate was lower.By observing NPL ratios, it was difficult to distinguish between crisis and non-crisis periods because they contained combined information about credit growth rates and credit DRs.These two combined factors provided rather noisy messages on credit risk.For instance, the pace of credit growth in many countries was still high at the beginning of the recent global financial crisis while the economic fundamentals were already weak and the credit risk was rising.
Loan DRs should react more rapidly to changing economic conditions than the NPL ratios.Figures 10-18 present changes in DRs for each analyzed country.These DRs were calibrated using the model presented in Section 1.The DRs shown in these figures are computed using of equation ( 5) and are valid for the two scenarios: (a) when all new loans are equally distributed across remaining maturities (DR1); (b) when more new loans have longer-term maturities (DR2).The differences between the two estimates were relatively small and they did not affect the general conclusions (other distributions of new loans in the portfolio were also possible, but they did not significantly change the results).
There were strong differences in the volatility and roughness of the calibrated DRs for different countries.For some countries such as Hungary, Latvia, or Spain, the estimates were relatively smooth.The series for Portugal and Poland were much rougher.These latter DRs may contain more economic information or they may simply contain more random noise.It is clear from the figures that large increases in DRs were observed during the global financial crisis of 2008-2009 for all the countries (this applied to Greece later as well).For some countries (such as Spain) the crisis period was also the most volatile in terms of changes in the DRs.
Surprisingly, DRs became very small values in calm periods for some markets (such as Latvia).The rates of default were also rather small for most portfolios, which seems unlikely even for developed economies in prosperous times.Therefore, the calibrated indices should be treated as proxies of credit risk rather than precise estimates of credit risk.Our new measures are likely to provide valuable information on changes of credit risk over the business cycle rather than the information on the levels of risk.
In the Czech Republic, the DRs are on average 50 percent lower before the crisis.A similar result was found in Hungary where the DR started to increase in 2008 and was six times higher in 2010 than it was in 2007.In Greece, the lowest rates of default were observed in the years 2006 and 2007.These rates doubled during the crisis of 2008-2009 and even quadrupled in the year 2011.In the short sample of Irish data, the DR grew steadily over the 2008-2009 crisis and in the post-crisis period, which suggests a negative effect on Irish households during the sovereign debt crisis in the euro zone.In Latvia and in Spain, one can observe the single peak of the rate of default in the middle of the global financial crisis.Such a peak was also apparent for Poland, but in a much noisier neighborhood of similar peaks in 2004, 2006, and 2011.In turn, the crisis period for Romania and Portugal was difficult to distinguish from other times when using the DR time series.
We investigated how our measure of credit risk was correlated with macroeconomic factors, and the results are shown in Table 1.We found that DRs were often strongly correlated with macroeconomic variables in the investigated samples.The patterns of correlation were not perfectly clear, however, and they strongly depended on the investigated countries.It is interesting to note that the DRs measured at time t were more correlated with the unemployment rates in the upcoming quarter (measured at time t + 1) rather than those measured in the present quarter (at time t) in six out of nine cases.For the growth of wages, correlations were more negative between present defaults and the future growth of wages in five out of nine cases.Similarly, correlations were more negative between present defaults and the future interest rates in five out of nine cases.We expected a negative link between present DRs and future interest rates if central banks adjusted the interest rate to worsening or improving economic conditions proxied by DRs.We conclude that there is no strong evidence that proves DRs rates can predict macroeconomic variables very well; however, the contemporaneous linkages of credit risk and macroeconomic factors are strong in many countries.
We also tested for any evidence of pairwise Granger causality between macroeconomic variables and the DR.The results are presented in Table 2 only for the countries where enough time-series data were available.Our results confirmed the outcomes from the correlation analysis and suggested that there was no clear direction of lagged causality between the analyzed variables.
Conclusions
Deriving probabilities of default from raw indices of NPLs is a difficult task when only limited information is available.Depending on the country, the computed DRs are volatile or smooth, but are always more irregular than original NPL ratios.This may suggest that economic fluctuations and their effects on the loan quality are volatile.Banks tend to smooth their economic performance (and the NPL ratio) by controlling their loan supply.There are several factors that influence changes in the NPL ratios independently of the changing rates of default.For example, selling some parts of a loan portfolio will significantly affect the termstructure of the aggregate loan portfolios.Changing accounting regulations, and changing policies of banks with respect to new loan supply and old NPLs would also affect the results.In practice, more information about the structure of the loan portfolio improves the precision of calibrated DRs.
Further analyses should compare our proxy of credit risk for a loan portfolio with the true values of DRs in bank loan portfolios.This task is complicated by the fact that few banks publish estimates of PDs cyclically.
One direct extension of our results could be regression analyses of macroeconomic factors affecting credit risk in different countries.It is possible that the combined macroeconomic variables in regression models may explain a large share of credit-risk volatility.This may lead to more efficient models used to predict credit risk, and our Granger causality analysis represents a first step in this direction.
Figure 1 :
Figure 1: The NPL ratio and macroeconomic data for the Czech Republic
Figure 3 :
Figure 3: The NPL ratio and macroeconomic data for Ireland
Figure 5 :
Figure 5: The NPL ratio and macroeconomic data for Latvia
Figure 7 :
Figure 7: The NPL ratio and macroeconomic data for Portugal
Figure 9 :
Figure 9: The NPL ratio and macroeconomic data for Spain | 3,889.2 | 2016-06-04T00:00:00.000 | [
"Economics"
] |
Thermal effects of CO 2 capture by solid adsorbents: some approaches by IR image processing
– Thanks to infrared thermography, we have studied the mechanisms of CO 2 capture by solid adsorbents (CO 2 capture via gas adsorption on various types of porous substrates) to better understand the physico-chemical mechanisms that control CO 2 -surface interactions. In order to develop in the future an efficient process for post-combustion CO 2 capture, it is necessary to quantify the energy of adsorption of the gas on the adsorbent (exothermic process). The released heat (heat of adsorption) is a key parameter for the choice of materials and for the design of capture processes. Infrared thermography is used, at first approach, to detect the temperature fields on a thin-layer of adsorbent during CO 2 adsorption. An analytical heat transfer model was developed to evaluate the adsorption heat flux and to estimate, via an inverse technique, the heat of adsorption. The main originality of our method is to estimate heat losses directly from the heat generated during the adsorption process. Then, the estimated heat loss is taken for an a posteriori calculation of the adsorption heat flux. Finally, the heat of adsorption may be estimated. The interest in using infrared thermography is also its ability to quickly change the experimental setup, for example, to switch from the adsorbent thin-layer to the adsorbent bed configuration. We present the first results tempting to link the thin-layer data to the propagation speed of the thermal front in a millifluidics adsorption bed, also observed by IR thermography.
Introduction
According to the Climate Change 2007: Synthesis Report [1] from the Intergovernmental Panel on Climate Change (IPCC), the greenhouse gas (GHG) emissions due to human activities have considerably grown since preindustrial times.It was also found that carbon dioxide (CO 2 ) is the most important anthropogenic GHG, representing 77% of total GHG emissions.The largest growth in GHG emissions has come from energy supply, transport and industry.The report states that most of the observed increase in global average temperatures since the mid 20th century is very likely due to the observed increase in anthropogenic GHG concentrations.Extensive efforts have been devoted to reducing CO 2 emissions, especially to the development of CO 2 capture technologies [2].CO 2 capture by solid adsorbents (CO 2 capture via gas adsorption on porous materials) is one of the most promising capture technology due to its high selectivity and low energy penalties [3].The study of the a Corresponding author<EMAIL_ADDRESS>mechanisms that control CO 2 -adsorbent interactions is essential when we are looking for efficient CO 2 capture concerning the adsorbent and the design of capture processes.The adsorption of a gas on an adsorbent is exothermic.The study of the energy released during the interaction CO 2 -substrate (heat of adsorption), leads to a better understanding of these physicochemical mechanisms.The estimation of adsorption heats is thus crucial.A high heat of adsorption leads to a high CO 2 -adsorbent interaction.This means that the material has a good affinity with the CO 2 molecules, allowing favorable CO 2 capture.However, energetic interactions may impede the material regeneration (desorption of CO 2 from adsorbent).Therefore, the heat of adsorption is a key parameter for the selection of materials.The heat of adsorption may be measured by calorimetric techniques [4] or estimated via adsorption isotherms [5].Both methods are widely used, but high cost equipments and time-consuming experiments are necessary, hindering the use of high-throughput methodologies.In most adsorption processes, the adsorbent is in contact with fluid in a Article published by EDP Sciences packed bed.The analysis and rational design of such processes therefore require an understanding of the dynamic behavior of such systems [6].Until recently, many authors have utilized the simultaneous solution of a set of coupled nonlinear partial differential equations (PDEs) based on heat and mass balances to predict the adsorption column (adsorber) dynamics [7][8][9].The balance equations take into account complex phenomena (e.g.diffusion, dispersion. . . ) and many parameters have to be used as inputs in the model.Moreover, these parameters are hardly measured or estimated.A direct methodology is used by these authors, which consists in fitting experimental results (adsorbent temperature and gas-phase concentration) with the mentioned PDEs.The disadvantage of this method is the use of adjustable parameters.The experimental data are usually recorded by thermocouples and gas-phase analyzer along the model adsorber.The intrusiveness and the influence of thermocouple thermal inertia may be a limitation for transient measurements such as adsorption processes [10].
Taking notice of these inconveniences, this work presents two high throughput devices based on infrared thermography for the estimation of adsorption heat via thin-layer approach, and of the propagation speed of thermal front in a millifluidic adsorption column.Simplified heat transfer model is then developed and the heat of adsorption is estimated using an inverse technique.The thin-layer approach allowed the prediction of the thermal front rate in the millifluidic adsorber, which is proportional to breakthrough time.
Thinlayer approach
The IR camera FLIR System A20M (Fig. 1: number 1) was coupled with an analytical balance METTLER TOLEDO XS204 DeltaRange (Fig. 1: number 2) to record the average temperature and mass variation of a thinlayer of adsorbent during CO 2 adsorptions and desorptions at room temperature.The sample mass variation is associated with adsorbed-phase evolution during CO 2 adsorption/desorption.The IR camera allows the detection of the thermal radiation (from 7.5 to 13 μm) coming from the adsorbent surface, and the temperature values can be directly measured through calibration parameters.A differential temperature measurement is recorded (sampling rate = 11 Hz) by using the same material as a reference to take into account the material emissivity and room temperature variation during the experiment.The precision balance measures the sample mass, and thanks to home-made Labview interface, the sample mass evolution is recorded (sampling rate = 1 Hz).Two filtering cartridges (made of insulating material) filled with a thin-layer of adsorbent (approximately 3 mm) are placed on the balance scale pan (Fig. 1: number 3) and one of them is connected with a gas injection system (Fig. 1: number 4).The injection system (multi-position valve, mass flow meters and 3-way valves) allows the switch on either CO 2 gas or N 2 gas (inert gas used to study CO 2 desorption) with a controlled injection time (homemade Labview interface).The 3-way valves are used to prevent excessive pressure between gas switching, thus the flow rate can be controlled without variation during adsorption/desorption cycles.The two mass flow metres Bronkhorst EL-FLOW are calibrated to take into account the gas temperature and density, thus allowing high accuracy (standard: ±0.5% Rd) measurement and control of gas flow rate.Moreover, thanks to controlled temperature of experiment room, the gas flow temperature is kept constant.Therefore, the temperature and flow rate of gas injected through thin-layer of adsorbent, consequently the related thermal exchanges, are mastered.
Simplified analysis of the rough signal through a lumped body model
As our experimental device leads to CO 2 adsorption on a thin-layer of adsorbent, we assumed that there is no temperature gradient in the layer thickness direction (Bi ≈ 10 −4 ).Concerning the heat transfer between the gas-phase and adsorbent grains, the internal resistance (adsorbent grain) is negligible compared to the external resistance (gas-phase) and the grain temperature is thus assumed as isothermal (Bi ≈ 10 −5 ).Moreover, an average temperature of the thin-layer surface is recorded by the IR camera.The physical properties of the adsorbent are considered constant, CO 2 is considered as an ideal gas, and the accumulation of energy in the gas-phase is negligible [11].With these assumptions, the thermal problem can be written as a lumped body model: heat generated by the adsorption with the initial condition at: where m s is the adsorbent mass (kg), Cp s the adsorbent heat capacity (J.kg −1 .K −1 ), Cp a the adsorbed-phase heat capacity (J.mol −1 .K −1 ), q(t) the amount of molecules adsorbed (mol.kg−1 ), T (t) the average temperature of thinlayer (K), φ(t) the adsorption heat flux (W), h 0 (t) the overall heat transfer coefficient (W.m −2 .K −1 ), A eff the effective surface area (m 2 ) and T ∞ the room temperature (K).Considering a differential temperature measurement (T (t) = T (t) − T ∞ ), equation (1) can be rewritten as follows: with: where α(t) and H(t) are grouping parameters describing the heat source and heat losses respectively.
The presented lumped body model allowed a simplified analysis of the rough signal.In order to present the analysis, the recorded average temperature and adsorbedphase evolution of an adsorbent thin-layer (about 3 mm) during CO 2 adsorption is shown in Figure 2a, and the plot of time derivative of temperature versus temperature at every adsorption time step is shown in Figure 2b.These data have been obtained with a commercial activated carbon (m s = 144 mg) which was subjected to continuous flow rate of pure CO 2 (40 ml.min −1 ) during an injection time equals to 600 s.
Note: The smoothed data in Figure 2b were obtained by first-order finite difference method.
The results show two adsorption-time behaviors.In short-time adsorption period (ST), from t 0 = 0 s to t f = 14 s (see Fig. 2a), a high non-linear behavior between dT (t) dt and T (t) was observed.In long-time adsorption period (LT), from t f = 14 s to t ∞ = 600 s, there is a quasilinear behavior between dT (t) dt and T (t).Thanks to the lumped body model (Eq.( 3)) we can propose an interpretation of results.At short-time, the non-linear behavior is related to the heat released during the adsorption process.This means that the heat source is on.However, at long-time, the results suggest that the energy released by the adsorption is negligible and that the heat source is off α (t f → t ∞ ) = 0. Therefore, a quasi-linear behavior is observed and can be interpreted as an evolution of the heat losses during CO 2 adsorption.This evolution is due to adsorbed-phase loading during CO 2 capture and to probable variation of overall heat transfer coefficient h 0 (see Eq. ( 5)).
The results suggest that the main release of heat is due to the first interactions between CO 2 molecules and adsorbent surface during the formation of a monolayer of adsorbed-phase.The adsorbates occupy the most energetically favorable positions and adsorbent-adsorbate interactions dominate.The evolution of adsorbed-phase after the short-time adsorption can be explained by the formation of a multilayer of adsorbate on adsorbent surface, which is intensified during the cooling of the adsorbent.
In the multilayer, the main interactions are adsorbateadsorbate, which are less energetic, and the heat flux becomes negligible.
Heat of adsorption estimation
From the lumped body model (Eq.( 3)), an inverse thermal model can be written as follows: The integral heat of adsorption could be estimated by the equations below: with: where Q is the integral adsorption energy (J), N mol the total amount of adsorbed CO 2 (mol), dq(t)/dt the adsorption rate (mol.s−1 ) and t i , t f the time window where the heat source is active (s).Therefore, to estimate the integral heat of adsorption, it is necessary at first, to estimate the heat flux φ(t).The main difficulty was thus the estimation of the heat losses.The heat losses were estimated at long-time adsorption, when the heat source is off.To strictly distinguish when the heat source is inactive, the coefficient of correlation (ρ Ft ) was studied [12].This coefficient represents the normalized measure of the strength of linear relationship between variables (in our case, the average surface temperature of thin layer and its time derivative).If such correlation coefficient is close to −1, it is a proof that the thermal model has a linear behavior.It means that the heat source is off and the heat losses can be estimated.
Therefore, heat losses values (H) have been estimated within the range of −0.99 > ρ Ft > −1.The maximum value of H = 0.049 s −1 was used for posteriori estimations.After heat losses estimation, we can calculate the heat flux using equation (6).In order to estimate the integral heat of adsorption, the final step was the integration of heat flux curve from t i = 5 to t f = 14 s (for the estimation of integral adsorption energy Q) and then estimate, in the same time interval, the total amount of adsorbed CO 2 (N mol ) from the adsorbed mass data.Finally, applying equation (7), ΔH ads = 28 kj.mol −1 was estimated for the interaction CO 2 -adsorbent.The result is in good agreement with the values found in the literature for activated carbons: ΔH ads = (20-30) kj.mol −1 [13][14][15].For more information about our method see [16].
Millifluidic adsorption column: temperature field processing
High throughput studies of the influence of operating conditions (e.g.flow rate of injected gas into adsorption bed) on thermal behavior of adsorbers can be achieved by coupling a millifluidic adsorption column with an IR camera.The cylindrical adsorber is connected to a gas injection system and placed in an aluminum support (Fig. 3a).The milli-adsorber has internal diameter d 2 = 5 mm, external diameter D 2 = 8 mm, wall thickness e w = 1.5 mm and adjustable length L. It was made in thermoplastic polymer (insulating material, λ = 0.1-0.22W.m −1 .K −1 ) to reduce the heat losses with surroundings and thus maximize the thermal signal received by the IR camera.The aluminum support (high thermal conductivity, λ = 237 W.m −1 .K −1 ) was painted in black, to avoid reflections and to homogenize the surface emissivity.The CO 2 adsorption occurs thus in isoperibolic conditions, in other words, the temperature around the milli-adsorber is constant and uniform.The heat losses are thus mastered [17].In order to illustrate some applications of the device, a commercial activated carbon (the same adsorbent used in thin-layer approach) was employed to run an experiment, which consisted in recording the temperature field during CO 2 adsorption for different flow rates (30, 40, 50, 60 and 70 ml.min−1 ) of pure CO 2 injected into the adsorber.The milli-adsorber and its temperature field during CO 2 adsorption (flow rate = 40 ml.min −1 ) at t = 60 s, is shown in Figure 3.The maximum observed temperatures were lower than in thin-layer approach.This can be explained by the semi-transparent behavior of adsorber wall in the infrared region.The IR signal detected by IR camera was thus weaker than in thin-layer approach.A thermal front with constant speed was observed during CO 2 capture.The analysis of thermal images and temperature evolutions (Fig. 4a), allowed the estimation of the propagation speed of the thermal front (TFR) in the milli-adsorber for each injected flow rate.Specifically, the TFR was estimated by the time it takes the thermal wave to pass through a section of the millifluidic adsorber.
For a same flow rate traversing an identical volume of both thin-layer and millifluidic bed of adsorbent, a correlation between short-time adsorption (in thin-layer approach) and the thermal front rate (TFR) can be written as: where e fl is the thin-layer thickness (mm), τ ST the breakthrough time (s) (temporal window of short-time adsorption, from t i to t f ) and d 1 , d 2 the inner diameter of adsorption cup and millifluidic adsorber, respectively.For example, the presented experiment in thin-layer approach (CO 2 injection of 40 ml.min −1 through adsorbent thin-layer with e fl = 3 mm) has led to a breakthrough time of τ ST = 9 s.Applying equation (9) to the found breakthrough time (with e fl = 3 mm, d 1 = 8 mm and d 2 = 5 mm), a thermal front rate of T F R = 0.85 mm.s −1 was calculated.This result is in good agreement with the estimated TFR in the millifluidic adsorption column for a same injection flow rate of 40 ml.min −1 : T F R = 0.86 mm.s −1 (see Fig. 4b).
Conclusions and prospects
In order to study the mechanisms of CO 2 capture by solid adsorbents, we have developed two experimental devices based on IR thermography.The first device consists in coupling IR thermography and gravimetric techniques for simultaneous recording temperature and adsorbed mass evolution during CO 2 adsorption on adsorbent thin-layer.The second device consists in a millifluidic adsorption column in which the temperature field during adsorption process is also detected by IR thermography.
The thin-layer approach allows the identification and estimation of main parameters influencing CO 2 capture.These influent parameters, such as the heat of adsorption, can be used as key parameters for high throughput selection and overall comparison of materials.The millifluidic adsorber allows a high throughput study of operational conditions influencing temperature evolution and the estimation of thermal front rate, which is related to bed saturation.Moreover, a multi-scale analysis from thin-layer to adsorbent bed can be achieved.
The developed systems will be applied to perform systematic studies to better understand complex phenomena such as heat diffusion and transport.
Finally, thanks to its rapidity, the presented approach is interesting for preliminary design of processes to be used at industrial scale.
Fig. 2 .
Fig. 2. (a) Recorded average temperature and adsorbed-phase evolution of adsorbent thin-layer during CO2 adsorption.(b) Plot of time derivative of temperature versus temperature at every time step.
Fig. 4 .
Fig. 4. (a) Average temperature in different sections of adsorber, from L0 = 0 to L f = 15 mm (see Fig. 3b) as a function of adsorption time.(b) Propagation speed of thermal front as a function of CO2 flow rate. | 4,054.4 | 2013-01-01T00:00:00.000 | [
"Engineering"
] |
Phase-dependent light propagation in atomic vapors
Light propagation in an atomic medium whose coupled electronic levels form a diamond-configuration exhibits a critical dependence on the input conditions. In particular, the relative phase of the input fields gives rise to interference phenomena in the electronic excitation whose interplay with relaxation processes determines the stationary state. We integrate numerically the Maxwell-Bloch equations and observe two metastable behaviors for the relative phase of the propagating fields corresponding to two possible interference phenomena. These phenomena are associated to separate types of response along propagation, minimize dissipation, and are due to atomic coherence. These behaviors could be studied in gases of isotopes of alkali-earth atoms with zero nuclear spin, and offer new perspectives in control techniques in quantum electronics.
I. INTRODUCTION
Experimental evidence has demonstrated that the nonlinear optical properties of laser-driven atomic gases exhibit counter-intuitive features with promising applications. A peculiarity of these media is the possibility to manipulate their internal and external degrees of freedom with a high degree of control. Recently the control of the internal dynamics in an atomic vapor by means of electromagnetically induced transparency (EIT) [1] was demonstrated for the generation of four-wave mixing dynamics [2] and of controlled quantum pulses of light [3,4]. Zeeman coherence has also been used to induce phase dependent amplification without inversion in Samarium vapors [5] and in HeNe mixtures [6]. In another experiment, the interplay of internal and external degrees of freedom in an ultracold atomic gas by means of recoil-induced resonances [7] was used to achieve waveguiding of light [8]. From this perspective, it is important to identify further possible control parameters on the atomic dynamics for the manipulation of the non-linear optical response of the medium.
Recent studies have been focusing on the dynamics of light interacting with atoms featuring coupled energy levels in a so-called 'closed-loop' configuration [9,10]. In this configuration a set of atomic states is (quasi-) resonantly coupled by laser fields so that each state is connected to any other via two different paths of coherent photon-scattering. As a consequence, the relative phase between the transitions critically influences dynamics [9] and steady states [10,11,12]. Applications of closed-loop configurations to nonlinear optics have featured double-Λ systems where two stable or metastable states are -each-coupled to two common excited states. A rich variety of nonlinear optical phenomena has been predicted [11,13,14] and experimentally observed [2,5,6,15,16,17,18,19]. In [19], in particular, it has been shown experimentally that the properties of closed-loop configurations can be used to correlate electromagnetic fields with carrier frequency differences beyond the GHz regime. Moreover, coherent control based on the relative phase in closed-loop con-figuration has been proposed in the context of quantum information processing [20]. Each transition |i → |j is resonantly driven by a laser field at frequency νij. Here, |g is the ground state, |1 and |2 the intermediate states, which decay into the ground state at rates γ1g and γ2g, respectively, and |e the excited state, which decays with rates γe1 and γe2 into the corresponding intermediate states. Each pair of levels is coupled by two paths of excitation, hence the dynamics depends critically on the relative phase between the paths. The coherent dynamics of the ♦configuration is equivalent to that of the double-Λ scheme, whereas the radiative instability of the atomic levels differs.
In this work we investigate the phase-dependent dynamics of light propagation in a medium of atoms whose energy levels are driven in a closed-loop configuration, denoted by the ♦ (diamond) scheme and depicted in Fig. 1. This configuration consists of four driven transitions where one ground state is coupled in a V-type structure to two intermediate states, which are in turn coupled to a common excited state in a Λ-type structure. It can be encountered, for instance, in (suitably driven) isotopes of alkali-earth atoms with zero nuclear spin [21]. Although the coherent dynamics of ♦ schemes is equivalent to that of double-Λ systems [9], the steady states of the two systems exhibit important differences due to the different relaxation processes [11,12].
The dynamics of light propagation in a medium of ♦atoms is studied by integrating numerically the Maxwell-Bloch equations. We find that, depending on the input field parameters, the polarization along the medium can be drastically modified. The propagation dynamics may exhibit two metastable values of the relative phase, namely the values 0 and π, corresponding to a semi-transparent and to an opaque medium, respectively. For different values of the initial phase, light propagation along the medium tends to one of these two values, depending on the input values of the driving amplitudes. These two types of the medium response are supported by the formation of atomic coherences leading to a minimization of dissipation by depleting the population of one or more atomic states. This phase dependent behavior, selected at the input by the operator, offers promising perspectives in control techniques in quantum electronics.
The article is organized as follows. In Sec. II the model is introduced and discussed. In Sec. III the results for the dynamics of light propagation, solved numerically from the equations reported in Sec. II D, are reported and discussed in some parameter regimes. Conclusions and outlooks are reported in Sec. IV. The appendices present in detail equations and calculations at the basis of the model derived in Sec. II.
II. THE MODEL
We consider a classical field propagating in a dilute atomic gas along the positive z-direction. The field is composed of four optical frequencies ν 1g , ν 2g , ν e1 and ν e2 , its complex amplitude is a function of time t and position z of the form where k ij denotes the wave vector and e ij the polarization of the frequency component ν ij . The input field enters the medium at z = 0, and the effect of coupling to the medium is accounted for in the z dependence of the amplitude E ij (z, t) and phase φ ij (z, t) whose variations in position and time are slow with respect to the wavelengths λ ij = 2π/k ij and the oscillation periods T = 2π/ν ij , respectively. The atomic gas is very dilute and we can assume that the atoms interact with the fields individually. In particular, each field component at frequency ν ij drives (quasi-) resonantly the electronic transition |i → |j of the atoms in the medium, such that the atomic levels are coupled in a ♦-shaped configuration.
The relevant atomic transitions and the coupling due to the lasers are displayed in Fig. 1. The ground state |g is coupled to the intermediate states |1 , |2 at energies ω 1 , ω 2 by transitions with dipole moments d 1g = 1|d|g and d 2g = 2|d|g , respectively. The intermediate states decay back into the ground state at decay rates γ 1g and γ 2g . The intermediate states are also coupled to the excited state |e at an energy ω e with respect to the ground state |g , by the dipole transitions d e1 = e|d|1 , d e2 = e|d|2 . The excited state |e decays into states |1 and |2 at rates γ e1 and γ e2 , respectively. A similar configuration of levels can be found in isotopes of alkali atoms with zero nuclear spin [21].
The light fields propagating through the dilute atomic sample will induce a macroscopic polarization in the atoms. This polarization will depend on intensities and phases of the light fields. The polarization, in turn, will affect absorption and refraction of the light fields, altering their propagation. Below we introduce the equations for field propagation and the corresponding atomic dynamics.
A. Equations for field propagation
We denote by P(z, t) the macroscopic polarization induced in the atomic gas whered is the dipole operator, n is the density of the medium, which we assume to be zero for z < 0 and uniform for z > 0, and σ(z, t) is the atomic density matrix at time t and position z, which has been obtained by tracing out the other external degrees of freedom. Details of the underlying assumptions at the basis of Eq. (2) are discussed in Appendix A. We decompose the polarization P(z, t) into slowly-and fast-varying components, namely P(z, t) = 1 2 i,j P ij (z, t)e ij e −i(νij t−kij z+φij (z,t)) + c.c., (3) whereby the complex amplitudes P ij and the phases φ ij vary slowly as a function of position and time. We consider the parameter regime where the driving fields are sufficiently weak so that the generation of higher-order harmonics can be neglected. By comparing Eqs. (2) and (3), the amplitudes P ij can be expressed in terms of the elements of the atomic density matrix σ, where σ ij = i|σ|j . We have expressed the dipole moments in direction of the electric field polarization as e ij · d ji = D ij e −iθij , thereby separating the complex amplitudes P ij into modulus and phase. Here, the term D ij is real, θ ij are the dipole phases (θ ij = −θ ji ), and is the sum of the slowly-varying field phases φ ij (z, t) and the dipole phases θ ij . Using definitions (1) and (3) and applying a coarsegrained description in time and space, the Maxwell equations simplify to a set of propagation equations for each of the slowly-varying components of the laser and polarization fields [22] ∂E ij ∂z which are defined for z > 0. Here, each amplitude E ij and phase φ ij is coupled via the corresponding polarization P ij to all other field amplitudes and phases. We rescale the propagation equations using the dimensionless length and time Here κ 1g is the absorption coefficient cǫ 0 such that 1/κ 1g determines the characteristic length at which light driving the transition |g → |2 penetrates a medium with density n. We denote the dimensionless field amplitudes by where is the real valued Rabi frequency for the transition |i → |j . In this notation the propagation Eqs. (6) and (7) reduce to the form where denotes the atomic density matrix elements in a rotated reference frame. In the remainder of this paper we consider laser field geometries where |1 and |2 are states of the same hyperfine multiplet so that ν 1g ≃ ν 2g and ν e1 ≃ ν e2 .
B. Atomic dynamics
The time evolution of the density matrix σ(z, t) for the atomic internal degrees of freedom at position z > 0 is governed by the master equatioṅ where z is a classical variable. Equation (14) is obtained by tracing out the degrees of freedom of momentum and of position in the transverse plane, in the limit in which the medium is homogeneously broadened and the atoms are sufficiently hot and dilute such that their external degrees of freedom can be treated classically. Details of the assumptions at the basis of Eq. (14) are reported in Appendix A. Here the Hamiltonian describes the coherent dynamics of the internal degrees of freedom, and it depends on z through the (real-valued) Rabi frequency Ω ij (z, t) given in Eq. (10), and through the field and dipole phases, Eq. (5). The states |1 , |2 and |e are unstable and decay radiatively with rates γ 1g , γ 2g and γ e = γ e1 + γ e2 , respectively. The relaxation processes are described by where the recoil due to spontaneous emission is neglected since the motion is treated classically. In the remainder of this paper we assume a symmetrical decay of the excited level, γ e1 = γ e2 = γ e /2. We note that the transitions |g → |j (j = 1, 2) are saturated when Ω jg ≥ γ jg . Correspondingly, the upper transitions |j → |e are saturated when Ω ej ≥ γ e + γ jg . For later convenience, we introducẽ which explicitly shows the scalings of the upper field amplitudes with the corresponding decay rates.
C. The relative phase
In so-called closed-loop configurations, like the ♦ scheme, transitions between each pair of electronic levels are characterized by -at least-two excitation paths, involving different intermediate atomic levels [10,13]. In the ♦ scheme the relative phase between these excitation paths critically determines the solution of the master equation, and hence the atomic response during propagation. The role of the relative phase in the atomic response is better unveiled by moving to a suitable reference frame for the atomic evolution, which is defined when all amplitudes E ij are nonzero.
We denote by ρ the density matrix in this reference frame, obeying the master equatioṅ In this reference frame the Hamiltonian (15) is transformed to [9,12] with the detunings The Hamiltonian (19) exhibits an explicit dependence on the phase where with χ ij as defined in Eq. (5). The four-photon detuning ∆ν results in a timedependent phase, the wave-vector mismatch ∆k in a position dependent phase, and ∆χ(z, t) comprises the relative dipole and field phases. In [11,12] it has been discussed how Θ(z, t) affects the dynamics and steady state of the atom. The latter exists for ∆ν = 0 and in the remainder of this article we assume ∆ν = 0, ∆k = 0, i.e., the atoms are driven at four-photon resonance and by copropagating laser fields, such that the wave vector mismatch is negligible. Hence, the phase depends solely on the relative dipole phase, which is constant, and on the relative phase of the propagating fields, which evolves according to the coupled Eqs. (11) and (12).
D. Propagation of the field amplitudes and phases
Having introduced the basic assumptions and definitions, we now report the equations for the propagation of the field amplitudes and phases in the ♦-medium, which are numerically solved in Sec. III. We relate the elements of the density matrix ρ in the new reference frame with the elements p ij from eq. (13) by ρ g1 = p g1 , ρ g2 = p g2 , ρ e1 = p e1 , and The propagation equations for the light fields in the new reference frame can then be obtained from Eqs. (11)- (12) and take the form for j = 1, 2 and where we have introduced the variable These equations describe the evolution of field amplitudes and phases as a function of the atomic density matrix elements ρ ij . In turn, the values of ρ ij depend on the field amplitudes and the relative phase Θ according to Eqs. (18) and (19). The propagation dynamics now can be investigated by solving the coupled Eqs. (18) and (27) The numerical study of the solutions of Eqs. (27)-(32) presented in this paper is restricted to certain parameter regimes that single out the role played by the phase and the radiative decay processes in the dynamics. In particular, we consider the situation where each atomic transition is driven at resonance, namely ∆ i = 0, for i = 1, 2, e. Moreover, we restrict ourselves to the regime where the fields are initially driving the corresponding transitions at saturation. This latter assumption is important to guarantee a finite occupation of the excited state |e , and thus to highlight the dependence of the dynamics on the relative phase Θ.
During propagation, it may occur that one of the field amplitudes vanishes in just one point of the propagation variable ξ ′ . When this happens, the relative phase Θ is not defined and its value has to be reset manually by imposing continuity of the trajectory, when integrating the field equations in amplitude and phase (see Eqs. (27)-(32)). The correctness of this procedure has been checked by comparing the results with those obtained by integrating the field equations for the real and imaginary parts of the complex field amplitudes.
III. LIGHT PROPAGATION IN THE ♦-MEDIUM
In this section we summarize some peculiar properties of the ♦-level scheme, which have been extensively discussed in [12]. These properties provide an important insight into the propagation dynamics, which we study by solving numerically the Maxwell-Bloch Equations, Eqs. (18) and (27)-(32), in the regime where the input fields couple resonantly and saturate the corresponding electronic transitions, as described in Sec. II D.
A. Symmetries of the ♦-level scheme Before entering the detailed discussion of the numerical results, it is instructive to review some basic properties of the ♦-level scheme, which significantly affect its response to light-propagation. Special symmetries of this configuration are encountered when the laser amplitudes, resonantly driving the upper (lower) transitions, are initially equal, namely when In this regime, the Hamiltonian (19) substantially simplifies for some values of the phase. In particular, for Θ = 0, π (modulus 2π) the dynamics can be mapped to those of well-known three-level schemes [12]. Insight is gained by studying Hamiltonian (19) Here, |Ψ ± eg are symmetric and antisymmetric superpositions of the states |e and |g . However, if spontaneous decay is included, the two schemes are not equivalent. The relaxation processes select one configuration over the other depending on the stability of state |Ψ − eg , which decays at a rate γ e , with respect to the stability of state |Ψ − 12 , which decays at a rate γ 1g + γ 2g . It is then important to introduce the parameter which is the ratio between the decay rates of the two decoupled states, or, equivalently, the ratio between the decay of the excited and intermediate states. Hence, for α ≪ 1 (i.e. the excited state is longer lived than the intermediate ones), |Ψ − 12 is essentially empty, and the effective dynamics can be mapped to a Ξ-level scheme, see Fig. 2(b). For α ≫ 1 instead (i.e. the intermediate state is longer lived than the excited one), |Ψ − eg is empty, and the effective dynamics can be mapped to a Λ-level scheme, see Fig. 2(c). Hence, if the ratio α is sufficiently different from unity, the dynamics of the ♦scheme can be mapped to three-level schemes and leads to coherent population trapping (CPT) [1] in the stationary state. In the Λ case (α ≫ 1), a large coherence between the intermediate states is observed as reported in the transient dynamics of pulse propagation in a medium of ♦-atoms [15]. Here, for some parameter regimes one can observe population inversion at steady state on the transition |g → |1 , |2 [12]. In the Ξ case (α ≪ 1), a macroscopic coherence between ground and excited states is created. For some parameter regimes one can observe population inversion at steady state on the transition |1 , |2 → |e [25].
These properties have important consequences for the propagation dynamics. We note that for Θ = 0, π the components of the polarizations Re(ρ 1g ), Re(ρ 2g ), Re(ρ 1e ), Re(ρ 2e exp(iΘ)) vanish. This means that the field phases remain constant upon propagation in agreement with Eqs. (28), (30) and (32). Hence, if at the input then ∂Θ ∂ξ = 0 (37) and the relative phase remains constant during propagation along the medium. We recall that for Θ = π we observe a V-type dynamics (from now on denoted as destructive interference) and for Θ = 0 metastable CPT on a Ξ or Λ-scheme (from now on denoted as constructive interference). Hence, from these simple considerations we expect that for different values of the input phase and relaxation rates, energy will be dissipated at very different rates along the medium.
B. Destructive interference in the atomic excitations
For Θ(0) = π the atoms are perfectly decoupled from the upper fields independently of their intensity and the upper state is empty as described in III A. Destructive interference makes the polarizations of the transitions between the intermediate and the upper states as well as the population of the excited state to vanish identically, i.e., ρ e1 = ρ e2 = ρ ee = 0 [12]. Correspondingly, the dynamics of light propagation of the lower fields is expected to be that encountered in a medium of V-atoms. Figure 3(a) displays the propagation dynamics along the medium for Θ(0) = π and equal initial field amplitudes, G ij (0) = G 0 . Here, one sees that the upper fields propagate through the medium as if it were transparent, keeping a constant value. The amplitudes of the lower fields display identical decays. Figure 3(b) presents the corresponding populations of the energy levels along the medium. The energy level |e remains depleted while the intermediate states |1 and |2 maintain the same population as a function of ξ corresponding to the fact that the lower fields decay identically along the medium. The value of ground and intermediate state populations is the saturation value of the corresponding dipole transition until about ξ ∼ 200 when the lower fields G jg (ξ) do not saturate the transition any longer. After this penetration length only the ground state is appreciably occupied. Note that these dynamics are independent of the upper field amplitudes, as they remain decoupled from the atoms.
We can find an analytic expression for the dynamics shown in Fig. 3 and for the propagation length of the lower fields by solving the propagation equations (19) and (27)-(32) for Θ(0) = Θ = π. Setting G ej = G e and G jg = G g , we obtain the equations for the dimensionless amplitudes Here the right hand side in Eq. (39) vanishes since Im{ρ ij (Θ = π)} = 0. Therefore, the relative phase Θ and the upper field amplitudes G e are constant along the medium and the medium is transparent for the upper fields. Equation (38) is the equation for an electric field propagating in a medium of resonant dipoles so that the lower field amplitudes G g decay during propagation at a rate that depends only on the value of G g itself. In the case of large input intensities (see Fig. 3) a simple equation for G g (ξ) is obtained [23] G g (ξ) = G 2 g (0) − ξ/2 allowing for an estimate of the penetration depth (2G 2 g (0)) of the lower fields in the medium.
C. Constructive interference in the atomic excitations
As discussed in section III A, for Θ = 0 the response of the system is similar to that of a Ξ or of a Λ level scheme, depending on the ratio of the decay rates α in Eq. (35). Atomic coherences between either the intermediate states or the ground and excited state may form, and correspondingly the imaginary part of the polarizations may become very small, thus reducing dissipation. Figure 4 displays the propagation dynamics along the medium for different values of the ratio α for Θ(0) = 0. For α ≫ 1 and α ≪ 1 the amplitudes decay slowly as a function of ξ, as expected from the formation of EITcoherences. Figure 5 shows light propagation for the case when the initial conditions of the fields give rise to population inversion at steady state due to metastable CPT. We find that population inversion is maintained until ξ ∼ 100 along the absorbing medium, but it gradually decreases, since the atomic coherences that are supporting CPT are not stable.
In the simulations of Figures 4 and 5, the lower (upper) field amplitudes remain equal during propagation. If we assume that G ej = G e and G jg = G g for all relevant ξ, then with D 0 = G 4 e (1 + α) + (1 + 4G 2 g ) α (G 2 g + α + 2α 2 ) +G 2 e α (2 + 3α) + G 2 g (1 + 3α + 2α 2 ) . (43) Equations (41) and (42) describe the dissipative propagation of the field amplitudes, and exhibit a nonlinear dependence on the amplitudes and the ratio α of the decay constants. Here, one can see that for different values of α the absorption lengths can vary by orders of magnitude. Limiting cases are found for α → 0, i.e. when the excited state is stable, and for α → ∞, i.e., when the intermediate states are stable. In these cases, the right hand sides of Eqs. (41) and (42) vanish, damping is absent, and light propagates through the medium as if it were transparent [24].
D. Stability of interference under amplitude fluctuations
In the cases discussed so far, the input phase is a constant of the propagation, and the amplitudes of the upper fields, as well as the amplitudes of the lower fields remain equal along the medium. We now address the question of stability of these configurations against phase and amplitude fluctuations.
Numerical investigations show that at Θ = π the Vlevel dynamics is robust against phase and amplitude fluctuations, from which we infer that this is a stable configuration. It should be remarked that the overall behavior is transient in that the medium dissipates the lower fields until (well inside the medium) the atoms are all found in the ground state.
The case Θ = 0 is more peculiar. In the cases discussed in Sec. III C, energy is exchanged between upper and lower fields until the lower field amplitudes go below saturation. Then, the upper fields decouple as the population of the intermediate states becomes negligible. In order to study the long term dynamics of propagation, we now focus on the regime where the lower transitions are driven well above saturation and where we may expect different length scales for the propagation of the upper and lower fields. Figure 6 displays propagation when the lower transitions are driven well above saturation for different values of α. The dynamics observed in the α = 1 case separates the regimes corresponding to a Ξ-like response for α ≪ 1, and a Λ-like response for α ≫ 1. For the separating case of α = 1 in Fig. 6(c) and (d) we find that the damping of the lower fields below saturation is accompanied by a drop of the population from intermediate to ground states. For α = 100, shown in Fig. 6(a) and (b) propagation is characterized by EIT-like coherences between the intermediate states, which are established through the medium by the action of the lower fields. These coherences increase the penetration depth of the lower fields in the medium and allow the upper fields to propagate quasi undamped. This is consistent with the behaviour discussed in Sec. III C. However, for α = 0.01 in Fig. 6(d) and (e) we observe a clear deviation from the symmetric decay of the upper field amplitudes at long propagation length.
We now focus on this case, which exhibits novel features with respect to the cases studied so far. In Fig. 6(e) one observes that the upper field amplitudes, G e1 and G e2 , initially equal to each other, undergo a transient behavior where they become different: energy is transferred from one field to the other, such that the amplitude of one increases while the other gradually vanishes. This behavior is accompanied by a depletion of the excited state, while the intermediate states continue to be equally populated. At the same time of the vanishing of one of the upper field amplitudes, the phase Θ jumps from 0 to π and energy is redistributed between the two upper fields till they reach almost the same value. After this transient, the field amplitudes of the excited states remain at a constant value across the medium. Correspondingly, during and after this transient, the excited state population in Fig. 6(f) decreases until it reaches zero. This remarkable behavior hints to an instability of the phase value Θ = 0, which seems to be triggered here by numerical fluctuations of the values of the upper field amplitudes. Such conjecture is supported by the numerical analysis shown in Fig. 7, where we have introduced fluctuations In this latter case, the upper field amplitudes,Ge1 andGe2, become different: energy is transferred from one field to the other, such that the amplitude of one increases while the other gradually vanishes. At this point, the phase Θ jumps to the value π, and energy is redistributed between the upper fields till they reach the same value. This behavior hints to an instability of the phase value Θ = 0, which seems to be triggered by numerical fluctuations of the values of the upper field amplitudes.
between the initial values of the upper field amplitudes. As the initial discrepancy increases, the splitting of the upper field amplitudes appears at earlier locations in the medium although the behavior of the lower fields is unaffected. More detailed investigations on populations and phases show that the imbalance between the upper field amplitudes induces a depletion of the excited state until the vanishing of one of the upper amplitudes forces a phase jump to the value Θ = π and the upper fields decouple from the atom. After the phase jump, the upper field amplitudes tend to recover an equal value, but they decouple from the atoms once the lower field amplitudes have vanished.
An explanation of the phase jump from Θ = 0 to Θ = π is the tendency of the system to minimize the rate of dissipation in a way reminiscent of what is observed in Four-Wave Mixing experiments where interference effects are generated in order to minimize spontaneous emission [26]. We also note that with increasing values of α, the splitting of the upper fields for the same initial difference in their amplitudes is progressively delayed inside the medium and eventually disappears.
E. Generic phase at the input fields
Having identified and investigated two special values of the input phase, we now address the question, how the phase evolves starting from a generic input value, and correspondingly, how light propagates and is dissipated along the medium. We restrict to the configuration with initially equal lower field amplitudes and equal upper field amplitudes (33) and (34), and vary the input phase Θ(0) = π/2 in steps of π/4. Although the lower field amplitudes are clearly damped for all values of α, the mechanism of radiation dissipation depends on α and on the initial strengths of the field amplitudes. This can be associated with a particular evolution of the phase along the medium, which in the cases displayed in Fig. 8 reaches the stable value Θ = 0, and in the cases displayed in Fig. 9 tends first to the value π before eventually reaching Θ = 0. The choice between these behaviors depends on the input amplitudes of the fields. We now discuss these two behaviors in detail.
In Fig. 8 all atomic transitions are driven at saturation, and the saturation of the upper transitions is larger or equal to that of the lower transitions. We observe that the relative phase of the fields tends to the zero value. Before this value is reached, radiation is damped at a fast rate. Once Θ = 0, the rate of damping of the lower field amplitudes changes abruptly to a lower value. This sudden change occurs at a propagation length determined by the typical absorption length of the fast decaying transition. The system simulates a V-configuration, thereby switching to an EIT-like response. A similar kind of behavior is also observed in a medium of the Double-Λ atoms where EIT-coherences are established between the two stable states [11]. In the ♦ configuration the coherences and interferences are transient [16]. For a slower decay of the excited state, however, the system can also switch to a Ξ-dynamics and exhibit a transient coherence between ground and excited states. A manifestation of this phenomenon is population inversion between the excited and the intermediate states along the medium in Fig. 8(f).
In Fig. 9 the lower transitions are driven well above saturation, and the corresponding saturation parameter is larger than the saturation parameter of the upper fields. Here, during a transient regime the phase slowly tends to Θ = π. Nonetheless, the tendency of the medium for long propagation lengths is to eventually decouple upper fields and atoms, i.e. to switch to a V-dynamics. The onset of this dynamics depends critically on the value of G g , which must well saturate the lower transitions with respect to G e in order to populate the intermediate states on a time scale shorter than their decay rate, but long enough for incoherent decay of the upper state to take place. This behavior is in agreement with Sec. III D, showing that when the lower transitions are driven well above saturation the value Θ = π is the only stable phase under amplitude and phase fluctuations.
F. Four-wave mixing
So far, we have considered input fields with equal lower and upper field amplitudes. We now discuss propagation when one field is initially very weak while the other three transitions are driven at or above saturation, and study how the dynamics of energy redistribution among the fields depends on the input parameters and on the stability of the excited state. Figures 10 and 11 display the field propagation when the upper field amplitude G e2 is very small and the phase is initially set to the value Θ(0) = π/2. In both figures we have assumed the excited state to decay slower than the intermediate states, but one may also observe amplification of the weak field under different conditions. In Fig. 10 the three input fields drive the respective transitions well above saturation. Here, amplification of the weak field is accompanied by the asymptotic approaching of the phase to Θ = π. The field G e2 is amplified until the upper state is depleted because of interference between the upper fields. From this point further the phase Θ = π is stable and the lower fields dissipate, until they drop below saturation. The jump of the phase to the value 0 is an artifact due to all atoms being in the ground state.
In Fig. 11 the three input fields G 1g , G 2g and G e1 are just saturating the respective transitions. Here, amplification of the field G e2 is accompanied by a transient stabilization of the phase at Θ = π. This is accompanied by a fast decrease of the lower field amplitude G g2 , until, when G g2 vanishes, the phase falls to Θ = 0. After this point the behavior changes and G g2 first increases slightly and then decays slowly as a function of ξ in a way similar to G g1 , while the upper field amplitudes remain constant. The final configuration supports a coherence between the excited and the ground state, and indeed for Θ = 0 and this value of α the dynamics can be mapped to a Ξ-level scheme. In particular, due to destructive interference, the fields are only weakly coupled to the transitions and the medium is semitransparent. This is also supported by Fig. 11(c) where one sees that the population is redistributed between the ground and the excited state while the intermediate states are depleted. In this regime the medium is characterized by population inversion between the excited and the intermediate states.
IV. DISCUSSION AND CONCLUSIONS
We have investigated numerically light propagation in a medium of atoms whose electronic levels are resonantly Curves (a-c) are obtained for input parameters α = 100, and Gg =Ge = 10. Curves (d-f) have input parameters α = 0.01, Gg = 5 andGe = 10. During propagation the phase tends to Θ = 0. Once this phase is reached, the rate of energy dissipation along the medium changes abruptly to a significantly lower level. In the case(d-f) this change is accompanied by the establishing of population inversion on the transition |1 , |2 → |e , see (f), due to the formation of EIT-coherences. driven by lasers in a ♦ configuration. Propagation is critically affected by the initial parameters of the input fields and shows the tendency to reach configurations which minimize dissipation. An important role is played by the relative phase Θ between the fields. It exhibits two fixed points, Θ = 0 and Θ = π, whose stability during propagation depends on the field amplitudes and on the ratio α between the rates of dissipation of excited and intermediate states. A generic input phase evolves, in general, to one of these values, again depending on the input amplitudes and α.
These two metastable phase values are associated with two different types of atomic coherences. The response of the medium, corresponding to the phase Θ = 0, is characterized by the formation of atomic coherences typical of EIT-media. Similar behaviors have been observed for instance in [15,16] and are analogous to the response predicted for light propagation in double-Λ media [11].
The response of the medium for the phase Θ = π is supported by a different type of interference which leads to a depletion of the upper state and to a complete decoupling of the upper fields from the atom. For this value of the phase, the medium acts like a V -level configuration. We note that this value of the phase appears to be the preferred value for the ♦ medium if the lower transitions are driven well above saturation. This behavior is novel to our knowledge and it is reminiscent of the phenomenon of suppression of spontaneous emission observed in fourwave mixing studies in atomic gases [26].
In general, the system exhibits a rich dynamics and several novel features due to atomic coherence which offer new perspectives in control techniques in quantum electronics. These could be studied in atomic gases where the ground state has no hyperfine multiplet, like e.g. alkali-FIG. 9: (color online) Propagation of relative phase (a), field amplitudes (b), and populations of the atomic levels (c), for various input phases Θ(0) = 0, ±π/4, ±π/2, ±3π/4, π. Curves (a-c) are obtained for input parameters α = 5, and Gg = 10,Ge = 1, and curves (d-f) for α = 0.5 and the same initial amplitudes. Here, the lower transitions are driven well above saturation and during a transient regime the phase tends to the value π, while the upper fields decouple. The transition to the value Θ = 0 at large values of ξ is an artifact, since for these lengths the lower fields are very weak and the atoms are essentially in the ground state.
earth isotopes which are currently investigated for atomic clocks [21].
In the future we will extend our analysis to the case in which the transitions are not resonantly driven and we will address the asymptotic behavior of the system following the lines of recent works [27,28]. , and atomic populations (c) for input parameters Θ = π/2, α = 0.1, Gg1 = Gg2 = G1e = 10, and G2e = 0.1. Energy is exchanged between fields G2e and Gg1 and also between G1e and Gg2 until the excited state decouples and the upper fields propagate freely. The jump of the phase to the value 0 is an artifact due to all atoms being in the ground state.
dxdpw(x, p) = N and N is the number of atoms. The spatial density of atoms n(x) is found from w(x, p) according to dpw(x, p) = n(x). In this work we assume uniform density, namely and H(z, t) is defined in Eq. (15). The Liouvillian L in Eq. (16) describes the relaxation processes, which we consider here to be purely radiative. The corresponding macroscopic polarization has the form P(x, t) = dpw(x, p) Tr{d̺(x, p)} (A3) Assuming that the atomic gas has been Doppler cooled, so that line broadening is homogeneous, the kinetic energy can be neglected in evaluating the atomic response to the light. By integrating over p and x, y we hence obtain Eq. (14), whereby σ(z) = dpdxdyw(x, p)σ(x, p), and polarization as in Eq. (2). | 9,342.2 | 2005-06-12T00:00:00.000 | [
"Physics"
] |
Dileptons and four leptons at Z' resonance in the early stage of the LHC
The LHC era just began. The first discovery at the LHC experiment would be arguably a new resonance pole at TeV scale, if it exists. While the discovery of the Z' would be exciting by itself, it may also suggest what other new physics signals should be looked for while the LHC experiment is still at its early stage. We argue that the four lepton resonance at the Z' pole is a well-motivated and promising signal especially in supersymmetry framework, which can serve as a supersymmetry search scheme even in the early stage of the LHC experiment.
I. INTRODUCTION
The Large Hadron Collider (LHC) era finally arrived. Though the LHC aims at many targets such as Higgs boson and supersymmetric particles, the earliest discovery is expected to be a resonance pole at TeV scale, if it exists. The dilepton resonance would be a clean signature even at the hadron collider, and the spin of the new gauge boson Z ′ can be easily verified from the angular distribution of the dilepton.
While it could be any neutral component of the SU (N ) gauge boson, we consider Abelian gauge group U (1) ′ for its origin. The U (1) ′ is predicted by many new physics scenarios including extra dimensions, grand unified theories, and string theories [36]. More recently, it has appeared in the hidden valley models [3].
Though Z ′ is considered as a source of the fermion pair in most experimental setup, it can couple to the gauge boson pair and the scalar pair as well. The gauge boson case, for example, W + W − pair is possible when there is a mixing between Z and Z ′ [4,5,6]. The scalar case can be a Higgs or other kind of scalar. The Z ′ decaying to the Higgs pair can possibly serve as a good channel for Higgs search. It can be probed by looking for heavy particles such as the electroweak gauge bosons or heavy fermions.
There may be other kind of scalars, in general, whose major decay mode is a clean signal of light charged leptons (e, µ). Sfermions are the scalars naturally provided in the supersymmetry (SUSY) framework, as superpartners of the fermions. Sfermions would decay to the lightest supersymmetric particle (LSP) through a cascade decay. If a sfermion is the LSP, it may still decay to the standard model (SM) fermions if the R-parity is absent. In contrast to the Higgs case where the dominant decay mode is related to the masses of final particles, the scalar LSP decay through the R-parity violating coupling may have the light fermions as a major decay channel. Superpartner pairs may be produced abundantly by the new resonance [7,8]. The sfermion LSP pair can decay to 4 fermions making a resonance at Z ′ pole in the absence of the R-parity.
If the LSP is the sneutrino ( ν), the lepton number violating term λLLE c would give 4 charged lepton resonance, which could be a clean signal even in the early stage of the LHC experiment (see Figure 1). Therefore,q the "Z ′ → scalar pair → 4 leptons" mode is not only a novel channel for the LHC but also a natural scenario motivated by the SUSY.
We consider this lepton number violating model with the ν LSP as our example to study the feasibility of the 4 lepton channel at the LHC. In general, it applies to any new physics scenario that has the Z ′ coupling to scalar and the scalar coupling to charged leptons though. In the SUSY case, the charged slepton cannot be too much heavier than the sneutrino, and it would serve as an additional source of the leptons. This additional contribution will not be included in our numerical analysis since the sneutrino part result alone is large enough to give evidence to support our conclusion. Since the charged sleptons can only increase the number of leptons, our result can still serve as a minimally expected signal. If the 4 lepton signals can be discovered early enough, it will serve as a new SUSY signal search scheme.
II. TEV SCALE U (1) ′ GAUGE SYMMETRY
A great motivation for the TeV scale extra Abelian gauge symmetry or U (1) ′ can be found in supersymmetrization of the SM. The general superpotential of the minimal supersymmetric extension of the SM (MSSM) before R-parity is imposed is given as following.
SUSY is, arguably, the best motivated new physics paradigm that can address various problems of the SM, most notably the gauge hierarchy problem. However, a mere realization of the supersymmetric SM has some issues that should be addressed. Among them are (1) proton decay problem, (2) dark matter candidate stability problem, and (3) µ-problem. SUSY needs a companion mechanism or symmetry that can address these problems. R-parity is the most popular SUSY companion symmetry, and it guarantees the stability of the LSP, providing a good dark matter candidate if a neutral particle such as neutralino happens to be the lightest among the superparticles. R-parity also prevents the proton decay through the renormalizable lepton number (L) violating terms (LLE c , LQD c , H u L) or baryon number (B) violating term (U c D c D c ). For this reason, the MSSM with R-parity has been most extensively studied among the supersymmetric models. Nevertheless, the R-parity does not prevent dimension five L and B violating terms (QQQL, U c U c D c E c ), which still can mediate too fast proton decay [9], and the µ-problem still needs another solution. Although R-parity may still be a valid SUSY companion symmetry, possibilities are limited, and it suggests us to consider an alternative SUSY companion symmetry.
It turned out that a new TeV scale Abelian gauge symmetry U (1) ′ is an attractive alternative to the Rparity[37]. We will consider this U (1) ′ as the SUSY companion symmetry. In this letter, after we briefly review how the U (1) ′ can help with aforementioned problems in the absence of the R-parity, we argue that this scenario suggests 4 lepton resonance at the Z ′ pole as a plausible channel that can be discovered at the LHC, after Z ′ discovery, even in the early stage of the experiment.
One of the problems of the MSSM is that it does not explain why its new parameter µ should be of electroweak (EW) scale as the natural EW symmetry breaking requires [11]. The U (1) ′ gauge symmetry can replace the original µ term (µH u H d ) with an effective µ term (hSH u H d ) with a Higgs singlet S that spontaneously breaks the U (1) ′ gauge symmetry (see, for examples, [12,13]). The sfermion masses get extra D-term contributions of the U (1) ′ breaking scale, which should not be much larger than TeV scale in order to preserve the solution to the gauge hierarchy problem. Once the U (1) ′ is broken at TeV scale, the effective µ parameter of EW/TeV scale is dynamically generated. One of the direct implications of the model is the existence of a new gauge boson at TeV scale, which is accessible with the LHC. The mass of the Z ′ gauge boson is given by
charges (vacuum expectation values) of the Higgs fields
H u , H d , and S, respectively. With TeV scale S , Z ′ mass is expected to be at the same scale. See a recent review [2] and references therein for general aspects of the U (1) ′ including the U (1) ′ breaking mechanism.
The R-parity could be added on top of the U (1) ′ or even as a discrete subgroup of the U (1) ′ in its equivalent form of matter parity, in principle. In this letter, we consider the U (1) ′ as an alternative of the R-parity and consider only the case that R-parity is not conserved. This allows the L or B violating terms, which are in fact one of the most general predictions of the SUSY. General review of the R-parity violation can be found in Ref. [14].
Without the R-parity, the proton may still be sufficiently stable even at the higher dimension level due to the U (1) ′ . The U (1) ′ can have B 3 (baryon triality) [15] in the MSSM sector naturally as its residual discrete symmetry [16,17] and the proton decay (∆B = 1 process) is completely forbidden by this selection rule [18]. The LSP can decay without the R-parity and it is not a good dark matter candidate anymore in general. However, the U (1) ′ may have a new parity (called U -parity) as its residual discrete symmetry for the hidden sector, and the lightest U -parity particle (LUP) can be a good hidden sector dark matter candidate [19]. The discrete symmetries for both the MSSM sector (B 3 ) and the hidden sector (U 2 ) can be originated from the common U (1) ′ gauge symmetry that solves the µ-problem, which makes the model highly economic [20,21].
It was also shown that the LUP dark matter can satisfy current experimental constraints from direct detection and relic density [19]. So it is quite clear that the R-parity violating U (1) ′extended supersymmetric model is not only realistic but also highly motivated alternative supersymmetric model to the usual R-parity conserving MSSM, which can address all aforementioned problems.
III. COUPLINGS
Now the question is how we can distinguish this model from the usual MSSM besides the Z ′ pole at the LHC. How do we know if the discovered Z ′ originated from the U (1) ′ of the previous section?
One possible way is to connect the Z ′ with the L violating process. We consider a chain of process where the Z ′ decays into the LSP pair, which then decay into the SM particles through the L violating interaction at the LHC experiment. The ν LSP pair can decay into 4 SM fermions through the L violating terms (λLLE c , λ ′ LQD c ). For the sake of simplicity in the numerical analysis, we assume a hierarchy of |λ ′ | ≪ |λ|. Then the ν LSP will decay only leptonically, and also we will not have to consider dilepton production through s-channel sneutrinos or t-channel squarks which all need sizable λ ′ couplings at the LHC.
If we see both dilepton resonance and 4 lepton resonance at the same invariant mass, it will strongly hint that the R-parity is violated and the U (1) ′ is acting as a SUSY companion symmetry. This is a novel channel to produce 4 fermions, and we even know where to look since the dilepton resonance will tell us the mass and width of the Z ′ .
We do not assume any right-handed neutrino or sneutrino, and take our sneutrino LSP a pure left-handed one. The active neutrino can still get mass through the L violating couplings without any right-handed neutrino [22,23].
The partial decay width of the ν into dileptons with λ ijk L i L j E c k is given by The SU (2) L gauge invariance requires λ ijk = −λ jik , which results in 9 independent parameters in λ ijk .
The ratio among the 4 light charged lepton final states from the ν LSP pair, for universal λ coupling, is given in Table I when signs of charge are ignored. For sneutrino mass of a few × 100 GeV, universal |λ| coupling as large as O(|λ|) ∼ 10 −3 is allowed by the lepton flavor violation constraint (such as µ → eee) [14]. Since the λ coupling is the only channel the ν LSP can decay through, the exact value of λ is not relevant unless it is too small, causing a displaced vertex. Taking into account of the τ decay into the light leptons with additional neutrinos as well as nonuniform λ may alter the ratio.
For a quantitative analysis, we need to specify our Z ′ couplings for the SM fermions [38]. The general U (1) ′ charges (z) for the SM fermions, which has the B 3 and the solution to the µ-problem, is given by [16,17] when we assume no SU (2) L exotics. The coefficient b is any real number, which is from the hypercharge shift invariance property of discrete symmetries. The charges can be normalized arbitrarily with a scale factor A.
The ratio of a partial decay width of Z ′ into the sneutrino to that of the charged lepton, for each flavor, is determined independent of the U (1) ′ charge assignment as which is about 1/10 if m e ν ≪ M Z ′ . In fact, this ratio always holds independent of model as long as there is a LLE c term [27], which provides the relation Therefore the sneutrino pair produced from the Z ′ decay is expected to be about 10% of the charged lepton pair generically. We take two study points with common m e ν = 200 GeV. The SM fermion contributions to the Z ′ width, in the massless fermion limit, is given by There will be additional contributions such as from Higgs and superparticles, which are hard to quantify since they depend on specific spectrums. It is sufficient for our purpose to assume the total decay width is given by Γ Z ′ = 1.1Γ SM , and also assume the Γ Z ′ is 5% of the M Z ′ . For a detail of the supersymmetric contribution to the Z ′ width, see Ref. [7]. The product of the U (1) ′ coupling constant g Z ′ and the scale factor A is then determined as shown in Table II. The table also Figure 2 shows the production cross sections of pp → Z ′ → ℓ + ℓ − (solid curve) and pp → Z ′ → ν ν * (dashed curve) for a single flavor of the charged lepton and the sneutrino for our study points in the LHC experiment with E CM = 14 TeV. For numerical analysis, we use CompHEP/CalcHEP [29,30] and the parton distribution function of CTEQ6L [31]. In agreement with Eq. (8), the sneutrino pair production is slightly smaller than 1/10 of the dilepton production.
IV. DISCOVERY REACH
Now, we investigate the required luminosity for the 4 lepton events in comparison to the dilepton events. We use a single lepton flavor for dilepton case, and also a single sneutrino flavor for 4 lepton case. The production cross section of the dilepton events will be doubled from that in Figure 2 if both electron and muon flavors are counted. If three generations of the sneutrino are degenerate, the total sneutrino production will be tripled.
We require the basic cuts on p T , η, m inv as follows.
• p T > 20 GeV (each lepton) • |η| < 2.4 (each lepton) We find the background is negligible with these cuts for our study points. Instead of the significance of the signal over background, we just require 10 signal events that pass the cut to claim discovery. Figure 3 (solid curve) shows the discovery reach of Z ′ through the dilepton search at the LHC. For example, the necessary luminosity to discover the resonance by 10 dilepton events for M Z ′ = 2 TeV is (for a single flavor) Considering that the first year LHC run is expected to gain total luminosity of about 1 fb −1 , the Z ′ is expected to be discovered in the early stage of the LHC.
For 4 lepton signal (pp → Z ′ → ν ν * → 4ℓ), the SM background is pp → V V → 4ℓ (V V = ZZ, γγ, γZ) only with ℓ + ℓ − ℓ ′+ ℓ ′− type. The SM background is small, and furthermore the 4 lepton flavors from the ν LSP pair decay depends on the details sensitively. For example, it is hard to have eµµµ from the SM whose invariant mass can be around the Z ′ resonance. The pp → H → ZZ → 4ℓ [39] is not considered as the background here since the Higgs boson is still a new particle we need to search for. where Br(4ℓ) is the branching ratio of a sneutrino pair decaying to 4 light charged leptons. The 4 leptons can be discovered even in the early stage of the LHC depending on M Z ′ , ν decay branching ratio, and other parameter values. An individual ν decay branching ratio depends on the flavor of the ν and λ ijk (see Table I, for example).
It is important to keep in mind that these processes can be fully reconstructed and the center-of-mass frame can be found for each event. Therefore, in principle, one can also measure the spins of Z ′ and ν easily as well as the ν mass, which would be a discovery of the superparticle. One way to measure the spin of ν is to use the azimuthal angle between production and decay planes, which was recently advertised in Ref. [35]. Due to its scalar nature, a flat distribution is expected.
V. SUMMARY
In this letter, we suggested a search for the 4 lepton final states at the Z ′ resonance, with a support from the basic numerical analysis. Our channel is unique; a spin 1 particle decays into two spin 0 particles and each spin 0 particle decays into charged lepton pair. This is a well-motivated and attractive channel especially in the SUSY framework when the R-parity is replaced by the U (1) ′ gauge symmetry. The U (1) ′ can address the stability of the proton and the dark matter candidate without the R-parity [10]. The 4 lepton resonance channel can serve as a new SUSY search scheme even in the early stage of the LHC experiment when the sneutrino is the LSP. Inclusion of jets and missing transverse energy in the analysis would allow to test scenarios with other types of the LSP as well. | 4,264.4 | 2008-12-10T00:00:00.000 | [
"Physics"
] |
Effect of High Calcium Fly Ash, Ladle Furnace Slag, and Limestone Filler on Packing Density, Consistency, and Strength of Cement Pastes
Environmental considerations and technical benefits have directed research towards reducing cement clinker content in concrete, and one of the best ways to do this is to replace cement with supplementary cementitious materials. High calcium fly ash, ladle furnace slag, and limestone filler were investigated as supplementary cementitious materials in cement pastes, and binary mixtures were produced at 10%, 20%, and 30% cement replacement rates for each material. The water requirement for maximum packing and for normal consistency were obtained for each paste, and strength development was determined at 3, 7, 28, and 90 days for the 20% replacement rate. Furthermore, two ternary mixtures at 30% cement replacement were also prepared for maximum packing density and tested for compressive strength development. The results showed that high calcium fly ash decreased cement paste packing and increased water demand but contributed to strength development through reactivity. Ladle furnace slag and limestone filler, on the other hand, were less reactive and seemed to contribute to strength development through the filler effect. The ternary paste with 70% cement, 20% high calcium fly ash, and 10% limestone filler showed equivalent strength development to that of the reference cement paste.
Introduction
Supplementary cementitious materials (SCMs) are used increasingly in cement-based products, either for improving their properties or for reducing the carbon footprint of cement. Given that the hydration of ordinary Portland cement (OPC) is not yet understood in full, these materials bring even more complex reactions into the hydration process [1,2]. The benefit from utilization of SCMs lies either in their reactivity [3,4] or in the enhancement of cement hydration, as explained by the filler effect [5,6]. The environmental benefit from cement replacement with SCMs increases with the rate of replacement [7,8], but cement substitution must be limited to the extent that the performance of the final product is not undermined. The effect of SCM use could be beneficial to the performance of the produced concrete, depending on its fineness and on its cement substitution rate [9,10]. Cement substitution by a SCM is expected to affect the rate of strength development and the final strength, but also the water requirement and consistency of the cement paste [11,12].
SCM particles have a different size and specific surface area compared to Portland cement and, therefore, alter the microstructure and packing density of the cement paste. The particle size distribution of the cementitious materials has been found to influence both workability and hydration of cement pastes by improving their packing density [13,14]. Mixture design optimization by considering particle packing has been utilized in the design of ultra-high performance concrete [15,16]. There have been several mathematical packing models proposed to predict the packing density of multi-component mixes [17,18]. In order to predict the effect of cement substitution with SCMs, Yu et al. [19] proposed a linear packing model, considering the surface characteristics (sphericity) of particles. De Larrard [20] proposed the compressive packing model, considering the degree of compaction rather than particle surface characteristics, while Fennis et al. [21] proposed the compaction-interaction packing model, considering particle interaction and aggregate inclusion. According to Zhang et al. [22], the fresh cement paste can be seen as a suspension, with water either filling the voids between particles (filling water) or coating particles and providing fluidity (excess water). SCMs can act as fillers, improving the packing density of the suspension, which means that they can reduce the required amount of filling water. They can also provide space for the hydration of cement, accelerating strength development (filler effect). On the other hand, large SCM particles could block smaller cement particles from hydrating (wall effect) and a large proportion of small SCM particles could increase the distance between hydrated cement particles (loosening effect), affecting the consistency of cement pastes [23]. Mehdipour and Khayat [24] suggested that the presence of more fine particles than the amount required to fill the voids in the cement matrix contributes to the flowability of concrete. SCM particles themselves, on the other hand, may be contributing directly to strength development, if they exhibit hydraulic or pozzolanic properties.
There are several well-known SCMs, such as siliceous fly ash, silica fume, ground granulated blastfurnace slag, and limestone filler (LF). Their availability, however, varies locally, and several other materials are being researched, based on local availability, such as high calcium fly ash (HCFA), metakaolin, ladle furnace slag (LFS), and rice husk ash [25]. The present research investigates the use of HCFA, a by-product of lignite-fired power plants; LFS, a by-product of the steelmaking process; and LF, ground natural limestone, in cement pastes. HCFA is known to exhibit both pozzolanic and self-cementing properties and has been used for the past decades in blended cement manufacturing [26,27]. LFS is a weak pozzolan with some latent hydraulic properties and is mostly considered as filler [28,29]. LF is receiving increasing attention in the literature as a SCM since it seems to promote cement hydration [30][31][32]. A combination of SCMs in ternary systems is often proposed since there seems to be some synergy between alternative materials of different chemical composition and of different fineness [33,34].
The aim of the present research was to investigate the effect of cement replacement with the above SCMs, considering packing density, in binary and ternary mixtures. Since the fineness and reactivity of the SCMs have an impact on fresh paste consistency and strength development, it is important to understand how increasing cement replacement and altering packing density affects these properties. Furthermore, the ternary binders consisting of OPC, HCFA and LFS or OPC, HCFA, and LF were studied to identify possible synergistic effects. The goal was to identify ways of designing cement pastes with SCMs in the most beneficial way possible, since successful implementation can result in maximizing the positive effects of cement substitution and increasing SCM use.
Materials and Methods
CEM type I 42.5 N according to EN 197-1 [35] was used as OPC for all the tests. HCFA was used unprocessed, as received from the power plant. LFS was water-quenched and air-cooled and then sieved through the 100 µm sieve. LF was used as received from the cleaning of the limestone aggregate silos in a ready-mixed concrete plant. Table 1 shows the chemical composition of all the materials used, measured by atomic absorption spectroscopy (AAnalyst 400, Perkin Elmer, Waltham, MA, USA). The loss on ignition (LOI) and the chloride, sodium, and sulfate ion contents were determined by ionic chromatography (Thermo Scientific, Waltham, MA, USA, Dionex ICS-1100) for all the materials used. Figure 1 shows the particle size distribution of the materials used, and Table 2 shows their median particle size diameter d50, specific surface area, and apparent specific density values. The particle size distribution, d50, and specific surface area of the fine materials were measured using a laser particle size analyzer (Malvern Mastersizer 2000, Worcestershire, United Kingdom). The apparent specific density of the fine materials was determined using a Le Chatelier flask, according to ASTM C188-14 [36]. The first step in the experimental program was to identify water demand when substituting OPC with each of the three alternative binders. The required water to cementitious material (w/cm) ratios were determined both for maximum packing density and for equal consistency. The wet packing density approach, as proposed by Wong and Kwan [36], was followed in order to determine packing density for pastes with 100% OPC and for pastes with 10%, 20%, and 30% cement replacement with HCFA, LFS, and LF. The w/cm for maximum packing was recorded, referred to as optimum water demand. The reduction of the w/cm ratio increases packing density up to the point that the water fills the voids amongst solid particles, but further water reduction decreases packing. Thus, the optimum water demand is determined at the point where packing density is maximized. However, at maximum packing, the workability of the fresh paste is typically very low and serves as a measure of the effect of SCM use on packing and maximum strength development. In order to determine water demand for workable pastes, the w/cm ratio for normal consistency, according to the Vicat method as described in European Standard EN 196-3 [37], was also determined for the same replacement rates.
The cement pastes were prepared in a laboratory mixer by adding water first and then adding the dry-mixed binders and mixing for 120 s. Additional mixing time of 30 s was allowed if required. The fresh pastes were placed in the 40 mm deep truncated conical Vicat mold and compacted on a vibration table for elimination of entrapped air and weighed for the determination of wet packing. Wong and Kwan [38] have identified packing density as the solid concentration ϕ, which is calculated from Equations (1) and (2) as follows: where V c is the solid volume of the cementitious materials and V is the volume of the mold. The solid volume V c can be calculated from the following formula: where M is the mass of the paste in the mold; ρ w , ρ α , ρ β are the densities of water and cementitious materials α and β, respectively; u w is the water to cementitious material ratio by volume (w/cm V ); and R α and R β are the volumetric ratios of cementitious materials α and β, respectively. After weighing, the truncated conical specimens, still in the mold, were subjected to measurement of Vicat plunger penetration, according to EN 196-3, and the depth of penetration was recorded. The depth of plunger penetration, with a minimum of 0 mm and a maximum of 40 mm, characterizes the consistency of the paste and was used as a measure of workability. Normal consistency is described in the standard as the consistency that allows the plunger to penetrate the specimen 34 mm. Lecomte et al. [39] followed a similar approach to characterize the packing ability of various cement pastes. A series of pastes, 8 to 12 for each paste formulation, was prepared for various w/cm ratios in order to identify the optimum water demand for maximum packing and to determine the w/cm ratio for a paste of normal consistency, which was selected as a suitable level of workability. The above procedure was carried out for 100% OPC as reference and for 10%, 20%, and 30% wt. OPC replacement with HCFA, LFS, and LF, resulting in ten different formulations.
Based on the w/cm ratios for optimum water demand and for normal consistency, binary pastes with 20% OPC replacement with HCFA, LFS, or LF were prepared and tested for compressive strength development at 3, 7, 28, and 90 days. At least six 40 mm cubic specimens were tested at each age and paste, after curing at 20 • C and 95% relative humidity. These were compared to reference 100% OPC cement pastes and were used to assess the contribution of each SCM to strength development, either at maximum packing (optimum water demand), or at equal fresh state workability (normal consistency). Since the tested SCMs had varying effects on workability and strength development, it was decided to test ternary binders to identify possible benefits from the interaction of binders. Based on the strength development test results, two ternary cement pastes, one with 70% OPC, 20% HCFA, and 10% LFS and one with 70% OPC, 20% HCFA, and 10% LF were prepared and tested for strength development at 3, 7, 28, and 90 days at optimum water demand. Scanning Electron Microscopy (SEM-JEOL 840A JSM, Tokyo, Japan) was also used to assess the crystals formation and the microstructure of the reference and ternary cement pastes.
Material Properties
The chemical composition of the materials, as shown in Table 1, shows that all the binders used were rich in calcium. This is expected to influence the kinetics of hydration in a different way compared to traditional silica-rich SCMs (siliceous fly ash, silica fume, ground granulated blastfurnace slag), especially due to the presence of free-CaO in HCFA and LFS; HCFA has both hydraulic and pozzolanic properties, while LFS can be described as a weak pozzolan with latent hydraulic properties and is often regarded as filler [40,41]. HCFA shows a relatively high sulphate ion content, which is not expected to affect cement hydration negatively [41].
Although SCM use for increasing particle density often relies on materials finer than cement, the particle size distribution of the SCMs used shows that all of them were coarser than cement. According to the literature, however, coarser SCMs may still contribute to cement hydration through the filler effect [13,22,42]. HCFA is the coarsest material, considering both median particle size diameter d50 and specific surface area. LFS and LF have similar specific surface areas but different particle size diameters. This occurrence can be explained by their surface characteristics, as LF is ground natural stone, resulting in more spherical shaped particles compared to LFS, which is molten and then waterquenched, resulting in more irregularly shaped particles. According to Sakai et al. [43], spherical-shaped particles are expected to increase packing density and fluidity of cementbound mixtures.
Effect on Wet Packing Density
Figures 2-4 show the relationship between packing density and w/cm ratio for various cement replacement rates with SCM. As can be seen in all cases, reducing w/cm ratio increases packing up to the point where the water is not sufficient to fill the voids between the particles. From the curves of the figures, it is easy to estimate the w/cm ratio for the maximum packing, but it is also possible to estimate the effect of each SCM on water demand and its impact on packing density of the cement paste. The use of HCFA, as shown in Figure 2, increases water demand considerably, even at 10% OPC replacement. It also reduces the solid concentration in the paste in all cases.
Cement replacement with LFS, on the other hand, seems to have a different effect. Although water demand was increased (to a lesser extent compared to HCFA), particle packing was equal or even slightly increased compared to that of the reference paste. As with HCFA, the rate of replacement, ranging from 10% to 30%, had little effect. Cement replacement with LF had a similar effect on packing to that of LFS, but more pronounced. At 10% and 20% OPC replacement rates with LF, packing density increased, while it decreased at the 30% replacement rate. Water demand increased, but only slightly. The effect of cement replacement with the various SCMs on packing density seems in all cases to be linked with their fineness. Indeed, an analysis of variance shows that the influence of fineness on packing density is statistically significant (p < 0.05) for all the materials. The finer materials (LFS and LF) contribute to the increase of solid concentration, while the coarser material (HCFA) decreases packing. OPC substitution with 10% and 20% LF results in the highest packing densities, while the lowest are observed with 30% HCFA substitution. HCFA, and LFS to a lesser extent, require increased w/cm ratios in order to reach maximum packing. An analysis of variance shows that the influence of w/cm ratio on packing density is statistically significant (p < 0.05), which means that increased water demand, mostly for HCFA, but also for LFS, results in lower packing density. This increase in water demand, however, does not seem to be linked with fineness, but can be associated with chemical composition, and more specifically, with free lime content [44]. LF shows a slight increase in water demand, which can be attributed to the water absorption of limestone particles.
Effect on Consistency
Figures 5-7 show the relationship between w/cm ratio and consistency, as expressed by the Vicat plunger penetration, for each SCM. Figure 5 shows that OPC replacement with HCFA increases the w/cm ratio for normal consistency, and the w/cm ratio needs to be increased further for higher replacement rates. This can be explained due to the higher water demand of HCFA but also due to its negative effect on packing density. OPC substitution with LFS (Figure 6), on the other hand, seems to have a minimal effect on consistency, and the w/cm ratio for normal consistency remains unchanged regardless of the rate of cement replacement. The same seems to be the case for OPC substitution with LF (Figure 7), where there seems to be a small increase in consistency at the 30% replacement rate. Again, the effect of LFS and LF on the consistency of fresh cement pastes can be linked with their effect on packing density, which agrees with the literature [45,46]. Table 3 shows the cement pastes tested for compressive strength development. The reference and binary mixtures were prepared either with w/cm ratios for maximum packing (optimum water demand) obtained from Figures 2-4 or with w/cm ratios for standard consistence, obtained from Figures 5-7. Two ternary mixtures were prepared with w/cm for maximum packing, determined following the same procedure as for the binary mixtures. Table 3 shows that 20% replacement with either HCFA, LFS, or LF at optimum water demand gave 92-94% of the 28-day compressive strength of the reference paste, despite lower packing density and increased w/cm. Figure 8 shows that the same pattern continued at 90 days, while HCFA had some accelerating effect at 7 days, which may be attributed to the presence of free lime. Strength development was similar for all of the SCMs in the binary systems, despite the fact that pastes with HCFA had higher w/cm ratios and reduced packing densities. For normal consistence, as shown in Figure 9, 20% OPC replacement with HCFA showed slightly better strength development compared to LFS and LF, again despite higher w/cm ratios and lower packing densities. The results show that the reactivity of HCFA had a greater effect on strength development than fineness, water demand, or packing. LFS and LF, on the other hand, seemed to be contribute to strength development mostly through the filler effect. Based on these results, it was decided to test ternary binders with 20% HCFA and 10% LFS or LF cement substitution. The results shown in Table 3 showed that the ternary binders reached a packing density close to that of the reference paste at a higher w/cm ratio; 28-day compressive strength, however, was higher when LF rather than LFS was used as the third constituent. The rate of strength development, as shown in Figure 10, shows that the accelerating effect at 7 days observed with 20% HCFA use was enhanced with an extra 10% LF replacement, while the 90-day compressive strength was equal to that of the reference.
Effect on Strength Development in Binary and Ternary Pastes
The results show a good synergy between HCFA and LF, also identified by other researchers. De Weerdt et al. [47] suggest that limestone interacts with the hydration products of OPC-fly ash systems and increases compressive strength. Thongsanitgarn et al. [48] have shown that the increase in strength development of OPC-fly ash systems is higher when finer limestone is added. Other researchers point out that different SCMs may co-operate in the cement paste matrix, despite showing different chemical activity and physical characteristics [49]. The same synergy, however, did not take place when HCFA was combined with LFS, resulting in reduced strength development (88% of the reference at 28 and 90 days). A possible explanation for this is that the calcium in LF was in the form of CaCO 3 , which is known to promote cement hydration [50], while in LFS, it was mostly in the form of CaO or Ca(OH) 2 , as shown from the values of loss on ignition in Table 1. SEM photos taken from samples at 28 days were used to explore the microstructure of the reference and ternary pastes (Figures 11-13). The pores observed were of similar size, while micro cracks were visible in all the cement pastes. Portlandite (Ca(OH) 2 ) crystals were identified in the pores of the reference paste, as a result of cement hydration ( Figure 11). Ettringite needle-shaped crystals, on the other hand, were visible in the pores of the ternary paste with 70% OPC + 20% HCFA + 10% LF, confirming enhanced reactivity in the pore solution.
Discussion
The results showed that the design of binders with SCMs can be optimized by considering their physical and mechanical characteristics. HCFA had reduced fineness and increased water demand compared to LFS and LF, which rendered pastes with OPC substituted with HCFA less workable. Combining OPC with HCFA and LFS or LF in ternary pastes showed a solution to this problem. Furthermore, considering packing density can aid the design process and help design pastes with either the lowest w/cm ratio at maximum solid concentration for strength optimization or with the w/cm ratio for the required consistency. Fineness has been shown to influence the packing density and consistency of pastes greatly, but reactivity and synergy between binders also seems to be very important for compressive strength development. The reactivity of HCFA seemed to contribute more to strength development compared to the finer LFS and LF, despite the increased water demand. Regarding fineness, the use of SCMs coarser than cement can still lead to equal packing densities but grinding the SCMs to higher fineness could lead to further improvements. The ternary mixtures with OPC, HCFA, and LFS or LF, however, showed packing densities similar to that of 100% OPC paste. HCFA with LF in ternary pastes with OPC showed good synergy in cement-based binders, reaching the strength development of the reference paste, despite having a higher w/cm ratio at the 30% replacement rate, which was not the case when LFS was used as the third constituent. Overall, it seems that designing ternary binders with suitable SCMs by considering particle packing could compensate for strength loss and result in equal performance with reduced cement content.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 5,105.8 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
Thermal Comfort Assessment during Winter Season: A Case Study on Portuguese Public Social Housing
: Many public social housing building stocks were constructed before the introduction of national thermal regulations, and, as a result, in some situations, energy poverty conditioning during severe winter seasons results in little to no heating habits involving active systems in order to improve building thermal performances. Besides rigorous summer seasons, climate change predictions also indicate rigorous winter seasons will occur that will prevail in some Iberia Peninsula locations, worsening this scenario for this Southern European region. Among others, understanding the extension of discomfort in social housing buildings during heating seasons is therefore essential so as to perceive the suitability of the building stock to deal with present and future climate scenarios. Thus, this article presents a thermal comfort assessment during a winter season period applied to two social housing dwellings located in Covilh ã , Portugal, inhabited by elderly residents, under realistic heating habits. An experimental campaign was performed and the results show that discomfort was found to be extremely significant for the majority of the occupied time. Passive means alone and resident heating habits were not enough to achieve proper indoor thermal and humidity conditions, resulting in important losses of well-being to the risk group of the elderly.
Energy Poverty
Besides the decrease in energy consumption, the eradication of energy poverty and the mitigation of climate change are currently considered the main challenges related to the building sector [1].
Among other definitions [2], energy poverty is defined as the inability of family units to pay energy bills and maintain comfortable indoor living standards. The repercussion of this phenomenon in Europe affects 50 to 125 million people, and is regarded by the European Commission as an urgent, but complex issue to solve, considering the interaction between diverse energy, social, and economic contexts [3,4]. For users who spend a significant amount of time in their dwellings-whether for personal, professional, or health issues, or due to special events such as the coronavirus pandemic-their wellbeing is directly affected once they are forced to experience several hours of thermal discomfort. This has a considerable impact on human health, especially for groups of people that present great vulnerability, like the elderly, who have already been the focus of some developed studies [5] and official guidelines [6] worldwide, especially concerning health issues that may even be associated with a risk of death [4]. Southern European countries are identified as some of the most vulnerable regions for energy poverty, where heating and cooling habits through the evolution of seasons in housing buildings are mainly intermittent or insignificant, regarding the use of active systems to correct poor building performances [7]. The European Energy Poverty Observatory established several indicators for the proper quantification and evaluation of energy poverty, considering that many Southern European countries present concerning results related to energy poverty exposure [8]. Portugal is ranked as one of the worst countries regarding important indicators like electricity/gas prices for the residential sector and the feasibility to obtain comfortable indoor thermal conditions in summer and winter seasons [8]. This is because Portuguese family units present a median disposable income that is 39% lower than the European Union average, although its gas and electricity prices are some of the highest [7]. Although Portugal presents smoother winter seasons compared with other European countries, they are particularly challenging as users cannot afford the energy costs to maintain properly heated homes; therefore, it has the fifth worse rank among European Union countries regarding this issue, with an estimated 19% of the population being exposed to this phenomenon [9]. Some national strategies are already being implemented to deal with this problem, such as the National Energy and Climate Plan [10], which aims to properly perform a diagnosis and characterization of the existing limitations, as well as establish monitoring indicators and national, regional, and local objectives.
Climate
The climate also plays a decisive role in defining the conditions in which proper indoor thermal performances can be achieved. Current climate scenarios in Southern European countries where Mediterranean climate zones prevail indicate severe summer seasons in many regions with high mean temperatures and intense solar radiation, although winter severity is also highlighted as being rigorous in specific geographic conditions such as the site altitude [11]. Several reports of the Intergovernmental Panel on Climate Change (IPCC) highlight multiple consequences of the impact of climate change, resulting in future climate scenarios characterized by increases in temperature and sea level, with it being expected that Southern Europe will experience considerable adverse effects such as more frequent extreme events, as summer severity will increase and winter seasons are predicted to become smoother [12,13]. Within Southern European countries climate scenarios, the Iberian Peninsula is a particular region that presents different climatic zones that, depending on the altitude and latitude, can strongly differ from nearby locations, as the central and northern regions present considerable demanding heating seasons in many locations, in contrast with southern regions where winter seasons are milder [11]. In Portugal, regions such as the Beira Interior and Trás-os-Montes, located in the inner central and northern regions, respectively, present the country's highest heating degree days (HDD) in many locations, with values mainly between 1800 HDD and 2000 HDD [14]. Thus, in contrast with other Southern European regions, predicted winter seasons for these specific Iberia Peninsula locations indicate that they will maintain a rigorous profile: for instance, Burgos (Spain), Guarda (Portugal), and Bragança (Portugal) predictions for the Representative Concentration Pathway (RCP) 8.5 scenario indicate around 5.5 • C and 7.0 • C monthly mean temperatures in heating seasons for 2050 and 2080, respectively, and monthly mean minimum temperatures of around 2.5 • C and 4.0 • C, respectively [15]. For the specific case of Portugal, which is consistently identified as the country with the highest number of excess winter mortalities in Europe [11], the fact that the energy poverty impact will remain during winter seasons for many of its locations must be considered as an urgent matter.
Public Social Housing Building Stock
Within the stock of housing buildings and considering the above-mentioned scenario, public social housing-understood as buildings with construction made or supported by national housing programs, for, among others, low and medium income residents who could not afford to buy their homes in the regular private market [16] -represent a relevant field of study in Southern European countries, considering its pivotal role in meeting the housing needs for disadvantaged sections of the population. The impact of energy poverty is considerable in these contexts regarding the use of active systems to achieve suitable indoor conditions, considering that in Southern European countries such as Portugal and Spain, the majority of neighborhoods present no relevant heating and cooling habits [17], which is the reason proper thermal behaviors are imperative. In fact, the risk of energy poverty can also be the result of a poor thermal and energy performance of buildings [4], whose comfort requirements are expected to increase due to the mentioned climate change impact. Southern European countries currently present an aged building stock, with about 70% having been constructed prior to the implementation of national thermal regulations [4,18], resulting in poor performances regarding indoor environmental conditions. The European Union has implemented relevant legislative frameworks, such as the Energy Performance of Buildings Directive 2010/31/EU and the Energy Efficiency Directive 2012/27/EU, amended by Directive (EU) 2018/844, highlighting improvements needed for energy performance and thermal comfort; thus, several studies have been carried out in Southern European countries regarding, among others, passive measures for the design of new buildings [19,20] and constructive retrofit measures to improve the existing ones, for either actual [21,22] or future [18] climate scenarios. Regarding the specific case of Portuguese public social housing, an important part of its stock was constructed before 1990 as a result of several national housing guidelines during the 20th century [16], before the implementation of the first national thermal regulation (D.L. 40/90 of 6 February). This resulted in solutions that lacked the quality of materials and constructive processes used until then, without proper thermal criteria in the building envelope design. Thus, some projects were also designed using typologies and constructive systems so as to facilitate their quick replication across several Portuguese locations [16], some of them presenting severe climate conditions. Winter seasons must be regarded with a high importance for this issue, considering their repercussions through significant losses in the well-being of users and resulting in health issues, as economic conditioning implies little or no heating habits in the majority of cases [23].
Indoor Thermal Conditions during Winter Seasons
Several studies [24][25][26] regarding Southern European countries' housing have focused on specific issues to solve or minimize residents' vulnerability to energy poverty, such as a reduction of energy consumption and energy efficiency improvements. Nevertheless, some authors' [7,17,23] approach towards Southern European country studies have been focused on minimizing thermal discomfort in scenarios with little to no energy consumption for heating practices, in contrast with Central or Northern Europe, where users have strong practices of heating and for whom the focus should rely on energy efficiency. Therefore, several studies in Southern European countries regarding thermal comfort assessments in social housing buildings based on experimental campaigns during rigorous winter seasons have been developed. Ramos et al. [27] performed monitoring campaigns and assessments of the thermal indoor condition quality during occupancy periods in two neighborhoods-one rehabilitated and another non-rehabilitated-located in Porto. Curado [23] performed an extensive monitoring campaign and comfort assessments in several retrofitted dwellings, also located in Porto. These procedures were also considered as part of more embracing studies. Curado and Freitas [17] studied the influence of thermal insulation of facades on the performance of retrofitted buildings, using experimental measurements in a neighborhood so as to validate a dynamic numerical simulation model for a reference dwelling in order to assess thermal comfort in eight different Iberian climates. Alonso et al. [28] proposed a methodology to characterize the existing conditions in dwellings located in Madrid, so as to identify the main factors regarding proper energy retrofit measures, with indoor measurements during winter seasons being performed and included in the proposal. Soares et al. [29] approached other useful topics related to this area, using on-site measurements taken during winter seasons to study of the perception of users about their living conditions, habits, health, and quality of life, considering retrofitted and non-retrofitted neighborhoods of Porto.
Objectives
Considering the lack of literature found for the specific Iberia Peninsula location of Beira Interior and/or similar climate scenarios regarding public social housing building thermal and comfort performances, this article aims to provide a contribution through an experimental campaign and comfort assessment applied to a case study located in the city of Covilhã during a winter season period. A precast concrete building that was replicated across some Beira Interior locations was selected, as concrete precast systems are more abundant in Eastern Europe-studies regarding Portuguese public social housing are common in buildings where stone or brick masonry systems are applied-so perceiving their performance in built environments under demanding climate conditions and realistic heating habits is considered as highly relevant. Two dwellings inhabited by elderly residents were analyzed, aiming to provide an understanding of the extension of discomfort in this region under the influence of the resident's realistic heating habits, so as to perceive the resilience of public social housing buildings to provide acceptable indoor conditions in order to face present and future climate scenarios. This contribution provides improved and more comprehensive databases on real and measured building use under specific behaviors and lifestyles, in order to allow for developing research paths, guidelines, and programs for planners and stakeholders interested in public social housing buildings improvement applicable to similar contexts within Southern European countries.
Materials and Methods
The main objective of this study was to assess the comfort performance of two public social housing dwellings located in Covilhã inhabited by elderly residents, which were monitored during a winter season period. This section provides the information about the materials and methodology used.
Case Study
The Beira Interior region in Portugal was delimited in this work as the set of units "Beiras e Serra da Estrela" and "Beira Baixa", according to national nomenclature of territorial units for statistical purposes (NUTS III), as this region is one of the Portuguese zones that presents the most severe scenarios for winter and summer seasons, according to Figure 1. In Figure 1, the climate severity is represented for winter and summer seasons according to the Portuguese national building energy performance certification system [14].
Objectives
Considering the lack of literature found for the specific Iberia Peninsula location of Beira Interior and/or similar climate scenarios regarding public social housing building thermal and comfort performances, this article aims to provide a contribution through an experimental campaign and comfort assessment applied to a case study located in the city of Covilhã during a winter season period. A precast concrete building that was replicated across some Beira Interior locations was selected, as concrete precast systems are more abundant in Eastern Europe-studies regarding Portuguese public social housing are common in buildings where stone or brick masonry systems are applied-so perceiving their performance in built environments under demanding climate conditions and realistic heating habits is considered as highly relevant. Two dwellings inhabited by elderly residents were analyzed, aiming to provide an understanding of the extension of discomfort in this region under the influence of the resident's realistic heating habits, so as to perceive the resilience of public social housing buildings to provide acceptable indoor conditions in order to face present and future climate scenarios. This contribution provides improved and more comprehensive databases on real and measured building use under specific behaviors and lifestyles, in order to allow for developing research paths, guidelines, and programs for planners and stakeholders interested in public social housing buildings improvement applicable to similar contexts within Southern European countries.
Materials and Methods
The main objective of this study was to assess the comfort performance of two public social housing dwellings located in Covilhã inhabited by elderly residents, which were monitored during a winter season period. This section provides the information about the materials and methodology used.
Case Study
The Beira Interior region in Portugal was delimited in this work as the set of units "Beiras e Serra da Estrela" and "Beira Baixa", according to national nomenclature of territorial units for statistical purposes (NUTS III), as this region is one of the Portuguese zones that presents the most severe scenarios for winter and summer seasons, according to Figure 1. In Figure 1, the climate severity is represented for winter and summer seasons according to the Portuguese national building energy performance certification system [14]. The main cities located in the northern unit, "Beiras e Serra da Estrela", present more rigorous winters, while the ones in the southern unit, "Beira Baixa", normally present The main cities located in the northern unit, "Beiras e Serra da Estrela", present more rigorous winters, while the ones in the southern unit, "Beira Baixa", normally present more severe summers, and have similar climate scenarios compared to nearby Spanish northern and southern provinces, Salamanca and Cáceres, respectively [11,14]. Although future climate scenario projections indicate a progressive increase in temperature and the occurrence of more frequent and severe heatwaves for the Beira Interior, winter seasons must also be regarded with particular caution, as they are expected to present a slightly reduced severity but still a rigorous profile [30], as the geographic and topographic specificities result in cities located at considerably reference altitudes, such as Guarda (1000 m), Manteigas (800 m), and Covilhã (650 m) [14].
Regarding the Beira Interior housing building stock, buildings constructed until 1990 represent 62% of the local social housing, with the predominant ones built between 1975 and 1990 [31] in a flourishing period of construction supported by a considerable amount of housing national policies [16]. Multifamily buildings were identified as being predominant when considering the amount of dwellings covered.
Covilhã, part of the "Beiras e Serra da Estrela" unit, was the city chosen to perform the present study. It is the Beira Interior city with the most social housing buildings constructed before 1990 [31], representing a considerable amount of diverse interventions that took place during the 20th century with the need for significant thermal improvements. Thus, studies focused on mapping energy poverty in Portugal identified this city as having a considerable vulnerability to the energy poverty phenomenon in housing contexts, with several locations ranked as the highest for winter seasons [3]. The following meteorological data of some winter season months for the 2000-2019 period, as for predicted by RCP 8.5 scenario for 2050 and 2080, are specified in Table 1 [15]. The public social housing building chosen here as the case study was built in the late 1970's, as part of a national program created in 1976 called CAR, a refugee housing commission that aimed to provide a quick answer to the housing needs Portuguese families that returned during decolonization. For this reason, the building was constructed according to the previously mentioned precast concrete constructive system in order to get fast construction times. The analyzed project has been replicated in other Beira Interior buildings in cities such as Castelo Branco and Fundão, making this case study interesting because of its representativeness. A considerable variety of construction systems and typologies were applied in Portuguese public social housing during the 20th century [16], with precast systems comprising some innovative features applied in the envelope components, such as thin thermal insulation layers, comparing with other applied systems that used granite or hollow brick masonry without any insulation applied.
The selected building has four residential floors (including the ground floor, where the entrance is located), with all four facades exposed. Each floor of the building has four dwellings, and each dwelling consists of a one-or two-bedroom typology, with a living/dining room, a bathroom, and a kitchen with an extension area for laundry. The average area is around 50 m 2 for dwellings with one bedroom and 60 m 2 for dwellings with two bedrooms, with the ground floor having four dwellings-two dwellings with one bedroom and two dwellings with two bedrooms-while each of the upper floors have four dwellings with two bedrooms each. Regarding each of the dwelling glazed areas, they represent around 15% of the pavement area. The dwellings studied in this research are both located on the ground floor and are both inhabited by one elderly resident each: Dwelling A has only one bedroom and has two exposed facades (east and south oriented), while Dwelling B has two bedrooms and two exposed facades (north and west oriented). Figure 2 represents the building ground floor plan.
components, such as thin thermal insulation layers, comparing with other applied systems that used granite or hollow brick masonry without any insulation applied.
The selected building has four residential floors (including the ground floor, where the entrance is located), with all four facades exposed. Each floor of the building has four dwellings, and each dwelling consists of a one-or two-bedroom typology, with a living/dining room, a bathroom, and a kitchen with an extension area for laundry. The average area is around 50 m 2 for dwellings with one bedroom and 60 m 2 for dwellings with two bedrooms, with the ground floor having four dwellings-two dwellings with one bedroom and two dwellings with two bedrooms-while each of the upper floors have four dwellings with two bedrooms each. Regarding each of the dwelling glazed areas, they represent around 15% of the pavement area. The dwellings studied in this research are both located on the ground floor and are both inhabited by one elderly resident each: Dwelling A has only one bedroom and has two exposed facades (east and south oriented), while Dwelling B has two bedrooms and two exposed facades (north and west oriented). Figure 2 represents the building ground floor plan. Table 2 synthetizes the building's physical characteristics, which were obtained through on-site visits and sorting through the respective municipality archives. Conversations with residents, onsite visits, and municipality data revealed that no relevant modifications to the external envelope were made. Table 2 synthetizes the building's physical characteristics, which were obtained through on-site visits and sorting through the respective municipality archives. Conversations with residents, onsite visits, and municipality data revealed that no relevant modifications to the external envelope were made.
Methodology
A three-stage methodology was considered in order to assess comfort vulnerability in both dwellings: (1) the first stage consisted of obtaining qualitative data through questionnaires delivered to residents in order to provide an initial understanding about the dwelling occupancy habits; (2) the second stage consisted of obtaining quantitative data regarding the dwelling performance under occupant behaviour through monitoring the indoor temperatures and relative humidity during a specific heating season period; and (3) the third and final stage consisted of plotting the monitored indoor temperatures against the proper comfort standards so as to perceive the extent of discomfort, as well as the conformity of the monitored relative humidity values regarding the recommended indoor ranges.
Qualitative Data
Qualitative data were obtained through qualitative research, which consisted of developing and providing a questionnaire to occupants regarding their occupancy and heating habits during winter seasons. The questionnaires were a mix of two types of qualitative interviews: structured and semi-structured. Structured interviews consisted of questions with a fixed choice response, while semi-structured interviews consist in questions with open-ended response, which also gathered specific details about occupancy. The residents were asked about their inhabited periods in each room, main activities during occupancy periods, window and shutter operation, heating habits using active systems, further strategies used to reduce discomfort, possible relevant sources of internal gains, and their perception about the building constructive and insulation quality. However, the residents were unavailable to record information related to clothing and specific characteristics of the activities carried out throughout the day, as well as their specific schedules. Questions about dwelling evaluation were also included in the questionnaire, regarding dwelling constructive and thermal insulation quality, and specifications or conditioning regarding the use of active systems. Qualitative data were then used to validate the findings of the quantitative data, mainly referring to occupant behavior for correcting or minimizing eventual poor building performance-a procedure that was used in other studies related to social housing contexts both on an individual and/or collective level [5].
After conversations with residents, the living rooms and the used bedrooms were identified as the areas with the longest occupancy time, and were thus the rooms analysed in this study. Although data were obtained about how and when these areas were used, allowing for the definition of occupancy profiles for winter season, residents mentioned that eventually, slight variations to this occupancy might occur in some days, mainly regarding occupied times and heating habits. Considering that no further data could be obtained to provide information for each of the monitored days, the occupied periods mentioned in the questionnaires for each division were the ones considered for the comfort assessment of the monitored indoor temperature values and relative humidity analysis.
Quantitative Data
Quantitative data were obtained through quantitative research, which consisted of an experimental campaign to provide real data about the dwelling performance through the monitored period under typical occupancy behavior. Once some conditioning were found regarding the installation of specific equipment in the studied rooms, only temperature and relative humidity were monitored during the 2020 heating season in the studied areas for Dwellings A and B, according to Figure 2, using bedrooms and living rooms, as well as outdoor conditions. As limitations accessing the dwellings and the outbreak of the coronavirus pandemic limited the experimental campaign during the entire heating season, the results are presented for a representative period when the residents' heating habits enabled a consistent understanding of the occupancy impact in both dwelling performances, namely from 28 January 2020 to 10 February 2020. Five data-loggers (Lascar Electronics EL-GFX-2) were used as measuring instruments and were programmed to register the existing temperatures and relative humidity every 10 min. These loggers had ranges of −30 • C to 80 • C and 0% to 100%, respectively; a typical accuracy of ±0.5 • C and ±3%, respectively; and they were positioned as defined in Figure 2, 1.1 m above the floor surface, having been assured proper protection from close sources of internal gains and/or solar radiation.
Comfort Assessment
An extensive review of thermal comfort models based on people's thermal sensation to several environments can be found in [32], and specifically for human thermal comfort in the built environment in [33]. Thermal comfort, defined as "that condition of mind that expresses satisfaction with the thermal environment" [34], is generally assessed in the built environment using two different conceptual approaches: the static model and the adaptive model. The static model is mainly derived from experiments in climate chambers [35] and uses an approach based on a balance between building occupant metabolic heat production and its interaction with indoor environmental conditions, considering individuals as the only a recipients of the thermal stimulus. The adaptive model uses an approach that considers the interaction between occupants' physical and psychological conditions with the indoor environmental conditions, as well as a correlation between the perception of comfort temperature and outdoor temperature. The concept that the individual will react in order to restore or maximize their thermal comfort conditions exposed to a thermal stimulus (through the use of simple procedures, such as adding more clothes or the use of specific building components) is aligned with typical Portuguese social housing heating/cooling habits, once economic conditioning makes the occupants' interactions with themselves and the building decisive to define specific indoor conditions [23]. The adaptive model is therefore suitable to assess thermal comfort in a Portuguese public social housing case study and was chosen as the primary model, although for comfort assessments the authors of [18] also considered it useful to compare the results obtained with the static model and the adaptive model in order to better understand the thermal comfort behaviour. This is why this procedure was selected for this study, in order to enlarge the building performance understanding regarding the likely reality of intermittent or no heating habits recurring in active systems.
Regarding the adaptive model, a review about the adaptive thermal comfort models and its integration in built environmental regulatory documents is extensively described in [34,36], as, among the several available possibilities, the ASHRAE 55-2017 and the European standard EN 16798-1:2019 are currently the two documents regarded as international standards for adaptive thermal comfort and have been used in several studies worldwide [37]. Although both documents consider that comfort temperature inside buildings depend on the variation of outdoor temperatures in the preceding days, the EN 16798-1:2019 [38] was chosen for this study as it includes the adaptive thermal comfort model developed and applicable for Europe, and it constitutes the modification of the previous standard (EN 15251:2007), for which relevant data source was collected in several locations (including Portugal) so as to define the model. The model derives a simple linear relationship between indoor comfort conditions and the outdoor temperature, considering some of Fanger's conventional thermal comfort factors such as clothing insulation and metabolic rate, which present a significant correlation with outdoor air temperature [39]. Once relative humidity and air velocity were shown to not strongly depend on the outdoor air temperature, they were not included in the model, despite their relevance in defining thermal comfort conditions [40]. The model has an applicability for buildings without mechanical cooling systems, as well as human occupancy with mainly sedentary activitiesranging from 1.0 to 1.3 met-where occupants have easy access and the possibility of opening and closing operable windows located in the building envelope, and are also able to freely adapt their clothing to indoor and/or outdoor thermal conditions. It is structured defining the informative default choices that are considered in this study. Default values are given for a specific category of indoor environmental quality, related to the level of expectations the occupants may have, considering that for elderly occupants the standard recommends the selection of Category I, corresponding to a high level of expectation for users with less thermal adaptation. The so-called adaptive criteria consists of the definition of upper and lower temperature limits that change with the running mean outdoor temperature, considering residential buildings used mainly for human occupancy with sedentary activities, where easy access to operable windows or clothing adjustments are available. Upper 1 and Lower 1 limits for the Category I indoor environmental level (EN-C1) are defined considering 2 • C above and 3 • C below the optimal operative temperature (T c ), respectively, which satisfies the greatest percentage of occupants at a given clothing and activity level in the current thermal environment, and is calculated through Equation (1): Exterior conditions are considered in the form of the weighted running mean of the daily mean outdoor temperature (T rm ). T rm is calculated according to Equation (2): The outdoor mean air temperature of the previous days (T n−i ) is considered, specifying that the limits only apply for a T rm range from 10 • C to 30 • C. If T rm is outside this range, the standard assumes that mechanical cooling or heating systems have to be installed and operated according to specific setpoint conditions, and the indoor temperature would decouple from the external conditions. The work by Sánchez-García et al. [41] highlighted the possibility of horizontally extending the maximum and minimum comfort limits of the adaptive model as static setpoint temperatures, which was considered in this study so as to achieve moderate and realistic values. Regarding winter season applicability, the standard applies the adaptive criteria for summer and intermediate seasons for buildings without mechanical cooling, although it is also stated that, among others, the criteria required for its use should be defined by individual project specifications: the use of adaptive models for winter seasons, in cases where heating systems have intermittent or no usage at all, were already applied in some studies [17,23], considering their suitability for applications in low-income buildings in Southern European countries [17], which was why both Upper 1 and Lower 1 thresholds were considered in the study.
Regarding the static model, considering its importance in the definition of reference conditions for a significant number of thermal regulations in European Union member states so as to improve their building stock energy efficiency, the framework provided by EPBD 2010/31/EU and transposed for the Portuguese thermal regulation through the National Building Energy Performance Certification System-the "Sistema de Certificação Energética" (SCE)-was considered in this study [42]. The framework provides energy certificates for residential buildings, defining the nominal energy consumption needed so to achieve the predefined comfort conditions. These conditions are independent of exterior conditions and were established as fixed comfort temperature limits for energy calculation demand purposes, where indoor acceptable values are non-adjustable to individual or environmental variables-defined as air temperature values within the range from 18 • C (Lower SCE) to 25 • C (Upper SCE). Thus, among other features, the energy certificates identified proper constructive measures so as to improve the thermal conditions and energy efficiency of the existing building, and are currently a key tool to diagnose and support decision making in order to intervene in the existing building stock-which is the reason this model was also chosen. Figure 3 shows the above-mentioned thresholds of the selected approaches to perform comfort assessment. Reference standards [38,[43][44][45][46] define the operative temperature, among others, as a key variable for assessing the likely thermal comfort of the occupants of a building. In order to overcome existing limitations in obtaining the data needed to calculate operative temperatures for EN-C1, these were simplified as the monitored indoor temperatures, which are the ones required for SCE. Nevertheless, some comments must be made regarding this issue. ASHRAE 55 allows for the use of indoor air temperature as a simplified approximation of the comfort operative temperature if some conditions are fulfilled, mainly related with the inexistence of both indoor radiant heating/cooling panels and the relevant heating generating equipment, as well as the average U-factor of the building vertical envelope and the window solar heat gain coefficients. Experimental investigations regarding this subject can also be found, such as the one by Matias [47], which was performed in several building typologies such as schools, office buildings, and residential buildings-the latter including nursing homes-so as to evaluate indoor comfort conditions. Several indoor and outdoor environmental variables were monitored, enabling the analysis of the correlation between air temperature and operative temperature, with the Pearson correlation coefficient observed between those variables being 0.99, so for those case studies air temperature was considered as an approximated value of the operative temperature. Thus, despite the fact that using indoor air temperature as the dominant factor for the adaptive approach can be pointed out as a potential limitation, this procedure in residential contexts can also be found [17,23]. must be made regarding this issue. ASHRAE 55 allows for the use of indoor air tempera-ture as a simplified approximation of the comfort operative temperature if some conditions are fulfilled, mainly related with the inexistence of both indoor radiant heating/cooling panels and the relevant heating generating equipment, as well as the average U-factor of the building vertical envelope and the window solar heat gain coefficients. Experimental investigations regarding this subject can also be found, such as the one by Matias [47], which was performed in several building typologies such as schools, office buildings, and residential buildings-the latter including nursing homes-so as to evaluate indoor comfort conditions. Several indoor and outdoor environmental variables were monitored, enabling the analysis of the correlation between air temperature and operative temperature, with the Pearson correlation coefficient observed between those variables being 0.99, so for those case studies air temperature was considered as an approximated value of the operative temperature. Thus, despite the fact that using indoor air temperature as the dominant factor for the adaptive approach can be pointed out as a potential limitation, this procedure in residential contexts can also be found [17,23]. Therefore, the monitored indoor temperature values corresponding to the occupied periods of each division were then plotted against the considered static and adaptive models thresholds, and when a non-compliance with those limits was verified the room was considered to be in discomfort for that time period.
Regarding relative humidity, EN 16798-1 [38] establishes the design criteria for the relative humidity in occupied spaces, where for Category I buildings a range between 30% and 50% is recommended. Therefore, in order to analyze how acceptable the indoor humidity conditions are, an analysis of the conformity of the relative humidity values obtained with that range was also taken into account. Therefore, the monitored indoor temperature values corresponding to the occupied periods of each division were then plotted against the considered static and adaptive models thresholds, and when a non-compliance with those limits was verified the room was considered to be in discomfort for that time period.
Regarding relative humidity, EN 16798-1 [38] establishes the design criteria for the relative humidity in occupied spaces, where for Category I buildings a range between 30% and 50% is recommended. Therefore, in order to analyze how acceptable the indoor humidity conditions are, an analysis of the conformity of the relative humidity values obtained with that range was also taken into account.
Qualitative Research
The information obtained from the questionnaires is synthetized in Table 3. Both dwellings presented some similar occupancy characteristics, with sedentary activities identified as being usual during the occupied period, which, according to ISO 7730 [45], are defined as 1.2 met. Once specific bedtime period schedules weren't obtained, all occupied periods were considered with this metabolic rate, complying with the defined adaptive model range. Regarding the use of building components, possible solar gain restriction due to activated glazing protections is noticeable. Information about internal gains from sources such as lighting and equipment were considered to be aligned with typical housing internal gains at a low level [48], and window opening for air renewal was only performed for brief periods. Regarding specific heating habits, economic conditioning had a significant impact on the use of active systems-both residents only used portable fan heaters for some time periods-with an increase in clothing thermal insulation being the common strategy to work around this limitation, with no specific health problems demanding particular thermal conditions mentioned. Regarding their perception about the building's constructive characteristics, residents identified it as having a low quality, besides lacking proper thermal insulation.
Regarding specifically each of the analyzed dwellings, Dwelling A presented a predominant occupancy during daytime periods in the living room and during early night and bedtime periods in the bedroom, with active systems normally used only in the living room as during night time strategies such as clothing and hot-water bottles were used to minimize the effect of lower indoor temperatures. Dwelling B presented a predominant oc- cupancy during late afternoon in the living room and during early night until mid-morning in the bedroom, where active systems were normally used before bedtime.
Quantitative Research
The obtained monitored values from the performed experimental campaign applied to the dwellings are specified below. These could not be compared with data from the Portuguese Institute for Sea and Atmosphere (IPMA), as the local weather station was inoperative for a significant amount of time when the monitoring campaign was performed.
In Figure 4 the resulting monitored outdoor and indoor air temperatures are shown. In Table 4, some indoor and outdoor air temperature statistical variables for the entire monitored period are defined. A one-way analysis of variance (ANOVA) was also performed so as to identify statistically significant differences for the mean achieved temperatures, followed by a post hoc Tukey's HSD test if a statistical significance between the groups was detected. The obtained results are shown in Table 5. This information is also presented according to the air temperature frequency (Figure This information is also presented according to the air temperature frequency ( Figure 5). Outdoor temperature includes all measurements obtained from the previously mentioned monitoring period, while indoor temperatures only include measurements within the occupied periods of the respective analyzed area, according to schedules defined in Table 3. This allowed for a more clear understanding of indoor conditions during inhabited periods. Outdoor values presented an average temperature close to 12 °C and a considerable temperature variation over several days, which resulted in an average value of 5.4 °C. It is noticeable that the range between 10 °C and 12 °C represents more than 40% of the temperature frequency, although other close ranges also represent a considerable amount of frequency as a whole. Outdoor values presented an average temperature close to 12 • C and a considerable temperature variation over several days, which resulted in an average value of 5.4 • C. It is noticeable that the range between 10 • C and 12 • C represents more than 40% of the temperature frequency, although other close ranges also represent a considerable amount of frequency as a whole.
The Dwelling A living room presented a set of days with considerable temperature increments during short morning or afternoon periods, likely related to the use of active systems, which matched the information obtain through qualitative data. The most representative period was the one from 28 January to 5 February, when these thermal peaks were registered in the majority of days and when the highest maximum temperatures from the entire monitoring period were registered, reaching values close or above 18 • C, while temperature variation reached 4 • C for some days. However, it is noticeable that from 6 to 10 February, the use of active systems was very low or even non-existent when those peaks were slight; with indoor temperatures always close to 16 • C and temperature variations mainly below 2 • C. A predominant number of occurrences in occupied periods (almost 95%) was registered in the temperature range between 14 • C and 18 • C.
The Dwelling A bedroom presented the most days without a significant temperature increase, therefore without apparent use of active systems as mentioned in the surveys. Only a few thermal additions were registered, although coincident with the use of active systems in the living room. The registered thermal oscillation was slight, with average values of around 1 • C, average minimum temperatures close to those recorded in the living room, and maximum temperatures only close when the use of active systems was scarce or non-existent in the living room. During occupied periods, the range between 14 • C and 16 • C represented a considerable amount of frequency, with more than 60% of occurrences.
The Dwelling B living room presented all analyzed days without a significant temperature increase associated with the use of active systems, as the repercussion in indoor temperatures of short occupation periods and solar gain restriction clearly matched the information obtained through qualitative data. The indoor daily temperature was near 14 • C for the majority of days, presenting a slight variability, mostly below 1 • C. A predominant number of occurrences (around 65%) was registered during occupied periods in the range of temperature between 14 • C and 16 • C, with the remaining ones within the range between 12 • C and 14 • C.
The Dwelling B bedroom presented a significant difference in thermal behavior compared with the living room considering the majority of analyzed days with temperature peaks during night and sometimes also during mid-morning periods, associated with the use of active systems, as mentioned in the qualitative data. During the remaining daytime periods, no significant temperature increments were registered for the majority of days. The use of active systems allowed for reaching temperatures above 18 • C in the majority of days, and above 20 • C in some days. However, a significant amount of registered minimum temperatures were below 16 • C, with predominance during dawn and unoccupied periods. During the occupied periods, almost 40% of occurrences were within the range of temperature between 16 • C and 18 • C, although this was the only studied room to achieve a considerable amount of occurrences (more than 20%) above 18 • C.
The results for the ANOVA tests show that the critical value was widely exceeded and had a p-value lower than 0.05, indicating strong evidence against the null hypothesis and suggesting that one or more treatments were significantly different. Thus, the results for the Tukey's HSD test exhibited a statistically significant difference between all of the studied rooms, which matched the described thermal behavior differences. It is noticeable that lower mean differences were identified in Dwelling A-where a significant consistent occupancy occurred in both the living room and bedroom during daily periods-and between the Dwelling A living room and Dwelling B bedroom-the rooms where active systems were used with more consistency. Figure 6 shows the resulting outdoor and indoor relative humidity values, and Table 6 shows the respective statistical variables for the monitored period. Energies 2021, 14, x FOR PEER REVIEW 16 of 25 Figure 6. Monitored relative humidity.
The outdoor values presented a mean value of 84% and a variation around 51%. It should also be noted that the majority of the monitored values were above 50%, which represents a considerable severity for built environments to comply with the reference values of the EN 16798-1 standard.
The indoor, minimum, and maximum values registered during the entire monitored period were 59% (in Dwelling B bedroom) and 90% (in Dwelling B living room), with the mean and minimum values being considerably above 50% for both dwellings.
For the previously identified periods when active systems were used, the Dwelling A living room relative humidity decreased slightly, by around 10% for the period from 28 January to 1 February. It was also noticeable that when the heating systems were turned on, the indoor relative humidity decreased to values lower than 70%-although for a short period, until those systems were turned off-even when outdoor values sometimes exceeded 90%.
Without the use of heating systems, the Dwelling A bedroom presented a more stable relative humidity profile than the living room, with slightly higher humidity values than the latter. The Dwelling B living room presented a stable relative humidity profile, although short humidity increases were verified during morning, afternoon, or night periods, possibly related to occasional window opening or other specific indoor activitywhich did not exactly match what was mentioned in the surveys-although no significant repercussion was detected in the temperature values.
The Dwelling B bedroom was the room where the use of active systems was most notorious for their impact on relative humidity values-particularly until 4 February, after which the occurrence of significant changes in external conditions made this less evident-as in occupied periods its behavior was similar to that of the Dwelling A living room, regarding some humidity decrease when the heating systems were in operation. However-and considering that it was the room where those systems were used for the longest time-the obtained values were still high, with it noticeable that only very The outdoor values presented a mean value of 84% and a variation around 51%. It should also be noted that the majority of the monitored values were above 50%, which represents a considerable severity for built environments to comply with the reference values of the EN 16798-1 standard.
The indoor, minimum, and maximum values registered during the entire monitored period were 59% (in Dwelling B bedroom) and 90% (in Dwelling B living room), with the mean and minimum values being considerably above 50% for both dwellings.
For the previously identified periods when active systems were used, the Dwelling A living room relative humidity decreased slightly, by around 10% for the period from 28 January to 1 February. It was also noticeable that when the heating systems were turned on, the indoor relative humidity decreased to values lower than 70%-although for a short period, until those systems were turned off-even when outdoor values sometimes exceeded 90%.
Without the use of heating systems, the Dwelling A bedroom presented a more stable relative humidity profile than the living room, with slightly higher humidity values than the latter. The Dwelling B living room presented a stable relative humidity profile, although short humidity increases were verified during morning, afternoon, or night periods, possibly related to occasional window opening or other specific indoor activitywhich did not exactly match what was mentioned in the surveys-although no significant repercussion was detected in the temperature values.
The Dwelling B bedroom was the room where the use of active systems was most notorious for their impact on relative humidity values-particularly until 4 February, after which the occurrence of significant changes in external conditions made this less evidentas in occupied periods its behavior was similar to that of the Dwelling A living room, regarding some humidity decrease when the heating systems were in operation. Howeverand considering that it was the room where those systems were used for the longest time-the obtained values were still high, with it noticeable that only very occasionally did they decrease from 70% when the external values were higher, even when the systems were in operation. Figure 7 shows the comfort assessment performed for each analyzed room of the dwellings, according to the applied static (SCE) and adaptive (EN-C1) models. Each spot in the graph corresponded to an individual measurement value obtained during occupied periods. Thus, a regression analysis on the operative temperature (T op ) and T rm was also performed, also representing the regression line and the respective equations for the considered number of samples (n), as well as the coefficient of determination (R 2 ). The percentage of time when discomfort was experienced is synthetized in Table 7, with the criteria to estimate it relates the number of individual measurements during the occupied time outside the comfort thresholds with all individual measurements for that same period. It can be observed that for all studied rooms, discomfort was extremely significant. The Dwelling A living room presented some insignificant portion (2%) above the Lower SCE, with some of the remaining values close to it, although most values were far from the Lower 1 threshold. The Dwelling B bedroom presented 22% of the values above the Lower SCE and a relevant amount of the remaining ones close to it, although only 7% were above Lower 1, with some of the remaining ones considerably far from it. Dwelling A bedroom and Dwelling B living room presented all temperature values in discomfort for each of the applied comfort models, with most of the values considerably far from all of the minimum thresholds.
Comfort Assessment
periods. Thus, a regression analysis on the operative temperature (Top) and Trm was also performed, also representing the regression line and the respective equations for the considered number of samples (n), as well as the coefficient of determination (R 2 ). The percentage of time when discomfort was experienced is synthetized in Table 7, with the criteria to estimate it relates the number of individual measurements during the occupied time outside the comfort thresholds with all individual measurements for that same period. Although the Dwelling B living room presented a significantly reduced number of samples compared with the remaining rooms, the obtained regression lines presented similar slopes for all rooms, except for the Dwelling A bedroom, whose slope was sharper, probably due to the inexistence of active system use during night periods.
Regarding indoor relative humidity frequencies for each of the monitored rooms, it was observed that neither of the analyzed rooms met the EN 16798-1 defined range for relative humidity during any of the occupied periods, with the indoor values always above the maximum reference value of 50%.
Discussion
The obtained results make clear that both dwellings do not ensure adequate indoor thermal conditions, either using static or adaptive models. As expected, the outdoor air temperature has an influence on indoor temperatures throughout the monitored period. Nevertheless, both dwellings present considerable low temperatures, considering that the building thermal behaviour is favoured by moderate external conditions-the registered values are considerably higher than the ones indicated for both present and future climate scenario severities for heating seasons in this region, which is therefore representative of a period with a moderate severity for this season of the year. Considering the existing climate conditions during the monitored period-with a mean temperature around 12 • C-it is expected that there will be a significant building thermal and comfort demand increase for harsher winter months in Covilhã or other Beira Interior cities, with harsher present/future winter scenarios. Thus, it can be assumed that no proper response to demanding winter seasons is achieved even with the intermittent use of active systems, which is a conclusion common for other developed social housing studies in Iberian Peninsula locations; considering the use of intermittent heating, the work from Ramos et al. [27] showed improper indoor temperatures in non-retrofitted dwellings located in Porto, and regarding scenarios without the use of heating systems, the work from Curado and Freitas [17] showed that in cities such as Madrid, Bragança, and Bilbao, the use of active systems is required to obtain proper levels of thermal comfort.
The thermal fluctuations observed outside reduced the impact on dwellings' overall thermal fluctuations, not exceeding 1 • C-for both non heated or inhabited rooms-which is partly explained by constructive and geometric features, such as the effect of the high thermal mass existing in the exterior walls with insulation and the interior walls, as well as by the moderate amount of glazed area, respectively. The benefit of similar buildings' constructive and geometrical features to minimize adverse cold indoor conditions was also observed by Shahi et al. [49], who conducted a field survey in winter for houses located in Nepalese cold, temperate, and subtropical regions, obtaining resultant mean indoor air temperatures of 10.9 • C, 18.0 • C, and 20.0 • C, respectively-which were 6.3 • C, 2.9 • C, and 2.0 • C lower than the average estimated comfort temperature, respectively; although the results showed that a significant increase of indoor air temperature was required for the cold region, its buildings' thermal mass combined with moderate door and window size contributed to a smaller variation in indoor air temperature than for temperate and subtropical region buildings.
Although indoor activities and internal gains have no significant relevance in differences between each dwelling's thermal behaviour, no significant variations in indoor temperatures associated with opening windows are also identified. Both qualitative and quantitative data indicate window opening during brief or occasional periods for all rooms, which somehow matches with the findings of some works. Rijal et al. [50] monitored window opening behaviour and thermal comfort during some years in both the living rooms and bedrooms of some dwellings, and the results show that when heating systems were operating, the dwellings windows were rarely opened, while for free running mode, the winter season was the period when window opening was less, with the proportion of windows open generally being lower in bedrooms than in living rooms. Imagawa et al. [51] conducted occupant behaviour surveys in some dwellings during a four-year period, and it was noticed that the proportion of window opening in winter was very low for both living rooms and bedrooms, while heating was often used until spring, from when the proportion of heating use decreased and window opening increased. Therefore, changes in occupancy are those that have some impact in defining proper indoor conditions, mainly regarding the user's heating habits: the registered temperature "peaks" decoupled from external temperature conditions are decisive in Dwelling A living room and Dwelling B bedroom, so as to present more pronounced maximum temperatures and therefore some daily time periods with acceptable indoor conditions, although the average minimum temperature values are close to those of the unheated rooms, inhabited or not. Therefore, although presenting some potential constructive features, conditioning using active systems results in poor indoor conditions for a large amount of time during inhabited periods, which matches similar scenarios within social housing contexts found in studies such as the ones by Alonso et al. [28], where the analyzed dwellings presented a worse constructive quality combined with the inhibition by the residents to use heating systems.
Nevertheless, the impact of individual adaptive strategies and/or activities to a proper adaptation to cold indoor conditions-besides the use of active systems-must also be regarded: its potential is shown from works such as the one of Rajan et al. [52], who performed an experimental campaign in a condominium equipped with the same home energy management system, who noticed that in winter season periods a 4 • C difference was registered between high and low temperature groups, mainly because of individual adaptive activities of the occupants to adjust the indoor thermal environment. However, in the present study, both quantitative and qualitative data suggest that satisfactory indoor comfort conditions are not achieved through adaptive behaviour. Such an achievement can be found in the study of Rijal [53], who performed a thermal measurement and comfort survey during the winter for traditional vernacular houses exposed to extreme cold climates: indoor conditions presented rigorous cold environments-with 10.7 • C as the mean comfort temperature-but it was noticed that the residents were very satisfied with indoor thermal conditions as a result of successful adaptation behaviours to these buildings-such as proper clothing insulation, eating habits, and proximity to the fire-alongside passive heating effects that were found in some constructive features-this successful combination between adaptive habits and existing constructive features appeared not to have been achieved either in the studied dwellings. Its potential may also have considerable energy repercussions, as the effectiveness of reducing energy poverty can be considered dependent on the users' thermal adaptation [4], and for specific conditions, the constructive quality can increase indoor temperature without necessarily increasing the use of energy for heating purposes [49]. Rijal et al. [54] performed thermal measurements and a thermal comfort survey in several dwellings, and the results showed a high level of thermal comfort mainly resulting from a successful thermal design and adaptation to indoor conditions from residents, suggesting that low energy consumption was achieved if the building behaviour provided comfortable indoor conditions over a wide range of outdoor temperatures. Regarding thermal comfort and building energy use implications, the literature review by Yang et al. [55] highlighted that adaptive comfort models present a wider comfort temperature range with a significant energy saving potential in both air-conditioned and naturally ventilated buildings. While several works regarding relevant seasonal differences in comfort temperature and related energy saving are mentioned in [54], the specific field of energy saving by adjusting the temperature setting during heating seasons can be found in studies such as the ones by Nicol et al. [56]-highlighting that a reduction of 1 • C in indoor temperature represented about 10% energy saving for heating in winter seasons-and Wang et al. [57]-stating that about a 9.6% energy saving for centralized heating system could be achieved. Regarding studies in Southern European countries, Bienvenido-Huertas et al. [4] studied the possibility of reducing the building energy consumption and minimizing energy poverty cases in Seville recurring to adaptive setpoint temperatures, and found that for Category I of EN 16798-1 a decrease in the number of energy poverty was 43%, for Category II it was around 82%, and for Category III it was up to 98.5%, although its effectiveness was higher during summer months.
Regarding variations related to different orientations, the significant temperature increases identified in the monitored rooms are mostly related with the use of heating systems. As no relevant changes to indoor thermal conditions are observed in periods when heating systems are turned off, the impact of orientation and the respective solar gains on indoor temperatures is low. This aspect is certainly reinforced by the fact that residents sometimes partially or fully activate glazing protections, as mentioned in the surveys. However, given the results obtained, a considerable reduction in discomfort is unlikely to be achieved only by resorting to passive systems, even with more favourable outdoor climate.
Once all rooms presented different occupation schedules and heating habits, analysing each one of them allowed for obtaining distinct understandings so as to consolidate the mentioned general observations regarding indoor thermal performance:
•
The Dwelling A living room allows for an understanding dwelling daytime performance with and without relevant artificial heating influence. In days when active systems are perceived to have been used, its impact consists of a corrective measure that somehow improved indoor thermal conditions, although its use for short periods does not allow for a significantly decrease in discomfort during the relevant occupied time periods. In days when active systems use is reduced or inexistent, alongside with the insignificance of solar gains during morning periods through east-exposed glazed elements, the indoor temperature is generally low, although the impact of high thermal mass is noticeable in its stabilization, considering existent outdoor temperature variability for these periods; Their impact is notorious from the moment they are turned on, with a high temperature increase, so some portion of the inhabited time is within comfort ranges. However, its restricted use until bedtime periods results in a quick drop of indoor temperatures that is unable to be prevented from the moment they are shut down, which demonstrates some insufficiencies in the existing thermal insulation. Nevertheless, it would be interesting to observe the use of active systems during longer and sequent periods to somehow obtain a better understanding of envelope thermal insulation limitations.
Regarding the comfort assessment performed using distinct comfort models, for SCE, it is noticeable that discomfort is slightly decreased in the Dwelling A living room and Dwelling B bedroom. The use of active systems clearly guarantees this decrease, besides setting an important amount of values close to those thresholds-particularly for the Dwelling B bedroom-as once in unheated rooms all registered values are in discomfort, with most of them far from minimum thresholds. As for EN-C1, similar results are obtained. Dwelling B bedroom is the only room to present at least a slight discomfort decrease, as once results are clearly insufficient considering a high level of expectation in comfort, with many of the registered values not even close to the minimum threshold, particularly for the unheated rooms: it is noticeable that only rooms with active systems could reach some discomfort decrease, which is quite concerning, particularly for situations such as the ones of Dwelling A bedroom, which presents a significant daily amount of time with occupation during night periods.
Regarding relative humidity, the obtained results show that the dwellings do not assure adequate indoor humidity conditions, considering outdoor conditions that are only slightly higher for this season of the year. For all the analysed rooms-occupied during the day/night, with long/short occupation periods, and with/without the use of active systems-there is a clear inability to comply with the EN 16798-1 Category I reference range, as well as with less demanding ranges (defined for the remaining categories) for a significant amount of monitored values. Even so, residents' occupation habits have some impact on minimizing adverse indoor conditions: considering the repercussion of outdoor relative humidity throughout the monitored period, the analysis of each relative humidity profile in daily periods reveals smaller variations to those abroad, with the mean values for the rooms where active systems are used resulting in slightly lower results than in the other rooms. Nevertheless, the obtained results also show that when outside humidity conditions are more demanding, indoor values are sometimes above 80% for relevant amounts of time, which can increase the risk of mould growth [58] with clear repercussions on indoor air quality.
A remark should also be made regarding constructive improvements considering the actual role of retrofitting in Portuguese national strategies for housing buildings [59], as well as the need to improve the retrofitting strategy for social housing neighborhoods [17]. Applicable constructive retrofit measures are extensively described in [60] for both inland and coastal locations regarding this CAR project, although its impact during summer seasons and indoor air quality must also be considered. Nevertheless, among several benefits provided by retrofit measures, such as adding thermal insulation or window substitution, it is possible that passive means alone may be insufficient to achieve thermal comfort during the entire winter season. This assumption matches results obtained by Curado [23], that for retrofitted social housing dwellings in Porto and in cases with both intermittent or no heating habits identified around 40% of heating season total hours as still in discomfort, although a considerable discomfort decrease was achieved. Therefore, combining constructive measures with household energy use patterns is also a potential strategy, as proposed by Pokharel et al. [61], who performed several surveys on households during the winter, and identified low indoor air temperature measurements and per-capita daily energy use from 20 to 37 MJ/(person·day) resulting in significant improvements recommended in building envelope insulation combined with small energy use strategies.
Conclusions
The goal of this research was to assess indoor thermohygrometric conditions in a public social housing case study located in the Beira Interior region during winter season, so as to provide an understanding of indoor performance suitability for present and predicted climate scenarios. It was observed that the analysed case study performance is not aligned with the needed proper responses, regarding high probable exposition of risk groups such as the elderly to similar insufficient indoor conditions with a serious impact on their health, and representing important losses of well-being. In many cases, this problem is even serious, regarding risk groups with specific health problems that need to stay at home for most of the day. Therefore, the key conclusions of this study are as follows: • Intermittent heating was used during some periods-as by several Portuguese family units-but with a reduced level of effectiveness: thermal discomfort was found to be extremely significant either for static or adaptive models applied, with the monitored rooms-heated or not-presenting 78% to 100% of time in discomfort during inhabited periods, whether during the day or night, as applicable; • Indoor humidity conditions were inadequate during all occupied periods, outside the recommended humidity ranges. Furthermore, residents' occupancy habits have little influence on improving these conditions, while many of the recorded values also indicate a high risk of mold growth as a consequence of improper indoor air quality; • Passive means alone-mainly the existing envelope's thermal insulation and thermal mass-considered individually or combined with both occupation and intermittent heating habits-were not enough to provide proper indoor thermal conditions; • It was not possible to properly analyse the impact of solar gains on improving indoor thermal conditions, due to constant use by residents of glazing protections; • Considering that the present case study focuses on a building with an envelope with superior thermal criteria compared with other identified constructive systems used in local public social housing, the need it to provide constructive improvements for this building stock becomes clear.
A note should also be taken regarding the static model used in this study. The described constraints during the experimental campaign-related with equipment installation as well as with clothing and activity details-restricted obtaining further relevant data throughout the days for hourly or sub-hourly periods, such as the mean radiant temperature, air speed, metabolic rate, and clothing insulation. Although the use of the SCE framework in indoor thermal comfort analysis can be found for studies that compare its results with those of adaptive model as primary models [18], potential limitations are pointed out regarding its methodology, such as establishing fixed limits of comfort temperature considering indoor permanent heating/cooling habits-which are unrealistic for some specific countries and/or contexts where those habits are mainly intermittent or insignificant, a reason alternative approaches were specifically developed to fill this gap [7]-as well as not considering the repercussion of changes to specific indoor individual and environmental variables in adjusting those limits. The use of PMV method [45] is another possibility for such studies, although the use of reference values in the absence of specific data can strongly affect the resulting comfort ranges [62]-particularly for case studies where adaptive practices are common and therefore variations of input data are frequent-besides some limitations that were found regarding its applicability in field surveys [63].
In this context, a possible evolution of this work would be by providing more complete data through the use of calibrated dynamic thermal simulation models so as to provide simulated indoor environmental variables, along with more extensive surveys applied to several public social housing dwellings that could provide precise clothing insulation and/or activity metabolic rates according to typical occupancy profiles. Additionally, proper constructive retrofit improvements-combined or alone-should be studied for building geometry and envelope, using those models to perform specific dynamic thermal simulations in order to identify how to reduce discomfort, also considering the positive influence that actions from occupants may have. These solutions should be studied alongside with other possibilities like proper ventilation and shading devices during summer seasons, in order to identify its opportunities and threats for both present and future climate scenarios [18]. Thus, specific situations regarding the elderly or users with specific health problems should be regarded in order to establish specific indoor requirements beside thermal comfort models used, such as the impact of other factors like indoor air quality. The extension of study comprehensiveness beyond building thermal and comfort performances, like the consideration of other relevant factors such as intervention costs, constructive feasibility or occupant's acceptance, is also proposed as possible future work, considering the importance of these factors in public social housing contexts. | 15,710.8 | 2021-09-28T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Some Remarks about Entropy of Digital Filtered Signals
The finite numerical resolution of digital number representation has an impact on the properties of filters. Much effort has been done to develop efficient digital filters investigating the effects in the frequency response. However, it seems that there is less attention to the influence in the entropy by digital filtered signals due to the finite precision. To contribute in such a direction, this manuscript presents some remarks about the entropy of filtered signals. Three types of filters are investigated: Butterworth, Chebyshev, and elliptic. Using a boundary technique, the parameters of the filters are evaluated according to the word length of 16 or 32 bits. It has been shown that filtered signals have their entropy increased even if the filters are linear. A significant positive correlation (p < 0.05) was observed between order and Shannon entropy of the filtered signal using the elliptic filter. Comparing to signal-to-noise ratio, entropy seems more efficient at detecting the increasing of noise in a filtered signal. Such knowledge can be used as an additional condition for designing digital filters.
Introduction
Digital filters are discrete-time maps that perform mathematical operations on a sampled signal [1]. Frequency response is usually applied to characterize filters [2,3]. Two main classes of digital filters are generally used. When an impulse response is not zero for a finite number of samples, then we have the finite impulse response (FIR) filters. In the case where the impulse response produces an infinite number of non-zero samples, then we have the infinite impulse response (IIR) [4,5]. The great performance of digital filters is believed to be one of the reasons explaining the popularity of DSP devices [6].
The process of digital filtering is extensively used in many applications in communications, signal processing, electrical and biomedical engineering, and control [7][8][9][10][11][12][13][14][15]; for example, coding and compression, signal augmentation, denoising, amplitude and frequency demodulation, analog-to-digital conversions, shape detection, and extraction [16][17][18][19][20][21][22][23][24][25]. For some applications, nonlinearity is tailored to a specific purpose [26]. Recently, the authors of [27] designed a digital sigma-delta truncated infinite impulse response filter, which furnishes adequate rejection with a digital-to-analog converter of no more than 8 bits. The application in [27] is related to human body communication, which for many researchers is a promising research topic as it plays an important role in wireless body area networks because of its low power and hardware cost. In this area, it seems that digital filters of medium to low word length has again attracted the attention of researchers.
When digital filters are employed under fixed-point arithmetic platforms, e.g., microcontrollers, DSP, and FPGA, or with very demanding performance specifications, the importance of filter coefficient accuracy increases, because the signal may be distorted [28][29][30]. Thus, a common goal in the finite precision analysis is to choose a word length such that the digital system presents sufficiently accurate realization. This design should consider the complexity and cost of hardware and software [31].
In digital signal processing, the issues of finite word length are some of the most significant components when the discrete poles are very close to the unit circle. Mullis and Roberts [32] and Hwang [33] have demonstrated that the influence of quantization errors on the digital filter performance depends on the filter implementation. In addition, Rader and Gold [34] have shown that for a given filter implementation it is possible that small errors in the denominator or numerator coefficients may cause large pole or zero offset. Moreover, Goodall and Donoghue [35] and Jones et al. [36] have observed a significant sensitivity of coefficient word lengths. This fact relates to the inability of computers to represent the infinite nature of real sets [37]. The influence of computer limitations opens a new perspective for computer environment simulation. For example, Nepomuceno [38] presents a theorem that identifies the reliability of calculations performed at fixed points; in [39,40], a technique has been developed to decline a simulation if a mandatory accuracy is greater than the lower bound error, growing numerical reliability in simulation, and still in [41], the authors show how sensitive a simulated system is in different processors.
It seems clear that much research has been devoted to investigating the influence of finite precision on digital filters [32,34,36,42,43]. In those investigations, there are many cases where the quality of filter is measured using the filter response or signal-to-noise ratio (SNR) [43]. Despite the fact that the effect of filters on entropy has been pointed out since the work of Shannon [44], there is much less attention given to the entropy effects due to finite precision digital filters on the filtered signal. One work in this direction has been undertaken by Badii et al. [45], who show the influence of an infinite-impulse response in the fractal dimension of the attractor reconstructed from a filtered chaotic signal. Other works have employed entropy to the design of digital filters. For instance, Madan [46] has introduced the use of the maximum entropy method for the design of linear phase FIR digital filters. In [47], another attempt to use entropy in the design of digital FIR filters has been observed. However, no work has been found investigating the effects on entropy on a filtered signal by an IIR filter. This paper seeks to relate the computational limitations and the variation of the main parameters of a filter in the measured entropy. As entropy is a good index to detect increasing of noise in a signal, we have used a boundary technique to observe the effects of finite precision on the parameters of the filters according to the word length of 16 or 32 bits. We noticed that entropy is more sensitive than SNR. It was important to show that despite the ideal linear filter do not increase entropy, numerical experiments using the elliptic, Butterworth and Chebyshev filters have shown an increasing of entropy. Additionally, a positive correlation between order and entropy has been observed in the elliptic filter. This information can be useful to design or to evaluated digital filters in situations where the growth noise should be mitigated.
The remainder of this paper is organized as follows. The definitions of IIR, FIR filters, quantization, and entropy are given in Section 2 as well as three scenarios of the simulation. Section 3 presents the results, where three filter types are investigated: Butterworth, Chebyshev, and elliptic. The remaining section is devoted to summarizing our results.
IIR Filter
IIR digital filters are characterized by having infinite impulse response [48]. They have output feedback, which makes them interesting because they allow achieving a more selective frequency response with lower number of coefficients. IIR digital filters are represented by the following transfer function, where N and M are the degree of the numerator and denominator polynomial, respectively; b k and a l are the filter coefficients. To find the difference equation of the filter, the inverse z-transform of each side of the Equation (2) is taken. The result is as follows.
A more condensed form of the difference equation is and taking a 0 = 1, we have
Quantization Error
In the implementation of digital filters, the limitation of finite word length results in coefficient quantization errors, which may have unexpected effect in the frequency response [49]. This quantization error may be seen in a more realistic way if we consider the coefficients of the filter bounded from above and from below. Thus, quantizing can be seen in some way as adding a certain amount of noise. The fewer bits we use in quantization; the more noise is added. This is precisely the noise source shown in Figure 1.
Using a fixed point representation, the quantization error is given by where Q = 2 b and b is the number of bits. Thus, the coefficients of Equation (5) present lower limits given by whereas the upper limits are given by This is equivalent to say that the quantization error produces an interval around the desired value of the coefficients. In other words, the approximated value of the coefficientsâ k andb k are given by [44]. In our case, we are interested in looking the channel as a filter and noise source as a consequence of finite precision implementation of the digital filters.
Entropy
Entropy reflects a direct relationship between the length of the information and its uncertainty. As entropy quantifies probabilistic and repetitive events, it is utilized so generally in different fields [50]. The maturation of the idea of entropy of random variables and processes by Claude Shannon furnished the origins of information theory. In fact, Shannon's first name for this concept was uncertainty and that was the reason for many to define entropy as "a measure of the uncertainty about the outcome of a random process" [51]. The connection with the digital filter becomes clear when the original scheme proposed by Shannon is noticed. This scheme has been adapted in Figure 1. Shannon was interested in how a message could be transmitted through a channel from a transmitter to a destination. In this process, a key a feature is to consider the presence of noise. Here, we see this scheme from the perspective of filtering. Thus, the channel is our filter, which takes the input and changes it into the output. The noise source in our case comes from the finite precision hardware/software where the digital filter is implemented. It is evident that in real applications many other sources of noise should be considered. Nevertheless, for the purpose of this work, we focus our attention only in the operation of the filter as source of noise.
In Section 22, Shannon [44] states "The operation of the filter is essentially a linear transformation of coordinates." Shannon deduced this by considering the fact that if an ensemble having an entropy H 1 per degree of freedom in band W is passed through a filter with characteristic Y( f ) the output ensemble has an entropy given by Equation (13). In other words, the new component's frequencies are just the old ones multiplied by a gain. Moreover, Shannon has described this in such way that a filter presents a direct impact on the entropy of a signal. It is clear from Shannon's idea that signals filtered by ideal filters high-pass, low-pass, passband, or stopband should have their entropy decreased, as can be seen in [44] (p. 40) There are a few sorts of entropy characterized in the literature. With regards to thermodynamics, entropy alludes to the measure of disorder. In statistical mechanics, it refers to the amount of uncertainty in the system. In information theory, it is a proportion of the uncertainty related with a random variable [44,52]. Shannon provides the optimal number of binary digits to represent each event of a given message so that the average number of bits/events of the message is as small as possible. Shannon entropy is defined by [53] where H(X) is the entropy (bits), X is a symbol, P i is the probability value of symbol X, and L is the size of the signal. In our case, we measure the entropy for word lengths of 16 and 32 bits. In a complete random signal represented by a word length of 16 bits, the entropy is exactly 16 bits.
To proceed with the calculation of Shannon entropy, we apply the following standardization process to the output signal as follows, where y k is the signal; ceil(x) is a function that returns the smallest integer not less than x; min and max return the lowest and the largest value from a vector, respectively; and W L is the word length given in bits. Figure 2a,b presents a sinusoidal wave y(k) = 2 sin(2π2)t sampled at δ = 0.01 to illustrate this procedure. For this sine the calculated entropy using W L = 8 and Equation (14) is H = 4.71. A uniform distributed random signal is shown in Figure 2c, for which the calculated entropy is H(k) = 7.59 ± 0.03. Increasing the number of samples of the random signal, the entropy value approaches 8, as expected. A last observation regarding this procedure is related to the need to discard the transient and limit the number of samples for calculating the entropy of filtered signals. The number of samples has been adopted as 2 10 , which limits the measure of the entropy up to 10. Only in one table, we have adopted 2 12 samples. Tests made with greater number of samples showed us that this limit is sufficient to a reliable estimation of Shannon entropy in this work.
Entropy to Detect Noise
Entropy has been widely used to detect noise in signal and images [52,[54][55][56]. To show the effectiveness of entropy as a way to detect growth of noise in a signal, we have calculated the entropy changing the variance in Gaussian noise from σ = 0.01 to σ = 0.02. The mean has been kept as µ = 0. A sine wave is shown in Figure 3a. Gaussian noise with σ = 0.01 and σ = 0.02 has been added to this sine wave and shown in Figure 3b,c, respectively. The calculated entropy are (a) 5.66, (b) 5.95 ± 0.03, and (c) 6.23 ± 0.04. The level of Gaussian noise is quite unseeingly, yet the entropy has been sensitive for the increasing of noise. Entropy is a sensitive way to measure uncertainty. To further show this property, let us compare this measure with the well-known signal-to-noise ratio (SNR) given in dB by the following equation where A is root mean square (RMS) amplitude. Let the relation between the entropy of signal and noise (ESN) be where H is the entropy of the signal and noise. Using these two equations, we are going to compare the sensitivity in a little variation of noise. Table 1 shows the difference between SNR and ESN for the signal of Figure 1b (sine wave with Gaussian noise of σ = 0.01) and the same signal but with a σ given in the first column of Table 1. The message of this table is simple. For the case of σ = 0.0200, the SNR gives a difference of 2.6359 ± 0.6920 dB, whereas the entropy for this difference is 15.9343 ± 3.3038 dB. When the difference between the variance of the noise of these two signals are only 0.0125 − 0.01 = 0.0025, we have more confidence to use the ESN to detect this level of difference of noise, as the difference between the SNR of these two signal is 0.527 ± 0.589, whereas for ENS we have 4.023 ± 2.866. For the SNR case, the interval given by one σ is (−0.062;1.116) and we have lost the confidence to ensure that one of the signals presents a higher level of noise than other.
Numerical Experiments
In this section, three numerical experiments are described. For each experiment, the main steps are outlined. All the numerical experiments have been performed in Octave [57] on a Windows computer. These routines are available upon request. These experiments have been designed to check some effects of finite precision in entropy of digital filtered signals. In the Numerical Experiment 1, poles and zeros are perturbed by quantization error due to a 16-and 32-bit fixed point representation. Numerical Experiment 2 aims at examining the increasing of entropy using the elliptic filter. The correlation between order and entropy increasing is verified in the Numerical Experiment 3.
Numerical Experiment 1
The proposed scheme can be summarized in the following steps.
Step 1: Use the commands butter, cheby, or ellip of Octave to generate the poles and zeros of the transfer function according to Equation (1).
Step 2: Choose number of bits and calculate quantization error according to Equation (6).
Step 3: Insert the quantization error at the poles and zeros. Using a similar strategy adopted in [49], Equation (5) can be rewritten as follows.
Step 4: The signal is filtered using 50 different combinations described by Equations (7) and (8).
Step 5: Apply the standardization procedure to the filtered signal according Equation (15).
Step 6: Calculate the mean and standard deviation of the entropy from the 50 filtered signals.
In Numerical Experiment 1, the filter poles and zeros are perturbed with the effects of 16-and 32-bit quantization. The input signal is composed as a sum of sinusoidal signals of 50, 75, 125, and 150 Hz. The order of the filters is given in Table 2. Table 2. Order of the filters for Numerical Experiment 1. We have adopted 100 Hz as cut-off frequency in the case of low and high pass. Passband and stopband filters have been designed with 70 and 130 Hz.
Numerical Experiment 2
The following steps outline the Numerical Experiment 2.
Step 1: Use the command ellip of Octave to generate the poles and zeros of the transfer function (Equation (1)).
Step 2: Choice of input signal (3).
Step 3: The signal is filtered using 50 values of W L within 1024 to 6024.
Step 4: Apply the standardisation procedure to the filtered signal according Equation (15) Step 5: Filter signal using Equation (5).
Step 6: Compute the mean and standard deviation of entropy of the filtered signal.
In Numerical Experiment 2, entropy was calculated for the original signal 3 and the filtered signal using elliptic filters. To compare, the input signal was simulated without the filtered frequency components. The complete description of the input signal and the ideally filtered signal can be seen in Table 3. The variation of the length of the signal has been used here to calculate mean and standard deviation of the entropy. Table 3. Input signals for the Numerical Experiment 2. We have designed three types of signals composed by different summation of harmonics. The values of frequencies 1-6 are 40 Hz, 60 Hz, 80 Hz, 130 Hz, 150 Hz, and 170 Hz, respectively. To compare, the input signal was simulated without the filtered frequency components as shown in the third column. This is equivalent to produce an output by an ideal filter. In all cases, a sample rate of 0.001 s has been adopted. Different values or even variable sample rate has not been investigated in this work and let for future research.
Numerical Experiment 3
The following steps describe the Numerical Experiment 3.
Step 1: Use the command butter, cheby, or ellip of Octave to generate the poles and zeros of the transfer function, Equation (1).
Step 3: The signal is filtered using 50 different values of W L within 1024 to 6024.
Step 4: Apply the standardisation procedure to the filtered signal according Equation (15).
Step 5: Compute the mean and standard deviation of entropy of the filtered signal.
Step 6: Change the order of the filter from 1 to 8 for each of the steps 1 to 5.
In this experiment, the filter order was varied for an input signal with frequencies of 20, 60, and 80 Hz and the cut-off frequency of 60 Hz.
Results
The results of Numerical Experiment 1 are shown in Table 4. Table 5 shows the result of the Numerical Experiment 2, whereas Table 6 and Figure 4 show the results of Numerical Experiment 3.
Discussion and Conclusions
This work has investigated the effects of finite precision in the entropy of digital filtered signals. This allows us to quantify the introduction of noise due the action of such filters. We have shown that entropy is a good alternative to identify the presence of noise. It has presented a better result than the signal-to-noise ratio for small amount of variance. To observe the effects of entropy in filtered signals, we have designed three numerical experiments. In Numerical Experiment 1, we have evidenced the increasing of the entropy in all types of filters investigated (Butterworth, Chebyshev, and elliptic) for 16 and 32 bits. The entropy of the input signal is H = 4.9255, whereas in all the filtered signal the entropy is H > 5.32. This is not what is expected for an ideal linear filter (see [44]). We should notice, according to Table 2, that elliptic has been set up with the lowest order. Even in such circumstances, this type of filter has shown practically the same level of entropy in the filtered signal.
The results of Numerical Experiment 2 are shown in Table 5. In this case, an ideally filter is simulated by taking out some of the frequency components of the signal. The entropy of filtered signal has been significantly increased varying from 6.5 to almost 8.
In Numerical Experiment 3, we have noticed another feature as described in Table 6. This experiment shows a significant positive correlation at the 0.05 level (2-tailed) for elliptic with p-value equals to 0.030. From these experiments, it seems clear that the elliptic filter introduces more uncertainty, that is, entropy, to the filtered signal when compared to Butterworth and Chebyshev filters. Figure 4 shows the FFT of the signals. It is possible to notice a slight difference between subfigures (b) and (c).
The remarks made in this manuscript is coherent to what have been presented by DeBrunner et al. [47]. As we are focusing our attention to the source noise furnished by arithmetical operations (see Figure 1), design strategies that look for more efficient ways to implement mathematical expressions can be useful to reduce entropy. In our future work, we intend to test different topologies of filter (direct or cascade, for instance) to verify its influence in the increasing of entropy as done in this manuscript. This seems a quite reasonable pathway as the order is related to the increasing of the number of mathematical operations, which is a well-known source of the noise. We also intend to investigate the influence of sample rate and the number of samples in the computation of entropy. | 5,083.4 | 2020-03-01T00:00:00.000 | [
"Engineering",
"Mathematics",
"Computer Science"
] |
Impact of a district-wide health center strengthening intervention on healthcare utilization in rural Rwanda: Use of interrupted time series analysis
Background Evaluations of health systems strengthening (HSS) interventions using observational data are rarely used for causal inference due to limited data availability. Routinely collected national data allow use of quasi-experimental designs such as interrupted time series (ITS). Rwanda has invested in a robust electronic health management information system (HMIS) that captures monthly healthcare utilization data. We used ITS to evaluate impact of an HSS intervention to improve primary health care facility readiness on health service utilization in two rural districts of Rwanda. Methods We used controlled ITS analysis to compare changes in healthcare utilization at health centers (HC) that received the intervention (n = 13) to propensity score matched non-intervention health centers in Rwanda (n = 86) from January 2008 to December 2012. HC support included infrastructure renovation, salary support, medical equipment, referral network strengthening, and clinical training. Baseline quarterly mean outpatient visit rates and population density were used to model propensity scores. The intervention began in May 2010 and was implemented over a twelve-month period. We used monthly healthcare utilization data from the national Rwandan HMIS to study changes in the (1) number of facility deliveries per 10,000 women, (2) number of referrals for high risk pregnancy per 100,000 women, and (3) the number of outpatient visits performed per 1,000 catchment population. Results PHIT HC experienced significantly higher monthly delivery rates post-HSS during the April-June season than comparison (3.19/10,000, 95% CI: [0.27, 6.10]). In 2010, this represented a 13% relative increase, and in 2011, this represented a 23% relative increase. The post-HSS change in monthly rate of high-risk pregnancies referred increased slightly in intervention compared to control HC (0.03/10,000, 95% CI: [-0.007, 0.06]). There was a small immediate post-HSS increase in outpatient visit rates in intervention compared to control HC (6.64/1,000, 95% CI: [-13.52, 26.81]). Conclusion We failed to find strong evidence of post-HSS increases in outpatient visit rates or referral rates at health centers, which could be explained by small sample size and high baseline nation-wide health service coverage. However, our findings demonstrate that high quality routinely collected health facility data combined with ITS can be used for rigorous policy evaluation in resource-limited settings.
Introduction
Health systems strengthening (HSS) interventions have become popular strategies to advance population health gains in low income countries [1][2][3][4][5]. Instead of focusing on disease-specific, or "vertical" programs, HSS interventions improve the platform of health service delivery across all health system components [1,3]. While the evidence base of successful HSS interventions that improve processes and health outcomes is growing [5,6], few evaluations have quantified the resulting changes in healthcare utilization, a critical step towards increasing coverage of health services [7,8]. Understanding the link between investment in systems and uptake can inform health resource allocation and planning efforts in low income settings [9]. However, the cost of collecting novel data to measure the effect of HSS interventions presents a barrier to measuring impact and often results in inadequate evaluation.
An underutilized resource that could be used to address this issue is the wealth of routinely collected service utilization data produced by national health management information systems (HMIS) in low income countries [8,10]. Analysis using HMIS leverages health systems time series data that are already being used for management and improvement, while allowing for evaluation designs informed by principles of causal inference [11]. A potential reason for limited use of HMIS for HSS evaluations is the perception that data quality is poor [12,13], despite several positive results following data quality assessments of these systems in low income countries [14][15][16].
Rwanda, a small, hilly country in East Africa, has embraced the use of HMIS and is well positioned to take advantage of this resource. Rwanda has heavily invested in its Rwanda HMIS (RHMIS), which provides managers and policy makers with healthcare utilization data from all public health facilities in the country. Reports on utilization of outpatient and inpatient services are collected from registers by facility data staff at every public health facility in the country, aggregated at the district level, and are subsequently uploaded to a web-based central repository at the national level [17]. A recent national data quality assessment demonstrated high levels of completeness and internal consistency of RHMIS [15]. An accuracy assessment that sampled facility records across three rural districts found concordance between hard copy reports and electronic RHMIS reports to be 73.3% and concordance between facility registers and electronic RHMIS reports to be 70.6% [18]. Assessments of external validity comparing coverage estimates for family planning and 4 ANC visits in RHMIS to the Demographic and Health household Survey (DHS) conducted by RHMIS analysts were found to be comparable. Institutional delivery estimates were slightly lower in RHMIS compared to DHS in 2010 (57% vs. 69%) [19].
Rwanda has begun to leverage this resource in national evaluations of several health policies, including performance-based financing (PBF) [20], Community Health Worker (CHW) programs [21], and uptake of maternal and reproductive health services following implementation of Human Immunodeficiency Virus (HIV) control programs [22]. These data systems have allowed scientists and policymakers to describe improvements in health outcomes that have occurred in Rwanda over the past decade, which include steep reductions in under-five mortality [23][24][25], as well as increased coverage of antiretroviral therapy services [26] and maternal health services [27], suggesting improved access to a range of health services across the country [20,[28][29][30][31]. While providing important contributions to the health systems literature by documenting health gains in Rwanda, these studies largely relied on cross-sectional or pre-post designs without a control, and so have limited utility for causal inference. Other countries in the region including Uganda and Burundi have also used their national health information systems to assess the impact of national health care financing policies on service utilization [32,33]. Basinga et al.'s evaluation of PBF relied on a randomized controlled trial design, and Falisse et al. employed a difference-in-difference analysis using two single time points for the intervention and control series in a PBF evaluation in Burundi, but few other analyses using routinely collected national data have leveraged the longitudinal data structure or the ability to sample control series from a national census.
In this manuscript, we evaluate the impact of a district-level HSS intervention implemented by the Rwanda Population Health Implementation and Training (PHIT) partnership in rural Rwanda on district-level health service utilization [34,35]. Using RHMIS as our data source, we use five years of monthly health center-level time series data to conduct a propensity score matched controlled interrupted time series analysis to estimate the district-level impact of the PHIT HSS intervention on delivery rates, outpatient visits rates, and referral rates for high risk pregnancies. Controlled interrupted time series analysis allows for unbiased estimation of population level effects of an intervention, assuming that no other co-interventions occur at the same time as the primary intervention [36]. This evaluation will inform other global health researchers in Rwanda and elsewhere about the effectiveness of health systems interventions on increasing service uptake, and demonstrate the power of combining national HMIS time series data with counterfactual-based methods to allow causal conclusions to be drawn from HSS impact evaluations in global health.
Study setting, population, and intervention
Rwanda has a population of 10.5 million, 83% of whom live in rural areas [37]. In 2012, Rwanda had 748 public (government-run) health centers and 174 private health centers, with most care provided at public health centers (85-89%) [15]. The first facility point of contact for patients are public health centers, which provide primary care and maternal and child health services. Each of Rwanda's 30 districts has roughly fifteen health centers, with the goal of every patient in Rwanda to be living within five kilometers of a health facility. Rwanda's health system is also served by a network of CHW who administer treatments for childhood illnesses (community integrated management of childhood illness), or refer patients to health centers or hospitals for further care [21]. Health centers in Rwanda provide a minimum package of services that span promotional (child growth monitoring and community health insurance), preventive (vaccination, prenatal and postnatal care) and curative activities (child health care, uncomplicated deliveries, HIV, drug dispensation). These services are offered at all health centers in the country [31]. The CHW program was implemented between 2008 and 2011, and overlapped with the implementation of our HSS intervention.
In 2009, Partners In Health (PIH) and the Government of Rwanda (GOR) entered a partnership to implement a five-year district-wide HSS intervention in two rural districts of Rwanda serving a catchment population of 480,000, which lagged behind the rest of the country in terms of health and social indicators [27,34]. These districts were chosen because PIH was already supporting the GOR through provision of technical and financial support to two district hospitals and seven health centers in these districts prior to the PHIT intervention. For this analysis, we use ITS to study the impact of the first component intervention that began in May 2010 and consisted of targeted instrumental support to PHIT health centers. This included a data-driven gap analysis to assess facility readiness among the fourteen health centers in the intervention area that had not received partnership support prior to the HSS intervention. Facility surveys were developed to ascertain dimensions of facility readiness, guided by the World Health Organization (WHO) health systems building block framework [9] and based on the Services Provision Assessment [38] and the annual health facility survey which was in use nationally at the start of the intervention. Core domains included infrastructure, human resources, monitoring and evaluation, and supplies with an emphasis on data utilization for decision-making. Partnership representatives met with health facility leadership to discuss prioritization of resource allocation based on survey results. Based on review of the results and these discussions, limited funds (average: 18,000 USD/health center) were transferred to intervention facilities to address prioritized gaps. Specific areas of focus varied by facility and ranged from investments in health center management, infrastructure renovations, medical equipment, salary support for additional health center staff, and social support for vulnerable patients resulting in overall improvement of facility readiness to provide care [35]. Following this work, additional district-wide interventions focused on further strengthening facility and CHW care quality and service utilization in the intervention area [39, 40].
Study design
We developed a conceptual framework describing the pathway from intervention to increased uptake of services among people living in the intervention catchment area (Fig 1). We hypothesized that the improved facility readiness to provide high quality care through strengthened infrastructure, supplies, staff and information systems would be recognized by community members who would be more likely to seek care at the intervention facilities and recommend use by others [35]. This framework was supported by research on factors associated with care seeking behavior have shown that availability of equipment and medicines leads to increased utilization [41], and that perceived poor quality of facilities can lead to reduced care seeking by pregnant women [42].
We framed our causal question as follows: what is the difference in level and monthly trend in mean health service utilization rates comparing observed rates from PHIT-supported health centers to the rates they would have had if they had not received the HSS intervention? Controlled interrupted time series operationalizes this question by modeling the counterfactual using segmented linear regression and the post-intervention level and trend from a control series [43].
There were a total of 21 health centers in the intervention districts, seven of which had received support from PIH prior to the PHIT HSS intervention. These seven health centers were excluded because we were interested in the impact of PHIT HSS on health centers which had never received any non-governmental support, leaving us with 14 PHIT intervention health centers [35]. Our control series contained 409 government-run health centers that were also providing services. We sought to limit our control series to public health centers because the staff, infrastructure and equipment at private health centers would not reflect those available at the government-run intervention health centers prior to HSS. We excluded 29 health centers due to incomplete reporting over the study period, and the remaining 380 (93%) were eligible for matching.
Data source
RHMIS data were migrated from a SQL-server database to a DHIS2 platform in 2011. We merged data from the two databases to generate an analysis dataset spanning January 2008 through December 2012.
Outcomes
Utilization of maternal health and outpatient visits were chosen a priori because most of the investments were directed at these services, through investments in improved maternity infrastructure, clinical mentoring and essential medicines and equipment for maternal and newborn care [35]. Using the RHMIS data, we constructed a dataset containing variables on maternal health (new antenatal care registrations, women with 4 standard antenatal care visits, facility deliveries, referrals for high risk pregnancy, family planning) outpatient visits, and child care (diphtheria-pertussis-tetanus (DPT) DPT1, DPT3, and BCG vaccination visits).
We converted these metrics to rates using population estimates from the Ministry of Health [37] and measured differences in the monthly number of facility deliveries per 1,000 women, the monthly number of referrals for high risk pregnancy per 10,000 women, monthly number of outpatient (OPD) visits per 1,000 catchment population, monthly number of DPT1 vaccination visits per 1,000 children 0-11 months, monthly number of DPT3 vaccination visits per 1,000 children 0-11 months, and monthly number of BCG vaccination visits per 1,000 children 0-11 months between the intervention health centers and propensity score matched health centers. Though our outcomes do not incorporate person-time, we refer to them as rates for succinctness. For analysis, monthly rates were aggregated by intervention group (PHIT versus propensity score matched non-intervention series).
Statistical analysis
To account for differences in health center characteristics across intervention and control groups and select control facilities that were similar to intervention facilities at baseline with respect to health center characteristics, propensity scores were derived for each health center using multiple logistic regression models [44]. We limited the number of covariates included in the model due to the small number of intervention health centers (n = 14). We hypothesized that population density, outpatient visit rates, and delivery rates would be associated with success of the PHIT intervention and so we wanted to match intervention health centers to control health centers with similar characteristics at baseline. We chose to restrict the baseline period from January 2008 to December 2009 to provide two full years, and eligible control health centers to those that had no more than four missing HMIS reports during the baseline period. We failed to find evidence of association between delivery rates and treatment after adjusting for population density and outpatient rates, so our final model included population density (continuous, number of people per square kilometer in a catchment area sector), and four covariates that summarized the monthly average outpatient rate at each health center over six month intervals (January 2008-June-2008, July 2008-December 2008, January 2009-June 2009, July 2009-December 2009). To improve precision, each intervention health center was matched to up to ten control health centers, using a caliper match of ±0.05 propensity score units [45,46]. We assessed the performance of the propensity score match by comparing the standardized difference in health service utilization between intervention and control health centers before and after applying the match [47, 48].
We used controlled interrupted time series analysis to study trends in healthcare utilization variables in the intervention group relative to the comparison group. We used the intervention date of May 2010 and fit time series models to test whether differences in changes in level or trend in indicators were statistically significant between the two groups. Controlled interrupted time series produces two main results of interest. The first is the difference in postimplementation change in level of mean outcome in the intervention relative to the control group, and the second is the difference in post-implementation trend in outcome in the intervention relative to the control group. These results are beta coefficient estimates produced by the time series models. For our study, we interpret these beta coefficients as 1) the difference in mean health service utilization rate from the pre-intervention to post-intervention period comparing the intervention facilities to control facilities, and 2) the difference in monthly change in service utilization rate from pre-intervention to post-intervention period comparing intervention facilities to control facilities. We plotted our results using fitted line segments to visualize these pre and post-intervention changes in level and trend by intervention group. The trend coefficient in our model allowed us to estimate any changes that would arise in our intervention over time, such as delayed improvements following the initial investments into the system.
We used generalized least squares (GLS) models including autocorrelation terms for both moving average or autoregressive processes that were assessed independently for each outcome as described by Wagner et al. [43]. Lag terms were determined using Durbin-Watson tests and autocorrelation and partial autocorrelation plots. We used a GLS model with an autocorrelation lag term of 4 and a moving average lag term of 4 to analyze trends in delivery rates. For OPD, we used a GLS model with a moving average lag term of 1. Finally, we used a GLS model with an autocorrelation lag term of 1 and a moving average lag term of 1 for analysis of rates of referral for high risk pregnancy. We chose to control for seasonality by including a seasonal dummy variable corresponding to three month quarters (January to March, April to June, July-September, and October to December) for facility deliveries and OPD [49]. This seasonality dummy was not included in the model for high risk pregnancy because we failed to find any statistically significant associations between seasonality and trends in this variable. We also conducted post hoc tests for effect modification of the post-implementation effect on healthcare utilization on the multiplicative scale by season in the April-June quarter using a multiplicative interaction term. Analysis was conducted SAS v. 9.4 (propensity score matching) and R v. 3.1.0 (controlled interrupted time series) and. Full details regarding the statistical models are provided in the supplementary appendix (S1 File). Statistical significance was assessed at the 0.05 level, and results are presented as point estimates and 95% confidence intervals (95% CI).
Baseline characteristics
We present baseline characteristics of the intervention and control facilities in Table 1. Intervention facilities covered a much smaller total population than control facilities (254,656 v. 8,474,422) and were distributed over much smaller geographic area. Intervention facilities had slightly fewer monthly new ANC registrations on average during the baseline period of 2008-2009 compared to comparison facilities (median: 44 v. 62), and also fewer monthly outpatient visits (median: 945 v. 1326) compared to comparison facilities. Monthly facility deliveries over the baseline period were similar for both groups. Both intervention and comparison facilities had few referrals for high-risk pregnancies.
Propensity score matching resulted in 13 intervention health centers matched to 86 control health centers on population density and monthly outpatient visits over four six-month intervals between January 2008 -December 2009. Parameter estimates and kernel density plots showing results of the matching are presented in the supplementary content (S1 Fig, S1 Table). Propensity score matching yielded improved balance with respect to facility-based deliveries (standardized difference post-matching compared to pre-matching: -0.
Referrals for high-risk pregnancies
Over the period prior to HSS, the mean rate of high-risk pregnancy referrals at intervention facilities was lower than at control facilities (-0.82/10,000, 95% CI: [-1.26, -0.39]) (Fig 3). Mean referral rates at intervention facilities were 0.035/10,000 higher than the rates in comparison
Outpatient visits
At baseline, the average outpatient visit rate was 26% lower in PHIT health centers compared to comparison health centers (13.3/1,000 people, 95% CI: [-29.1, 2.6]). There were no major differences in level or trend in mean monthly outpatient visit rates in PHIT health centers relative to comparison facilities following implementation of HSS (Fig 4). While there was an immediate 27% increase (18.0 visits per 1,000 people per month) in the rate of outpatient visits in intervention facilities compared to control facilities in the period following HSS, (6.64/ 1,000, 95% CI: [-13.52, 26.81]), this did not reach statistical significance. Over time there was a decline in the trend in monthly mean outpatient visit rate in the comparison facilities (-0.81/ 1,000, 95% CI: [-1.69, 0.08]). There was a slight increasing trend in monthly mean outpatient rates following implementation of HSS in the intervention facilities compared to the
Other indicators
We found no significant differences in level or trend for rates of childhood vaccinations (DTP1, DTP3, BCG), new antenatal care registrations, or women with 4 standard antenatal care visits in the intervention facilities relative to comparison facilities (S5, S6, S7, S8 and S9 Tables).
Discussion
We present results from one of the first evaluations of an HSS intervention on health service utilization conducted using controlled interrupted time series and routinely collected HMIS data. Our findings showed that the impact of the PHIT HSS intervention on service utilization was constrained to facility-based delivery rates. We failed to find evidence that the HSS intervention resulted in increases in monthly outpatient visit rates or rates of referral for high risk pregnancy over and above those found in comparison facilities. For deliveries, we found that the increase in rates following the PHIT HSS intervention was restricted from the April to June season. For outpatient visit rates, even though we did not find evidence of a significant increase in level or trend in PHIT compared to control health centers following HSS, the level and trend in PHIT health centers remained higher than in control over the post-implementation period despite having lower rates at baseline. Facility-based deliveries were declining in the intervention health centers prior to HSS, with significant increase in mean delivery rates during summer months over the implementation period. Prior to the intervention, PIH had provided support to the two district hospitals and only seven health centers (excluded from the intervention group) [35]. One possible explanation of the decline in the pre-HSS period was patients choosing to go to health centers that were already receiving non-governmental supports within the intervention districts over their local health centers. There is growing evidence that women will indeed bypass facilities which they perceive as providing worse quality of care [42]. It is possible that positive messages about improved quality following the intervention combined with better access to health centers for deliveries during April-June may have driven the increase in delivery rates. The April-June season overlaps with the end of the short rainy season and beginning of the long dry season in Rwanda, during which roads improve and therefore accessibility to health centers increases.
Following the health center strengthening intervention, implementation of a quality of care intervention began that could also have led to increased utilization of maternal health services began in May 2011 [40]. Improving service readiness at multiple facilities in the intervention area would have allowed more women to access health services closer to their homes-a predictor that has found to be associated with facility-based deliveries in other studies in Rwanda [50]. While previous studies have shown that health facility-level improvements focused just on supplies and other readiness factors do not always lead to increases in utilization in other developing countries [51-53], we demonstrate that in Rwanda, these improvements could have led to increased utilization for maternal health services. We attribute these increases to a comprehensive approach to HSS-our intervention was guided by all six of the WHO Building Blocks and included a strong focus on the quality of care delivered to address multiple components critical to responsive health service delivery [34, 35].
We did not find significant increases in referral rates in intervention health centers compared to control health centers following HSS. Since our health center strengthening intervention also included the provision of ambulances and materials to strengthen communication between health centers and hospitals, we hypothesized that these improvements could have led to increases in referrals for high-risk pregnancies. However, our integrated approach to improving health center readiness through infrastructure renovations, availability of equipment and supplies, staffing and social support could have also decreased barriers to care at intervention facilities. Increased receipt of maternal care at intervention health centers could have led to reductions in management of complicated deliveries through better care and skills. Our finding of suggestive trends in control areas post-HSS suggests that some other national co-intervention may have occurred concurrently with the PHIT intervention, thus mitigating our ability to detect additional improvement.
We failed to find significant increases in outpatient visit rates in intervention relative to the comparison health centers. Our intervention occurred during a period of rapid change in Rwanda's health system [24,25]. Many national policies to expand access to facility-based care and decentralize decision-making power were introduced in the years preceding the intervention and during the intervention [54]. We saw suggestion of decreasing trends in monthly outpatient visit rates in control health centers which suggests that this roll-out of communitybased treatment of children under-five may have decreased use of services at facilities in the intervention and comparison areas. This hypothesis is supported by a recent analysis of community-based integrated community case management of childhood illness in Rwanda found decreases in facility utilization of under-5 services over the period of implementation (between January 2010 and December 2011) [21]. CHW strengthening was an explicit component of our broader health system intervention and may have been implemented with greater intensity in our districts compared to others, yielding differing patterns of healthcare utilization with regards to outpatient visits [34]. Given that implementation of this national policy occurred alongside our intervention, it is possible that it may have attenuated increases in facility-based outpatient visits during our study period.
The impact of various national policies to increase demand for maternal and child health services in Rwanda has been described in the literature [20,[28][29][30][31]. Researchers have argued that the collective impact of decentralization of health policy decision-making, communitybased health insurance and the introduction of PBF have led to higher quality of services and increased utilization of public health services across the country, though only the PBF evaluation used a randomized controlled trial design. Further analyses have suggested that the benefits of these interventions have allowed the poorest patients in Rwanda to increasingly access health services over time [20,30], meaning that the additional improvement attributable to the PHIT HSS intervention would be challenging to measure. We did not see any differences in healthcare utilization at intervention compared to non-intervention health centers across a number of services, many of which started at very high coverage rates prior to implementation of our HSS intervention. Antenatal care coverage and vaccination coverage were roughly 90% nation-wide prior to implementation of our HSS intervention [27], leaving little room for improvement over the study period.
Few evaluations exist to estimate the impact of HSS interventions to improve district facility readiness on service utilization, and fewer still are conducted using designs that allow causal effects to be estimated. Our study provides findings from an evaluation of such an intervention and shows that even in the context of national policies aimed at increasing utilization, facility deliveries in intervention health centers increased relative to control health centers in specific seasons. We would expect that in countries where access to health services is more limited and fewer national health financing schemes are in place to encourage demand, a similar intervention could yield greater increases in healthcare utilization. Health systems researchers in northern Uganda found increases in facility deliveries following a combined community-level and health facility-level intervention that occurred between January 2010 and September 2011 to improve quality of services, though they presented results from their intervention area only [55].
Our study had several limitations. First, while our data source has been shown to have strong internal validity [15], full assessments of reliability and accuracy of RHMIS have yet to be conducted. However, preliminary assessments of external validity comparing RHMIS to DHS estimates for coverage indicators show little discordance [19]. Furthermore, in order to bias our results, RHMIS data quality would have had to have changed differentially at the same time as the HSS intervention between intervention and control areas. The electronic system transitioned from a French SQL-based database to an English DHIS2 database in 2012, which limited variables to those that were consistent across systems, though this affected both intervention and control facilities. We were limited in our ability to match our intervention health centers to control health centers using propensity scores due to small sample size for our intervention group and lack of detailed information on health center characteristics in all public health centers in Rwanda. Further limitations of this analysis include a lack of data on contextual factors associated with health centers that could help explain these findings. Use of geospatial analysis and contextual information at the health center and district level would provide additional information on possible mechanisms through which the intervention was successful or not. Future studies should account for district-level and health center-level interventions in the design, and power their studies to estimate effects at each level. These results are restricted to public health centers; since most care in Rwanda is provided at public facilities, we expect that bias due to differential increases and use of private health centers post-HSS is limited. Our quasi-experimental, longitudinal design controls for the impact of any national co-interventions that were rolled out across the country during the study period, strengthening the validity of our findings. Our HSS intervention was heterogeneous, with allocation of funds towards strengthening health systems building blocks tailored to specific needs of different health centers, making it difficult to attribute impact to a specific intervention component. Since we used aggregate data for our analysis, we cannot assess the impact of the intervention on individual patients to determine whether we successfully targeted the most vulnerable in our catchment populations.
Strengths of this analysis include use of five years of monthly time series HMIS data to allow modeling of counterfactuals following our HSS intervention, and use of propensity score matching to simulate exchangeability of intervention and comparison facilities by matching on baseline trend and health center covariates. Since the HSS intervention start date is clear, and control time series were included to account for other co-interventions that might influence utilization, we are unlikely to have introduced systematic bias in our analysis.
In summary, we evaluated the impact of a heterogeneous HSS intervention on health service utilization in rural Rwanda using routinely collected HMIS data. We found that facility delivery rates increased post-HSS in intervention compared to non-intervention health centers, though this increase was restricted to the April-June quarter in years following HSS, though changes in other patterns of utilization were limited. This example demonstrates how routinely collected HMIS data in Sub-Saharan Africa can be used for quasi-experimental evaluation designs. Time series data allow researchers to move away from pre-post and cross-sectional evaluation designs to quasi-experimental designs using counterfactuals. In addition to informing program implementation and providing useful estimates of impact for health providers, such analyses also encourage strengthened information systems and national engagement in policy research [8]. This manuscript involved collaboration between academic, nongovernmental and governmental scientists, and was supported through training funds to develop capacity in Rwanda to study national policies using interrupted time series analysis and HMIS data. We hope other implementers and policy researchers in low and middle income countries will be encouraged by such studies to use their national data sources for implementation and policy research. Further studies using these methods are needed to estimate effects of HSS interventions have on utilization and population health outcomes.
Acknowledgments
This study was completed as part of training in interrupted time series analysis developed and led by MRL and sponsored by funds from the Doris Duke Charitable Foundation's African Health Initiative. The authors thank the Rwanda Ministry of Health for providing expertise in using their national health management information system data. We also thank Partners In Health, the Brigham and Women's Hospital and Harvard Medical School for their logistical and technical support with this research study. We gratefully acknowledge the financial support from the Doris Duke Charitable Foundation's African Health Initiative, through PHIT Partnership funding for implementation of this study (Grant number: 2009P001941). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. HSI was supported in part by the National Institutes of Health research training grant (NIH, T32 CA 009001). Dr. Law received salary support through a Canada Research Chair and a Michael Smith Foundation for Health Research Scholar Award. | 7,576.8 | 2017-08-01T00:00:00.000 | [
"Economics",
"Medicine"
] |
THE IMPACT OF INTEGRATION ON SOCIAL AND ECONOMIC INEQUALITY OF REGIONS
Integration is always effect positive on the economy of the country: due to increase of competition between firms, growth of production’s volume, more effective territorial distribution of economic activity, reducing the cost of resources and the cost of goods to the value of the border’s costs. Domestically, the more successful regions receive more benefits from the integration that enhances the existing socio-economic inequality among the regions. The aim of the study is to identify and evaluate the effect of integration factors on regional inequality, building an econometric model of such a relationship and test a model in the Russian regions.
It is important to realize the mechanism of action of different forms of Russia integration into the world economy on the state of its regions to prevent enhancement of their inequality. The results of empiric studies with use of modern economic and mathematical methods and theoretical provisions of leading world scientific schools shall be the basis for understanding this mechanism.
The aim of the study is to identify and evaluate the effect of integration factors on regional inequality, building an econometric model of such a relationship and test a model in the Russian regions. The calculations were performed on 83 regions of Russia in the period from 2002 to 2013 with the help of the least squares method. The article presents the results of the analysis of the dynamics of integration factors of socio-economic inequalities in the regions.
Theoretical background and bibliography. The integration processes in the world economy firstly occur through changes in trade. It is recognized that within the long-term period liberalization is positive as it leads to obtaining economic and noneconomic benefits for trading parties [1,2]. At that its process is accompanied by two types of short-term expenses: distribution costs (protected economy sectors come off losers) and balance of payments position related to rapid growth of import [3].
Some works are related to the problems of integration processes influence on the regional inequality within the countries. In 1996 the integration processes in the countries of African South were described in the works due to the fact that they would be determined by interaction of the global and regional circumstances, liberality and active regional inequality [4]. It is assumed that obtaining positive effects is possible only in the long run and by means of definite losses.
In this sphere of the study action of integration on interregional inequality is analyzed through different trends. Firstly, this is change of industrial concentration and distribution of human resources due to reduction in transport expenditures [4].
Secondly, the integration processes are commonly associated with trade liberalization. Open economy creates the conditions for manifestation of scale effects. The majority of studies of regional concentration is based on the models of new economic geography, where returns to scale proceed from external technology factors. The models [6] explain how the factors of "the second nature" determine inequality in regional salaries and distribution of economic activity, that economic activity concentration is optional result of influence of the Marshall's theory external factors. The regional inequality is mainly intensified where the scale effect is observed. Minor inverse relation between the regional inequality and economic development level (as an integration prerequisite) was revealed in a range of the subsystems and European regions [7]. Intensification of regional inequality under conditions of the integration processes shall be considered through influence of economic growth. A variety of scientists studied comprehensively the problems of interconnection of globalization, regional inequality, growth and development of countries with the basis of the European countries' experience [8]. The study of interrelation of economic growth (as an inherent attribute of integration processes) and inequality explains the positive effect of redistribution for economic growth [9].
Relations of economic growth and inequality during 2000s were analyzed mainly in economies with identical participants, at that subjects for study were often interactions between endogenous technical changes resulted from improvement in quality of innovations and dynamics of reward structure [10].
The works with advisory character for economic policy of constraining interregional inequality caused by integration processes are of interest. As per the results of analysis of relations of globalization and global inequality, poverty and marginalization of the society it is suggested to pursue the alternative policy of containment of desperate poverty and inequality growth [9]. The similar conclusions are made in the work of Basu K. [12]; the importance of a rational share of the state participation in reduction in regional economic inequality is emphasized by Bowles S. and Gintis H. [13].
3. Research methodology. The task of assessment consists in obtaining the required and sufficient information about influence of integration on social and economic inequality of the regions for use at pursuing the corresponding regional policy. We suggest the following formulation of the model of influence: where Y is social and economic inequality of regions; L is human resources; K is physical capital; Intel is intellectual capital; Dist is market access; Exp is export; Imp is import; Spec is specialization.
The mathematical formulation of the assessment task consists in necessity of analysis of the dynamics of resulting indicator and factor indicators, determination their interrelation. The following stage of the procedure is determination of resulting indicator of the dynamics of regional inequality. If we assess the influence of globalization factor on social and economic inequality of the regions, assessment of static inequality level, for a definite moment of time, will not provide us with the required information. it is important to realize how globalization processes which involve Russia (trade liberalization, economic integration, growth of investment flows, inclusion in world financial markets, migrations, exchange of knowledge and technologies) change economic geography of the country, which regions intensify its position and which of them loss it. For such assessment the most appropriate indicator is indicator of the region percentage in aggregate Gross Regional Product (GRP) of the country which shall be considered in dynamics.
Let us to determine the system of factor indicators. The first trend of such action occurs through enhancement of efficiency of manufacturing production location, change of region sectoral structure and appearance of agglomerative effects [14]. Change of specialization occurs significantly slower than change of geographical concentration. At active participation in world economic relations the regions start to get profits from specialization in manufacture of export-oriented production and consequently the specialization level shall rise. The specialization dynamics shall be considered by the basic types of manufacturing activity.
In addition to total specialization indicator we suggest to assess agglomerative effects by four economy sectors. This will allow determining the extent to which the regions use the returns of concentration of one or another type of the economic activity. In particular, it is necessary to calculate agglomerative effects in agriculture, extractive industry, manufacturing and services. Agglomerative effects are determined as a sector size (that is number of employment in the region economy sector) multiplied by its specialization index. If the index value exceeds 1, the region specialization in this economy sector takes place.
One more factor indicator of this group is region access to large market. We consider that to reveal and assess influence of nothing else but globalization factors on social and economic inequality it is necessary to separately analyze the relation of the region to inner and outer markets. The region access to inner markets can be estimated by distance to the nearest big city (with population over 1 mln. people). Note that a distance to the state capital city is commonly estimated in the foreign literature. However our understanding is that for the Russian regions taking into account the country territorial sizes such approach will be incorrect and have controversial results. For this purpose 12 cities with population over 1 mln. people were selected. By every region the nearest big city was determined and distance to it along motor roads was calculated.
The second block of the indicators assesses influence of the globalization factors on social and economic inequality of the regions through growth of industrial and trade outputs. Here it is reasonable to calculate such indicators as volume of export and im-port of the regions (paying special attention to export as import is basically transitional in a range of boundary regions considered in the statistics). In addition to determination of absolute indicator and analysis of their dynamics it is necessary to assess the degree of the region involvement in external trade, to this end the indicators of export and import quotas shall be calculated. Both indicators are calculated as export/import relation to GRP.
One more indicator the dynamics analysis of which is necessary for determination of influence of the globalization factors on social and economic inequality of regions is an index of manufacturing concentration in regions -Herfindahl-Hirschman Index. Assessment of concentration of production and economic activity can be carried out both at the level of the region companies (this allows to form representative sampling of the data for empiric studies), and at the level of regions or cities (in this case for expansion of sampling it is appropriate to use panel data). For assessment of geographic concentration of industry and economic activity we suggest to use Herfindahl-Hirschman Index.
The third block of the indicators assesses influence of the globalization factors on social and economic inequality of the regions through change of territorial employment structure and labor efficiency. In this complex it is appropriate to analyze the dynamic of labor efficiency in the regions, change of a region share in total number of employment in the economy and to calculate the Herfindahl-Hirschman concentration index for employment in the economy.
The labor efficiency dynamics is an important indicator of the region development as, for instance, its growth shall exceed growth of wages (expenses for remuneration of labor) and this is mandatory requirement of intensification of the economy. The labor efficiency does not just reflect the efficiency labor utilization. In case of active implementation of new technologies into production, modern methods of managements the labor efficiency shall also rise. The dynamics of the share of the region in total employment is indicative of human resources transfer. If human resources are concentrated in more "prosperous" big regions then we can speak of growth of interregional inequality. The Herfindahl-Hirschman index calculated by the employment in the Russian regions economy is also indicative of such concentration tendency.
The fourth block of the indicators reflects influence of the globalization factors on social and economic inequality of the regions through the dynamics of capital investments including direct foreign investments. In this connection our understanding is that the indicators of this block of the assessment procedure can include the volume of direct foreign investments in the regional economy, Herfindahl-Hirschman index for capital investments, as well as such indicator of capital consumption as density of motor hardsurface roads. The motor roads density does not just reflect capital consumption in the regional economy but also is an important growth factor. High quality of motor roads of the region have a positive effect on access to the inner market and in case of boundary regions to outer markets.
As it is mentioned above the regional inequality occurs through the economic growth which in "prosperous" regions has higher rates than in "poor" regions. Addressing to theories of economic growth we can see that besides the basic factors such as labor and capital the third factor in the modern studies is influence of scientific and technical progress. This factor is suggested to consider in a separate fifth block, the basic indicator will be export and import of technologies to the Russia's regions.
Let us determine the following independent variables of the model by each influencing factor (Table 1). Export quota (relation of region export volume to GRP) Import (Imp) х 7 Import quota (relation of region import volume to GRP) Specialization of region, agglomerative effects (Spec) х 9 P. Krugman's specialization index of regional economy х 10 Agglomerative effects for agriculture х 11 Agglomerative effects for extractive industry х 12 Agglomerative effects for manufacturing industry х 13 Agglomerative effects for services In view of the fact that dependence of social and economic inequality on the provided factors is not linear the model has the following form: where А, i are equation coefficients which will be obtained by the least squares method; it is a measurement error.
The provided model allows considering direct and indirect factors of globalization for social and economic inequality of the region in comparison with the other regions which means that it reveals the causes of region inequality.
Integration processes and external trade of Russia.
At the moment the integration processes affect all countries and regions of the world and Russia is not an exception. Transparency of the Russian economy rises with accession to the WTO, development of the Customs Union activity (Russia, Kazakhstan, and Belarus). We can determine that "documentarily" the closest integration of Russia takes place within the former Soviet Union territory: Eurasian Economic Union, CIS free trade area, the Union State of Russia and Belarus, EurAsEC, Single economic space of Russia, Belarus and Kazakhstan, Black Sea Economic Cooperation. As per the other trends it shall be noted that Russia has been a member of the Council of the Baltic Sea States since 1992, a full-fledged member of the Asia-Pacific Economic Cooperation forum (APEC) since 1998, a fullfledged partner of ASEAN since 1996 (not included into the group).
Note that within the Asian-Pacific Region Russia is also a full-fledged member of such organization as the Pacific Economic Cooperation Council (1991), Pacific Basin Countries Economic Committee (1995). Russia has a status of a "non-regional member" of the United Nations Economic and Social Commission for Asia and the Pacific (ES-CAP). Let us consider the dynamics of the Russia's external trade by the indicators of export and import with the CIS states and non-CIS states in 1995-2014 (Fig. 1).
Russia's export to the CIS states Russia's import to the CIS states Russia's export to the non-CIS states Russia's import to the non-CIS states The values of root-mean-square errors are given in brackets, *** is significance level 1 %; ** is significance level 5 %; * is significance level 10 %. The determination coefficient is 0.77. Let us consider independent variables of the model and obtained results to some detail. First of all the model includes the basic factors of production that is labor and capital. Labor is represented by two variables: х 1the region share in the country employment, х 2labor efficiency in the region. Both factors have positive effect on social and economic development of the region, in the model they are statistically significant. The value of physical capital is estimated through the indicators of the direct foreign investments in the region economy (х 3 ) and density of motor hard-surfaced roads (х 4 )as a reflection of the state of the regional infrastructure. It shall be noted that both indicators are statistically significant, but they negatively affect the region development.
In fact, direct foreign investments are not always a panacea for the economy, their nature (and this is mainly the investments in extractive industries) is indicative of a possibility of receipt of quick profits for the foreign companies without costs for any needs of the regions. Adverse effect of direct foreign investment on the regional economy can be related to the specificity of Russia. As for negative influence of the indicator of density of motor hard-surfaced roads there is also a peculiarity of the Russian regions. Considering the length of country territory, it is also necessary to estimate influence of railway, air and water transport infrastructure.
The value of intellectual capital is assessed in the model of variable of total export and import of the region technologies (х 5 ). Among the Russian regions for the analyzed period there are such regions which did not export and import technologies at all. At large export and import of technologies have a positive effect on social and economic development of the region, which is to be expected.
The factors of foreign economic activity are relation of export to GRP (х 6 ) and import to GRP (х 7 ). The results of modeling show that domination of export-oriented productions in the regional economy has positive effect on its development at large and high level of import in relation to GRP has an adverse effect.
The market access is an important factor of the regional development, key one in positions of new economic geography (х 8 ). Taken for its calculation was a distance along the roads from the region (center) to the nearest big city with population over a million. It is obvious that the more the distance is between the region and the "center" (significant large-scale market), the less effective is development of such region.
The level of specialization of its economy (х 9 ) assessed by the corresponding P. Krugman's index has a positive effect on social and economic development. It is also important to consider the sphere of the region specialization. To this end it is necessary to calculate agglomerative effects occurring in agriculture (х 10 ), in extractive industry (х 11 ) and manufacturing industry (х 12 ) as well as in the sphere of services (х 13 ). Agglomerative effects are calculated as a sector size (that is number of the region employment) multiplied by its specialization index. Among the Russia's regions 45 are specialized in one or another sphere of agriculture. The analysis results showed that such specializa-tion is not profitable for the regionsit does not lead to growth of efficiency of social and economic development. 23 regions of Russia are specialized in extractive industry. Such specialization has a positive effect on economic development. Earlier in one of the studies [15] it was proven that for the Russian regions deep specialization is profitable only when it takes place in extractive industries of the economy. Only 6 regions of Russia have the highest indicators of the specialization in manufacturing as well as high level of agglomerative effects. Presence of agglomerative effects in this case positively influences on social and economic development of the region at large. The last factor included into the model is agglomerative effects in services positively influences on social and economic inequality of the regions.
Conclusion.
Thus, among the factors provided in the model the following shall be distinguished: regional share in the country employment, labor efficiency, density of public motor hard-surface roads, export quota, distance along motor roads to big cities. These factors are significant at pursuing the policy of social and economic development of the Russian regions. Reference | 4,338.6 | 2015-08-26T00:00:00.000 | [
"Economics"
] |
Transport evidence for decoupled nematic and magnetic criticality in iron chalcogenides
Electronic nematicity in correlated metals often occurs alongside another instability such as magnetism. The question thus remains whether nematicity alone can drive unconventional superconductivity or anomalous (quantum critical) transport in such systems. In FeSe, nematicity emerges in isolation, providing a unique opportunity to address this question. Studies to date, however, have proved inconclusive; while signatures of nematic criticality are observed upon sulfur substitution, they appear to be quenched by the emergent magnetism under the application of pressure. Here, we study the temperature and pressure dependence of the low-temperature resistivity of FeSe1-xSx crystals at x values beyond the nematic quantum critical point. Two distinct components to the resistivity are revealed; one that is suppressed with increasing pressure and one that grows upon approaching the magnetic state at higher pressures. These findings hint that nematic and magnetic critical fluctuations in FeSe1-xSx are completely decoupled, in marked contrast to other Fe-based superconductors. The role nematicity and magnetic fluctuations play in the manifestation of unconventional superconductivity for Fe-based superconductors is actively debated with, so far, no clear consensus. Here, the authors study the resistive properties of sulfur-doped FeSe under applied pressure finding evidence of two distinct contributions to the electrical resistivity, which suggest a decoupling of nematic and magnetic fluctuations in this system.
A common characteristic of unconventional superconductors is their proximity to another ground state of broken symmetry, fluctuations of which can both mediate superconductivity and drive non-Fermi-liquid (nFL) behavior in the vicinity of its associated quantum critical (QC) point. Nematicity-a lowering of rotational symmetry without breaking translational symmetry-is one form of order that has been observed in a variety of systems, including iron-based 1-3 , cuprate 4 , heavy fermion 5 , and Moiré 6 superconductors. The extent to which nematic order and its fluctuations are responsible for pairing and QC phenomena has proved a challenging question, however, largely due to the fact that nematicity often occurs in the vicinity of another, possible primary, instability. In ironpnictides, for example, nematicity is claimed to be a spin-driven effect 7 while QC phenomena observed in Sr 3 Ru 2 O 7 -initially attributed to a nematic quantum critical point (NQCP) 8 -were later found to arise in the presence of a field-tuned spin-density wave 9 .
FeSe is unusual in that nematic order stabilizes in the absence of static magnetism 3 . Below a tetragonal-to-orthorhombic distortion at T s = 90 K, both its normal 10 and superconducting (SC) 11 state properties exhibit marked two-fold anisotropy. Although widely believed to be electronic in origin 12 , it remains unclear whether the nematic transition is driven by charge 13 , orbital 14 , or magnetic 15 correlations. Nevertheless, its discovery offers a unique opportunity to test theoretical predictions for nFL or "strange metallic" behavior arising solely from critical nematic fluctuations [16][17][18][19][20][21][22] . To this end, a large effort has been made to elucidate the respective roles of nematic and magnetic fluctuations in shaping the normal and SC properties of FeSe [23][24][25][26] .
High-pressure studies on FeSe have proved to be highly instructive in this pursuit. As pressure increases, T s is suppressed (to T s = 0 K at p = p c ) but the SC transition temperature T c is not enhanced at p c 27 . Beyond the nematic state (p > p c ), however, there is a marked (four-fold) increase in T c 27-29 that has been naturally linked to strengthening magnetic interactions 30 . The role of nematicity in driving nFL/QC phenomena has proved more controversial. At p = p c , the critical nematic fluctuations in FeSe are quenched 13 , presumably due to the emergence of longrange magnetic order before the nematic phase terminates 31 . In FeSe 1-x S x , nematicity is also suppressed with increasing sulfur substitution, vanishing at a critical S concentration x c = 0.17 32 where the nematic susceptibility also diverges 12 and quantum critical transport is observed 33 . Since no magnetic order develops at any point across the substitution series (at ambient pressure), this divergence suggests that a genuine NQCP exists in FeSe 1-x S x .
The question remains, however, whether the emergent critical nematic fluctuations are responsible for the strange metal transport seen at ambient pressure in FeSe 1-x S x 33-38 . Although static magnetism is not stabilized at ambient pressure, low-energy spinfluctuations, for example, are known to persist to p = 0 at low T and low x 39 . Moreover, quantum oscillation studies indicating a lack of divergence in the effective mass m* on approaching the NQCP 40 have led to the suggestion that the critical nematic fluctuations may also be quenched at ambient pressure-in this case, due to nemato-elastic coupling or local strain effects 41,42the nFL transport then being attributed to scattering of the residual spin fluctuations. To date, however, the full evolution of m*(x) from x = 0 to x > x c is only known for a single oscillation frequency 40 leaving open the question of whether or not mass enhancement occurs at other locations on the Fermi surface.
With increasing x, p c falls while p m , the onset pressure for magnetic order, increases 43 , leading ultimately to a separation of the nematic and magnetic phases in the (p, T) plane at higher x. Previous NMR measurements appeared to confirm such a separation at x = 0.12 (<x c ) 44 . Detailed transport studies 45 on pressurized FeSe 1-x S x with x = 0.11 then revealed the absence of nFL transport or m* enhancement across p c , supporting the picture of quenched nematic criticality due to strong nematoelastic coupling 45 . A more recent µSR study, however, found that magnetism at x = 0.11 is stabilized before nematicity is destroyed (the discrepancy between µSR and NMR likely reflects the different timescales of the two probes) 46 . Hence, it is unclear whether the suppression of nematic criticality near x = 0.12 under pressure is due to coupling to the lattice or to slowly fluctuating moments. In order to determine whether critical nematic fluctuations alone can drive nFL transport in FeSe 1-x S x , pressure studies on samples with higher x values, where the nematic and magnetic phases are fully separated, are required.
Here, we study the low-T resistivity ρ(T) of FeSe 1-x S x with x = 0.18 and 0.20 (>x c ) under applied pressures up to 15 kbar (<p m ). Whilst the form of ρ(T) cannot differentiate easily between nematic and magnetic fluctuations, tracking its evolution with p may reveal an approach to or a retreat from a QCP associated with either order parameter. In this way, their respective influences can be disentangled. For both samples studied here, we find two distinct T 2 components in ρ(T) (due to quasiparticlequasiparticle scattering) which extend over different T ranges and whose coefficients show contrasting p-dependencies. The term that grows with increasing p is attributed to the dressing of quasiparticles by critical magnetic fluctuations that strengthen upon approach to the magnetic QCP 43,47 . Its coefficient at ambient pressure, however, is found to be negligible. This implies that the source of the large and strongly x-dependent T 2 coefficient observed at ambient pressure is the scattering of quasiparticles that are dressed purely by the orbital nematic fluctuations. Finally, this coexistence of two distinct components to ρ(T) also suggests that, in contrast to what is observed in the iron-pnictides, the critical nematic and magnetic fluctuations in FeSe 1-x S x are completely decoupled.
Results
Nematic quantum critical resistivity. Figures 1a, b show, respectively, the zero-field ρ(0, T) (pale) and high-field ρ(35 T, T) curves for samples with nominal x values of 0.18 and 0.20 oriented H//I//ab at various pressures 0 ≤ p ≤ 14.4 kbar. The suppression of superconductivity by the magnetic field is apparent in all data sets. For T > T c , there is almost complete overlap between ρ(0, T) and ρ(35 T, T), confirming that the magnetoresistance in this field orientation is negligible beyond x c 33,36 , in marked contrast to the large magnetoresistance seen for H//c 35,36 . The broadening and structure of the superconducting transitions in ρ(0, T) is highly reproducible between subsequent cooldowns at different pressures, between samples of similar dopings 48 , and between measurements performed by different groups 47,49 indicating that non-hydrostaticity is unlikely to be playing a role here. We also note that the transitions sharpen again at higher pressures (~3 GPa) 43 suggesting that this behavior is in fact intrinsic.
The corresponding derivatives dρ/dT(35 T) of the high-field curves, shown in panels c and d of Fig. 1, reveal a systematic evolution of ρ(T) under applied pressure. To better orientate our discussion, we focus initially on the form of dρ/dT at ambient pressure. For T < 10 K, ρ(35 T, T) = ρ 0 + A t T 2 with A t coefficients that are determined by fitting the dρ/dT traces at the lowest temperatures to a straight line through the origin (black lines in Figs. 1c, d). The temperature range fitted to was chosen in order to avoid being influenced by the clear crossover in behavior that occurs at~10 K and the pressure-induced enhancement of superconductivity at the lowest temperatures. We argue below that A t reflects the total quasiparticle-quasiparticle scattering cross-section enhanced by both magnetic and nematic critical fluctuations. Above the T 2 regime, dρ/dT is essentially flat, implying that ρ(T) becomes T-linear (with coefficient B). Such a T 2 to T-linear crossover is characteristic of a metallic system in the vicinity of a QCP 1,50-53 .
Pressure-induced growth of a purely T 2 component. A notable change in the derivative plots with increasing p is the emergence of a finite linear slope in dρ/dT at higher temperatures, indicative of a second T 2 component that (i) coexists with the T-linear term, (ii) has a coefficient A′ that is around one order of magnitude smaller than A t , and (iii) extends over a much broader temperature range. A′ and B are determined by fitting the dρ/dT data between 20 and 40 K to another straight line (high-T black lines in Fig. 1c, d). The fitting range was increased to 25 to 40 K for x = 0.18 at the highest pressures to again avoid being influenced by the onset of superconductivity. Whilst this second T 2 component is most evident in the derivative data at high T, the expectation is, as for a correlated Fermi liquid, that it extends down to the lowest temperatures. In this way, A t is most naturally interpreted as the sum of two T 2 components, i.e., A t = A + A′; the first component persisting up to~10 K, the second component up to the highest temperature measured in our study (~40 K).
The p-dependence of coefficients A (A t ), B, and ρ 0 (the latter obtained by extrapolating fits of the low-T ρ(T) curves at 35 to 0 K) is shown in Fig. 2a-c, respectively. It is immediately apparent that the relative slopes of all three quantities are the same, indicating that their p-dependencies share a common origin. The p-dependence of A′ and T c is shown in Fig. 2d, e respectively. The strong anticorrelation of A′(p) with A(p) and B(p) indicates that its origin is distinct. It, therefore, appears that there are two distinct components: one that crosses from T 2 (with coefficient A) to T-linear (with coefficient B) and a second that remains purely T 2 up to at least 40 K (with coefficient A′).
Carrier density inferred from the residual resistivity. The drop in A (A t ), B, and ρ 0 with increasing pressure could signify either a reduction in scattering or an increase in the plasma frequency ω p 2 (i.e., n/m*), or some combination thereof. In the first scenario, the fall in A (A t ), B, and ρ 0 with increasing p (depicted in Fig. 2) would be attributed directly to a reduction in the dressing of quasiparticles by the relevant critical fluctuations. While this interpretation can support a typical quantum critical scenario in which A(p) (and perhaps ρ 0 ) drops as the system is tuned away from the NQCP, the scattering rate associated with the linear-in-T coefficient is not expected to decrease too. Indeed, the T-linear resistivity inside of the quantum critical fan in FeSe 1-x S x at ambient pressure has been shown to be governed by a dopingindependent scattering rate 1/τ that is tied to the Planckian limit, i.e., ħ/τ = ak B T with 1 ≤ a ≤ 2 33 .
In the second scenario, the change in all three coefficients can be ascribed wholly to an increase in n/m*. A sizeable increase in n with pressure has been deduced in both FeSe 53 and FeSe 0.89 S 0. 11 45 from quantum oscillation studies. Indeed, for x = 0.11, six of the eight observed oscillation frequencies (corresponding to the largest Fermi pockets) increase appreciably (50-75%) between 0 and 17 kbar 45 . To account for this, a rescaling factor ρ 0 (0)/ρ 0 (p) (dashed lines in Fig. 2c) can be found that assumes the decrease in In panels c, d, the black lines are straight-line fits to the lowest temperature data chosen to both avoid the crossover to T + T 2 behavior at~10 K and the onset of superconductivity at low temperatures and to the higher temperature data above 20 K from which the resistivity coefficients A t (A), B, and A′ have been deduced. The enhancement of superconductivity prevents A t from being determined at the highest applied pressures. The increasingly broad superconducting transitions manifest themselves as shallow peaks in the derivatives that are most visible in dρ/ dT(x = 0.18) above 9.8 kbar (panel c) but may influence the data at lower pressures as well. The fits at low-T have been forced through the origin. The small finite intercepts due to superconductivity at intermediate pressures are accounted for in the errors. ρ 0 reflects a change in carrier density (and not a reduction in enhancement from the NQCP). Figure 3a (Fig. 3b) shows the pdependence of A*, B* (A′*), the coefficients A, B, and A′ rescaled by multiplying each quantity by ρ 0 (0)/ρ 0 (p). As can be seen, the resultant A* and B* coefficients are either p-independent (for x = 0.18) or fall slightly (for x = 0.20) (note, however, the large error bars for the data at highest pressures). The near-constancy and magnitude of B*(p) is then consistent with the notion that the effective scattering rate remains at the Planckian bound with increasing pressure, in agreement with what had been found at ambient pressures 33 . Within a QC scenario, the near-constancy of A*(p) is also consistent with the fact that the extent of the (low-T) T 2 regime in both samples does not vary with p. This is consistent with pressure tuning parallel to the nematic phase boundary in the p-T plane as indicated in Fig. 3c. By contrast, at ambient pressure A t exhibits a marked decrease with increasing x beyond the NQCP (see Fig. 3d) while the temperature of the T 2 to Tlinear crossover increases as the system is tuned away from the NQCP by chemical substitution 33,35 .
Discussion
Irrespective of which scenario is the most appropriate, the marked increase in A′ (or in A′*) with pressure, in both samples, is a robust observation. The order of magnitude change in A′*, in particular, is even greater than that seen in A t * upon approach to the NQCP at ambient pressure (Fig. 3d) and comparable to that observed in other quantum critical systems with well-established magnetic QCPs 50,54,55 . Moreover, the fact that A′ is anti-correlated with A and B implies that the former has a distinct origin. The marked rise in A′ is consistent with an enhancement in the quasiparticlequasiparticle scattering cross-section upon approach to a second, distinct QCP. The absolute magnitude of A′ over our experimental pressure range (~5 nΩ cm K −2 ), however, is much smaller than the value that A t reaches (>200 nΩ cm K −2 ) 33,35 upon approaching the ambient pressure NQCP, as shown in Fig. 3d. This, coupled with the more extended temperature range over which this T 2 term persists, suggests that the second QCP is likely to be situated at a critical pressure far beyond those accessible here. As illustrated in Fig. 2e, the approach to the second QCP also coincides with a marked (factor of 2) growth in T c for both samples, the growth in A′ and T c being largest for x = 0.18. As mentioned in the introduction, a marked increase in T c with pressure at lower sulfur concentrations has been linked previously to strengthening magnetic interaction 30 . Indeed, it has been suggested that T c is maximized at the magnetic QCP 43 and it is known that magnetism is stabilized at higher pressures 43,53 . Although there have been no reports to date confirming the presence of magnetic order with increasing pressure beyond x c = 0.17, resistivity data presented in Matsuura et al. 43 .
show that the magnetic ordering temperature at 5 GPa remains doping independent up to x c . Thus, it seems reasonable to expect magnetic order to be stabilized under pressure beyond x c and we associate this second QCP with the pressure-induced antiferromagnetic phase, and ascribe the p-dependence of the second T 2 component in ρ(T) to quasiparticle-quasiparticle dressing by critical spin fluctuations in the quantum disordered regime. Fig. 2 Pressure dependence of the resistivity coefficients and superconductivity. a Pressure p-dependence of the low-temperature T 2 coefficient A t (circles) obtained from linear fits of the derivative of the resistivity dρ/dT below 10 K (black lines in Fig. 1c, d). Also shown are the coefficients A = A t -A′ (diamonds), the component of A t attributed to electron-electron scattering dressed by critical nematic fluctuations. Dashed lines are linear fits to the data. The strengthening superconductivity prevents A t (and A) from being determined at the highest applied pressures. b Pressure dependence of the T-linear coefficient B was obtained by fitting dρ/dT measured between 20 and 40 K to a straight line. Dashed lines are linear fits to the data. c Pressure dependence of the residual resistivity ρ 0 was obtained by extrapolating the low-T ρ(T) curves at 35 T to 0 K. Values were only obtained up to the pressures at which superconducting fluctuations do not influence ρ(T). The dashed lines are extrapolations of straight-line fits to the data points. d Pressure dependence of the high-T T 2 coefficient A′ as obtained from straight-line fits to dρ/dT at 35 T and above 20 K. e Pressure dependence of the superconducting transition temperature T c defined as the temperature at which the zero-field resistivity reaches 90% of its value at 35 T. T c in both samples exhibits an enhancement by a factor of around two. The error bars in panels a-d are reflective of the variation of the obtained coefficients to details of the fitting procedure (principally the precise choice of temperature range being fitted to). We estimate there to be an additional 30-50% systematic error due to uncertainty in sample and contact geometry. The error in the obtained values in panel e are within the size of the data points.
Of course, there are other scattering mechanisms that are capable of generating T 2 resistivity with a variable coefficient, such as non-critical electron-electron scattering near a Mott metal-insulator transition 56 , electron-phonon scattering in disordered systems 57 or short-range spin fluctuation scattering 58 . However, the order-of-magnitude increase in A′* over a relatively narrow pressure range is difficult to reconcile with any of these mechanisms. One would need to invoke a pressure-induced suppression of disorder by one order of magnitude for electron-phonon scattering to be sufficient 57 , there is no evidence for Mottness and while spin fluctuations are found to be pressure independent in pure FeSe 39 , they become suppressed with pressure up to 2 GPa (the pressure range of this study) at x = 0.12 59 . Clearly, further studies will be required to definitively rule out these alternative explanations. However, given the known emergence of a magnetic phase boundary in FeSe 1-x S x at higher pressures as well as a precedent for magnetic quantum criticality in other Fe-based 55 or heavy fermion 51,54 systems manifesting in a divergent T 2 coefficient of the low-T resistivity, a magnetic QCP seems the most plausible.
These contrasting x-and p-dependencies (A t (x) and A*(p)) may be reconciled by considering the proposed T = 0 phase diagram shown schematically in Fig. 3c. The vertical solid-and open-headed arrows represent, respectively, the pressure tuning of the x = 0.18 and 0.20 samples, while the horizontal arrow represents tuning away from the NQCP with increasing x at ambient pressure. The near-constancy of A* (within the second scenario above) may indicate that p c (x)-the phase boundary for nematic order in the (p, x) plane-is very steep near x = x c . This seems plausible given the steepness of T s (x) near x c -see Fig. 1a in M. Čulo et al. 38 , for example. Consequently, with increasing p, samples with x > x c track effectively parallel to the nematic phase boundary, rather than away from it. At the same time, the application of pressure tunes each sample towards p m (x)-the magnetic phase boundary-resulting in a marked increase in A′. In this way, the contrasting variation in A(p) and A′(p) can be understood. The steepness of the p c (x) boundary might also indicate a crossover in the nematic phase transition from second-order to weakly first-order near x = x c . Such a crossover, intimated in Fig. 3c by the dashed nematic phase boundary, would lead to a cutoff in the nematic fluctuations, thereby providing an alternative explanation for the p-independence of A* and B*. It is noted that in pure FeSe, T s (p) terminates at a first-order structural and magnetic phase transition at~2 GPa (a divergence of 1/T 1 T at low T is lost) 60 . The T = 0 endpoint of the magnetic transition, however, appears to remain second-order 60 . Thus one anticipates that the magnetic phase boundary at the higher dopings measured in this study is also second-order and capable of hosting a QCP.
The presence of two anti-correlated but additive T 2 components in the low-T resistivity is unusual but implies the presence of two independent scattering channels of distinct origin. Given the correlation between A′ and T c at finite pressure and the anticorrelation between A′ and A, it seems very unlikely that spin fluctuations could be responsible for both. Indeed, while measurements of the spin-lattice relaxation rate in FeSe 1-x S x at ambient pressure indicate the emergence of low-lying spin fluctuations below T s , spin fluctuations are strongly suppressed for x > x c 39 . Moreover, as mentioned above, there is no evidence that such fluctuations go critical at x = x c . It would appear that spin fluctuations, as parameterized by A′ (~A t /10), play only a minor role in the overall low-T resistivity in FeSe 1-x S x at ambient pressure.
The measurements presented here imply that the nematic fluctuations anchored at the NQCP and the magnetic fluctuations anchored at the AFM QCP act as decoupled mechanisms for the enhancement of quasiparticle-quasiparticle scattering over most of the phase diagram of FeSe 1-x S x . One possible way to account for their distinct nature is to consider the particular Fermi surface topology of FeSe 1-x S x . Figure 4a shows a schematic projection of the Fermi surface of FeSe 1-x S x (x > x c ) at k z = 0 assuming only one hole pocket centered at Γ and two-electron pockets at X and Y. Since spin fluctuations in detwinned FeSe are peaked at Q = (π, 0) 15 , we also assume that in the tetragonal phase, critical spin fluctuations would enhance the quasiparticle-quasiparticle scattering cross-section predominantly at four 'hot-spots', as shown . Near x = x c , the nematic phase boundary is shown as a dashed line to reflect its putative weak first-order nature. d Variation of A t * , the total low-temperature T 2 coefficient at ambient pressure rescaled by the relative growth in the carrier density, and x beyond the nematic quantum critical point near x c~0 .17 (red dotted line) with data from this work and literature sources 33,36,37 . The error bars represent the estimated uncertainty in both x and the reported coefficients. We have estimated the error in the values reported in this work to be 50% due to the constraints on sample size in a pressure cell and 30% elsewhere. The uncertainty in x has been assumed to be ±0.015 and representative of the typical variation in x within an individual batch of samples 12 . See Supplementary Note 1 for details of the rescaling procedure. The dashed line is a guide to the eye.
in Fig. 4b. The precise symmetry of the nematic fluctuations in FeSe 1-x S x has not yet been confirmed. Raman spectroscopy studies have indicated the presence of a d-wave Pomeranchuk instability 61,62 while quasiparticle scattering interference experiments 63 have revealed a highly anisotropic spectral weight (of different orbital character) on both pockets with p-wave symmetry (lightly shaded sections in Fig. 4b). For the former, critical nematic fluctuations would dress the quasiparticle states everywhere except at the AFM hot-spots (the nodes of the d-wave Pomeranchuk deformation), while for the latter, these cold-spots would reside at the "bellies" of each pocket. Such considerations might then help us to envisage how the influence of the critical nematic or magnetic fluctuations manifests themselves as two distinct components of the T 2 resistivity. Intriguingly, the inplane magnetoresistance of FeSe 1-x S x (at ambient pressure) can also be decomposed into two components 35 ; a QC component that exhibits H/T scaling and is maximal near the NQCP and a second component that remains purely H 2 (up to 35 T) and shows conventional Kohler's scaling. It is tempting to attribute these two components as arising from these distinct nematic and spin interactions, only one of which goes critical at ambient pressure.
Finally, we turn to consider the evolution of the superconductivity in FeSe 1-x S x . While there is strong evidence to suggest that low-energy spin-fluctuations play a significant role in the pairing mechanism in FeSe 1-x S x 23-26 , the increase in T c (p) appears to be well correlated with A′(p) (panels d, e of Fig. 2), it is striking that A′~A t /10 at ambient pressure yet T c remains high (~8 K). This finding may suggest some role for nematicity in the pairing in FeSe 1-x S x but clearly, further work is required to confirm this. In pnictide superconductors, where nematicity and magnetism are strongly coupled, superconductivity is most likely driven by low-energy spin fluctuations, though T c could be enhanced by a reduction in the bare intra-pocket repulsion brought about by the nematic fluctuations 7 . In the case of FeSe 1-x S x , the decoupling of the nematic and the magnetic fluctuations means that this cooperative process is no longer viable and as a result, T c is not enhanced at the NQCP.
Previously, pressure tuning between two distinct QCPs was reported in the heavy-fermion compounds Ge-doped CeCu 2 Si 2 64,65 and YbRh 2 Si 2 with Ir and Co doping 66 . To the best of our knowledge, however, FeSe 1-x S x represents the first example of a correlated metal exhibiting an enhancement in the coefficient of the T 2 resistivity associated with two distinct QCPs. Clearly, the task is now to determine the universality classes associated with each criticality. In order to achieve this, however, it will be necessary to study a sample with a sulfur concentration even closer to the NQCP and to extend the pressure range (e.g., using an anvil cell) until the magnetic QCP itself is crossed. At the same time, determination of the evolution of complementary resistive properties (such as the Hall effect) with pressure may help elucidate further the nature of the two components.
Methods
Samples. Single crystals were grown via a KCl/AlCl 3 chemical vapor transport method. The growth was typically performed with a source temperature of 420 o C, a deposition zone temperature of 230 o C, and with a growth time of 200 h. Their nominal dopings are x = 0.18 and 0.20. The actual S content of crystals can often be lower than the nominal value 12 . For both of our samples, however, the zero-field ρ(T) curves (at ambient pressure) are found to agree well with previous reports on samples with similar dopings 33,36,37 . Specifically, there is no kink or minimum in the derivative dρ=dT that could be attributed to a finite T s , and the T 2 regime at low-T extends up to around 8-10 K with a coefficient A t~4 0-55 nΩ cm K −2 , compared with >200 nΩ cm K −2 for x ≤ 0.17 33,36 . In this work, there is heightened geometrical uncertainty associated with measuring small crystals inside a pressure cell. Whilst, the as-measured A t values are~25% lower than previous reports at the same nominal doping levels, as is evident from Fig. 3d, the values obtained are in good agreement with the general trend of A t (x) with data taken from multiple groups (see Supplementary Note 1 for details).
Resistivity measurement under pressure. Resistivity was measured using a standard ac lock-in technique. Electrical contact was made to the samples by first masking the samples and sputtering gold pads. Contact to the pads was made using gold wire and DuPont 4929 silver paint. Typical contact resistances were less than 1 Ω and stable over time. Both crystals were mounted together in a single pistoncylinder pressure cell and oriented such that H // I // ab. Daphne 7373, which is known to remain hydrostatic at room temperature up to 22 kbar 67 , was used as a pressure transmitting medium. Resistivity measurements were performed using a standard four-point ac lock-in technique in Cell 4 of the High Field Magnet Laboratory (Radboud University, Nijmegen, The Netherlands) where a maximum magnetic field of 35 T could be applied. Temperature sweeps were performed in both field orientations (positive and negative 35 T) such that the longitudinal component could be isolated from any Hall component present due to an offset in the voltage contacts (though it is noted that the Hall contribution was found to be a near-negligible part of the total signal).
Data availability
The data that support the plots within this paper and other findings of this study are available from the University of Bristol data repository, data.bris, at https://doi.org/10. 5523/bris.3spp0cgrmsam924e0xirqcikhf.
Received: 9 August 2021; Accepted: 10 February 2022; Fig. 4 Decoupling of the nematic and magnetic interactions and the Fermi surface of FeSe 1-x S x . a Schematic Fermi surface of FeSe 1-x S x outside of the nematic phase showing the Γ-centered hole pocket (α) and X, Y-centered electron pockets (ε and δ) at k z = 0. States on different pockets can be connected via finite-Q scattering as indicated by the gray arrows. b Schematic illustrating the distinct regions of quasiparticle dressing due to critical magnetic fluctuations (grey circles) arising from the translation of the pockets through Q = (π, 0), (0, π), and nematic (Pomeranchuk) fluctuations (lighter shaded regions on the electron/hole pockets where the quasiparticle spectral weight is reduced 63 ). | 7,463.4 | 2022-04-22T00:00:00.000 | [
"Physics"
] |
Characterization of Polyethylene Carrying Bags Before and After Isothermal Oxidative Aging in an Oven
Utility of polymeric material is a major contribution to the production of waste, particularly in Pakistan. An easy escape to it is the damping in the land which is not commendable for an environmental point of view. On the other hand, the aging of polymer is analogous to its burial conditions under the soil in the absence of light. Therefore, in this research report, two different brands of polyethylene carrying bags were investigated. One sample was obtained from Pakistan abbreviated as sample "Y" while the other from Canada abbreviated as "E". In order to accelerate the degradation process and to observe the impact of aging in a shorter span of time, these samples were heated at an elevated temperature (80°C) in an oven for the period of 20 days. The samples were characterized before and after aging with an interval of 2 days by applying different techniques like FT-IR, SEM, DSC, and thermogravimetric analysis (TGA). Carbonyl peak at 1715 cm was observed only in the case of sample "E" displaying carbonyl index value as 28.45 % after 20 days of aging. The SEM images before and after aging revealed that the degradation took place at preferential sites in case of sample "Y" and at numerous sites in case of sample "E". The results of percent crystallinity obtained by DSC showed an increasing pattern with aging for both the samples and was high in case of sample "E." The activation energy determined by using Flynn-Wall-Ozawa showed a decreasing pattern for both the samples with aging. It concluded that the thermal aging initiates the process of degradation which was then accelerated by heating in TGA oven. The order of reaction was slightly decreased after aging for both the samples and was found to be independent of the heating rate.
1.Introduction
The worldwide production of polymeric material is increasing with the passage of time due to its versatile properties and reached up to 322 Metric tons in 2015 [1]. On the other hand, their waste management has become one of the most concern issues-particularly in developing countries like Pakistan due to the long life of such material, population boom, and not having proper management and recycling facilities. Moreover, scientific development has provided several facilities to human beings for their ease, which may ultimately give birth to several issues like polymeric packing material and shopping bags. As such material has quite a long life and is not biodegradable, hence a noticeable amount of plastic waste is accumulated in the cities with time. Among this polyethylene is one of the most commonly used polymers (about 60%) in modern society [2]. Unfortunately, its recycling is neither always easy nor profitable, hence a significant quantity of its waste is dumped inland and in streets [3]. The literature reveals that about 65% of waste is land-filled, 25% is reused and 10% is recycled both chemically and mechanically [4]. This solid waste, which is even more than 65% in developing countries, is generating robust health and environmental impacts. Therefore, the scientists are trying hard to explore various ways and means to make use of this waste by recycling it, which is considered to be much beneficial method among all as one can get the product which can be useful. These can either be gas, liquid or solid or some petroleum products. However, to design a technology the information regarding the mechanism, thermodynamics and kinetics of the process are required [5]. The scientists are working over it by investigating the aging/degradation process under various conditions like aging under accelerated conditions [6,7], the impact of water over aging etc. [8]. However, increasing the number of parameters complicates the mechanism of degradation and becomes difficult to understand and conclude. Therefore, we have planned to thermally degrade the polyethylene shopping bags samples under air by heating the samples in the oven and characterized them by using various latest techniques. The kinetics of degradation before and after thermal aging was investigated by using thermogravimetric analysis.
Theoretical background
Generally, the decomposition reaction of the polymeric materials can be expressed as: Here, the symbols A, B, and C, represent the "initial", "residue" and "gaseous" materials, respectively. The data obtained from the thermogravimetric analysis can be utilized for the kinetic study and the "degree of conversion, α". The degree of conversion is defined as: Where "" is "degree of conversion", "Wo" is "initial weight", "Wt" is the "weight of sample available at time t" and "W∞" represents the "final weight of the sample".
The kinetic process may be expressed by a typical model: Where, "d/dt" , "k", and "f()" denote the "decomposition rate", "decomposition rate constant" and the "differential expression", respectively for a "kinetic model function".
Combining equations (3 and 4) gives Considering the variation in temperature is "β" (=dT/dt), a steady heating rate then we get: Therefore, equation (6) is the very basic equation for the investigation of the kinetics of degradation of material by using the results one gets from the thermogravimetric analysis [9]. A number of methods have been presented for the determination of the energy of activation of thermal degradation of polymers, however, we applied the Flynn-Wall-Ozawa method which is a wellaccepted method for the purpose and is an integral one [10][11][12]. The Flynn-Wall-Ozawa method is also entitled as a model-free method as it is based on the hypothesis that the rate of thermal degradation reaction depends on the temperature only for a particular degree of conversion and hence, it is considered as the most authentic method in this regard [13]. The activation energy "Ea" can be determined by using the Flynn-Wall-Ozawa method without bothering about "order of reaction". It can be expressed in terms of equation (7) = ( ) − . − .
Where "A" and "R" are constants for a particular conversion, "g ()".The activation energy (Ea) can be determined from the slope of the curve obtained from the plots of "log β" vs "1/T" at different heating rates for any particular "degree of conversion" ().
Determination of order of reaction
The "order of reaction," "n" and the "pre-exponential factor" "A" was obtained using equation (8) The "n" and "A" was obtained by plotting [ ]versus ln(1-α) in which slope gave the value of "n" and intercept was equal to ln A.
Materials
Two samples of polyethylene carrying bags of different brands were analyzed. Sample "Y" was made in Pakistan, which was widely used for grocery/carrying purposes. It was purchased from the local market sold under the brand name of "Special Yaadgar." It was having a light green color and was in the form of a film having a size of 25.40 x 33.02 cm. Its thickness was measured which was 0.01 mm. This sample was named as sample "Y" and "Yd" after oven aging. Here d stands for no of days for which it was aged in the oven. Sample "E" was supplied by "Econogreen Plastics," Canada. According to the producer, the sample was made from 100 % recycled plastic and was 100 % recyclable material and will be itself completely degraded in 2 years. It was further claimed that these bags were oxo-degradable and contained a unique agent that helps in breaking down carbon-carbon bonds in the material and reduces its strength when exposed to oxygen. These polyethylene bags were in the form of a film having black color, and its capacity was 127 L. Its thickness was 0.03 mm which was slightly greater than that of sample "Y". According to the requirement of analysis, this sample was abbreviated as "E" and "Ed" before and after aging (d= no of days) in the oven, respectively. The exact composition of both of these polyethylene samples was not disclosed by the suppliers, being a business secret and were used as received.
Oven aging
The polyethylene films of sample "Y" and "E" were cut into fine strips of dimensions 38x13 mm with the help of a blade. These strips were placed in Petri dishes without any lid and were kept in an electric oven in the air atmosphere. The Electric Oven used for the purpose was 5890A GC, Hewlett Packard, made in the USA. The temperature of the oven was raised at the heating rate of 20 °C/min from ambient to 80 °C and was kept constant for 20 days. The samples were subjected to characterization using various techniques after 20 days of oven aging with an interval of 2 days.
FT Infrared spectroscopic measurements
Tensor 27, FT-IR spectrophotometer supplied by Bruker, Germany, was used to measure the IR spectra of the samples. The interpretation of the spectra was made by using the software OPUS Version 4.2 Build.
Carbonyl index measurement
For this purpose, the same IR instrument was used. IR is considered to be very susceptible to chemical changes that may have taken place during the exposure to thermal aging [14]. The rate of formation of carbonyl groups of the samples during thermal treatment was calculated in terms of carbonyl index (CI), which gave a numerical value and estimation of the degree of oxidation for each polyethylene sample [15]. For the calculation of the carbonyl index, the area of the peak at 1715 cm -1 and a reference peak at 2923 cm -1 was considered using equation (9) [16,17].
Scanning electron microscopic measurement
For the investigation of the morphology of the samples, Carl Zeiss LEO 1530 scanning electron microscope, made in Germany, was used. The Gemini field emission columns (FESEM) and EDX/OIM PV9715/69 ME were also coupled with this microscope. The sample films after aging were dried and sputtered with a gold coating having a thickness of approximately 10 nm. During this procedure, Argon was used, being an inert gas. These sample films were fixed on an aluminum stub with the help of double-sided conductive tape.
Measurement of oxidative induction time
The OIT (oxidative induction time) was measured by using Differential Scanning Calorimeter, DSC Q2000 manufactured by TA Instruments. made in Canada. Throughout the procedure, ASTM No D-3895-07 was strictly followed. For the interpretation of data "Universal Analysis 2000 (TA Instruments, Version 4.5A Build 4.5.0.5)" software was used provided with the instrument. For this analysis, first of all, the sample films were converted into sheet formats (thickness of 200 ±15µm) by using conversion-molding. In order to get a required sample size, these sheets were cut into specimen disks with the help of a punch machine having a diameter of approximately 6.4 mm. The specimen disk of sample Y was placed in the DSC's sample container compartment without covering with the lid. The samples were heated with a heating rate of 20°C/min from 30 to 200°C (set-point temperature) under a nitrogen atmosphere with a constant flow rate of 50 mL/min. The heating was discontinued at the set-point temperature and the sample was allowed to equilibrate at this temperature for 5 min. After 5 min, the ambient atmosphere was shifted from nitrogen gas to oxygen gas with the same flow rate. The change-over point to oxygen gas was termed as zero time and isothermal heating (200 °C) was continued until an exotherm was observed. https://doi.org /10.37358/Rev. Chim.1949
Percent crystallinity measurement
The percent crystallinity of both the samples was determined before and after aging by using the same "DSC Q2000". The samples were heated at from 30 to 200 °C heating rate of 10 °C/min and were kept as an isothermal mode for five minutes and then cooled at the same rate from 200 to 30 °C. During this investigation, nitrogen gas was used as being inert in nature at a constant flow rate of 50 ml/min. The percent crystallinity was determined from the DSC curves by using the following relation: Here "∆H" is the enthalpy of fusion of the sample obtained from DSC results. The "∆Ho" was the enthalpy of fusion of polyethylene in 100 percent crystalline state and taken as equivalent to 290 J/g for polyethylene samples [18][19][20].
Thermogravimetric analysis
Thermogravimetric analysis (TGA) was carried out by using a Q500 TGA instrument supplied by TA Instruments, Canada. The samples were placed in the furnace using platinum pans of the TGA instrument, and nitrogen gas were pumped at a constant flow rate (50 ml/min) to have inert atmosphere and heated at various heating rates (5, 10, 15, and 20°C/min). The activation energy (Ea) of thermal degradation of the samples was obtained by employing the Flynn-Wall-Ozawa method [10][11][12][13] while the order of reaction "n" and the pre-exponential factor "A" was calculated by applying equation (8) [9].
FT Infrared spectroscopic analysis
The IR spectra of sample "Y" before and "Y20" after aging in the oven for 20 days almost overlapped upon each other ( Figure 1).The major characteristic peaks of polyethylene were observed at 2920, 2850, 1463 and 720 cm -1 , which were assigned to -C-H asymmetric stretching, symmetric stretching, -C-H bending, and -C-C rocking, respectively. Another small peak was observed for both the samples at 2369 cm -1 which might be due to the absorption of CO2 in this region. It has been reported that the FTIR spectra (from 1400 to 1300 cm -1 ) can be exploited to differentiate the polymer sample from low to high-density polyethylene [21]. This technique was applied to both the samples before aging concluding that the sample "Y" was "HDPE" and sample "E" was "LLDPE". The IR spectra of sample "Y20" concluded that the aging process had no effect in this region (1400 to 1300 cm -1 ); whereas the previous study revealed that it was changed from HDPE to LLDPE if it was exposed to accelerated weathering conditions in which UV radiations and humidity/ water contents were additional constraints [6].
The IR spectra of sample "E" and "E20" are depicted in Figure 2 showing the same characteristics peaks of polyethylene and CO2 along with the emergence of two new peaks at1170 and at 1715 cm -1 . The C-O-C stretching was observed at 1170 cm -1 and formation of carbonyl groups during the aging process as indicated by the peak observed at 1715 cm -1 . The sample films were analyzed by FT-IR after aging with an interval of 2 days and no carbonyl peak was observed even up to 12 days of aging. However, after 14 days a small carbonyl peak was observed whose intensity was gradually increased with the aging time. While after 20 days, its intensity was significantly enhanced. The carbonyl index was calculated by using equation (9). The value of the carbonyl index for sample "E" came out as 28.45 % which was quite significant. The formation of carbonyl species during the aging of polymer samples concluded the oxidation of polymer during processes. It has been reported that aldehyde and ketone carbonyl groups are commonly formed during the thermal aging process which is significantly important for further degradation of polymers [22]. Further, the IR spectra of sample "E20" was magnified in the range of 1400 to 1300 cm -1 and it was concluded that the "E20" sample was "LLDPE" https://doi.org /10.37358/Rev. Chim.1949 Rev. Chim., 71 (3) as no changes were observed after aging. Keeping in view the carbonyl index and IR spectra of both the samples before and after aging, it was concluded that the aging process introduced some chemical modification in the sample "E" and the formation of oxygenated products was formed. This observation was in accord with the literature and that the oxidation rate of LLDPE was slightly higher than that of HDPE, despite the thickness of the samples film which may limit oxygen diffusion [23][24][25]. Figure 1. IR spectra of sample "Y" (before) and "Ya20" (after aging for 20 days in an oven).
It was further presumed that the additives in the sample "E" were more susceptible to thermooxidative degradation while in the case of sample "Y" these were thermo-stabilizer in nature as sometimes shelf life is more important than anything else [26]. It is well recognized that if metals are added to polyolefin then the polymers can be easily thermally oxidized. For example, Manganese (Mn) is a suitable metal for pro-oxidant activity. Further to it, on thermal treatment of polymers in the presence of oxygen free radicals are produced which can further oxidize the polymer. Such a phenomenon results in variation in physical and mechanical properties [27]. It is also expected that this phenomenon may result in the formation of -COOH, -OH, C=O groups [28].
Scanning electron microscopic Analysis
The film of both the samples was taken out from the oven after an interval of 2 days and analyzed by scanning electron microscopy. The SEM micrographs did not show any detectable change in the morphology up to 12 days of aging for sample "Y". However, after 14 days of aging the wrinkles and etching process were noted (Figure 3a). The SEM images of sample "Y20" aged for 20 days indicated that the wrinkles became deep and converted to cracks, and ultimately the film was torn up at some specific sites (Figure 3b).
Similar morphological changes were observed in the sample "E14", aged for 14 days in the form of small cracks (Figure 4a). The SEM images of sample "Ea18", aged for 18 days demonstrated the initiation of degradation process (Figure 4b). Figure 4b displayed that the film was swollen up in the form of big flakes. It may be due to the bursting of additives because of prolonged exposure to heat and oxygen. The SEM image of sample "E20", aged for 20 days highlighted the formation of grooves, pitch, cracks, and flakes at numerous sites, which ultimately lead to the tearing up of the film ( Figure 4c). https://doi.org/10.37358/RC.20.3.8029 Figure 2. IR spectra of sample "E" (before) and "Ea20" (after aging for 20 days in an oven). T he emergence of carbonyl peak at 1715 cm -1 can be observed. SEM images concluded that the aging in the oven had less effect on sample "Y" as compared to sample "E. In case of sample "E", the oven aging showed a drastic effect by instigating the breaking of the film at numerous sites. The main factors influencing the process of degradation were the presence of pro-oxidant [23]. These observations were also supported by the results obtained through IR analysis.
Oxidative induction time
The oxidative induction time (OIT) of the material under investigation was measured prior to aging with the help of the DSC instrument. The oxidative induction time was 41 minutes and 5 minutes for sample "Y" and "E," respectively, concluding that sample "Y" was thermally more stable and contained a higher amount of antioxidants as compared to sample "E" [29].
Percent crystallinity measurement
Both the samples were analyzed before and after aging for percent crystallinity by using DSC. The 1st cycle of the temperature scan was considered as it enclosed the thermal history of the sample [30].The percent crystallinity of both the samples determined by applying equation (10). The values obtained prior to aging was49 % and 31 % for sample "Y" and "E", respectively. For the investigation of the impact of aging over percent crystallinity of the samples, the samples were taken out from the oven after an interval of two days and were analyzed by DSC and percent crystallinity was determined. It was noted that there was no significant change in the value up to 12 days of aging and was slowly increased after 14 days of aging and obtained the value 52 % and 43 % for sample "Y20" and "E20", respectively after 20 days of aging. The DSC curve of the sample "Y20" is shown in Figure 5. However, it was 63 % for sample "Y" and 47 % for sample "E" on their exposure to UV light, humidity and high temperature simultaneously, using the QUV chamber for 20 days [6]. It was concluded that these parameters had a significant impact as compared to oven aging. The increase in percent crystallinity was higher for sample "E" as compared to sample "Y". The reason behind the increase in the value of percent crystallinity may be attributed to the chain scission and also because of the combination of small entangled molecular fragments due to which the re-crystallization process may have taken place [31][32][33][34]. The melting point of sample "E" was decreased from 122.14 °C to 119.95 °C and that of sample "Y" remained constant at 127.02 °C after the aging of 20 days. The results of OIT, C.I, and SEM images concluded that the additives in the sample "E" were more vulnerable to thermo-oxidative degradation. https://doi.org /10.37358/Rev. Chim.1949 Rev. Chim., 71 (3
Thermogravimetric analysis
The thermogram of the sample "Y" before and after aging for 20 days was recorded at various (5, 10, 15 and 20 °C/min) heating rates showed a single DTG peak, indicating that the degradation was a single step process irrespective of aging ( Figure 6 (a = before aging, b = after aging of 20 days)). The onset temperature "To" for 5 °C/min heating was 427.80 °C prior to aging and 413.16 °C after 20 days of aging. The same decreasing trend was observed (460.90 to 413.57 °C) for the high heating rate (20 °C/min). As "To" is considered to be an indicator of thermal stability so a decrease in "To" with aging concluded a decrease in thermal stability of these samples due to prolonged heating and particularly in the air atmosphere [35].The thermogram of sample "E20" obtained at these four heating rates displayed also single DTG peak indicating a single step degradation mechanism as it was observed prior to aging. For instance, the thermogravimetric curves of sample "E" at a heating rate of 5°C/min are depicted in Figure 7 (a = before aging, b = after aging of 20 days). The onset temperature "To" calculated at 5 °C/min heating rate was decreased from 414.58°C to 402.86 °C due to aging. Similarly, the peak temperature "Tp" was also decreased from 465.38 to 457.81 °Cat the same heating rate due to aging. The broadening of DTG peak, lowering of "To" and "Tp" can be due to aging, which may have converted high molecular weight polymer material into smaller molecular fragments as these parameters are the molecular weight of polymer dependent [36]. Heating above 200 °C may lead to chain scission, and the nature of the product may be dependent on the impurities, presence of unsaturated sites and head-to-head units, etc. [37]. Further, the Poly-olefins may be susceptible to thermal oxidation and the impurity generated therein [38]. Keeping in view these results, it has been concluded that the thermal degradation mechanism of polyethylene under the nitrogen atmosphere before and after aging in an oven proceeded through a single step degradation. It has also been observed that the thickness of the film has a significant impact on the degradation during the thermogravimetric analysis as perceived by others [35]. It was reported that the high heating rate (20 °C/min) has little effect over the shifting of onset temperature that may be due to lagging behind the sample temperature as compared to the furnace temperature [39]. It has been further concluded that the onset temperature (To) played a key role in accessing the stability against the heat and aging phenomenon of the materials and that the aging has a pronounced effect on sample "E" as compared to sample "Y".
The activation energy of thermal degradation
The thermograms obtained at 5, 10, 15 & 20 °C/min heating rate under the atmosphere of nitrogen before and after aging of samples "Y" and "E" were utilized and activation energy (Ea) of thermal degradation was obtained by applying the Flynn-Wall-Ozawa method [10][11][12][13]. It is among the basic methods employed for the calculation of "Ea" without the knowledge of the order of the reaction. The "Ea" was obtained from d(log β)/d(1/T) obtained by plotting "log β" versus "1/T" (T = The corresponding temperature of each specific degree of conversion) (equation 7) for different heating rates (β) for any particular degree of conversion (α). The "Ea" obtained in this way for sample "Y" and "E" before and after aging (TGA performed under nitrogen) is displayed in Table 1. The "Ea" of sample "Y" before aging was 158 (kJ/mol) at α = 0.1 and was gradually increased up to 198 (kJ/mol) with the increase in the degree of conversion up to α = 0.9. The mean value of "Ea" was obtained as 179 (kJ/mol) as reported in the literature [6]. It is also reported in the literature, that the value of "Ea" for polyethylene gradually increases from 150 to 240 (kJ/mol) with the increase in the value of the degree of conversion (α) [40].
It was noted that after aging, the "Ea" was very low at α = 0.1(65 kJ/mol) and increased up to 186 (kJ/mol) at α = 0.4. The "Ea" became almost constant (177 kJ/mol) at α= 0.5 to 0.6.and then showed a decreasing trend from α = 0.7 to 0.9. The mean value of "Ea" was equal to 153 (kJ/mol) which was less than prior aging it may be due to change in the mechanism of degradation due to heating for longer and at high temperature [35].
The "Ea" of sample "E" before aging was 357 (kJ/mol) at α = 0.1 and was abruptly increased to 649 (kJ/mol) at α = 0.2. It was then decreased from 549 to 397 (kJ/mol) when α was varied from 0.3 to 0.9. It was observed that the mean values of "Ea" for sample "E" was relatively high(424 (kJ/mol)) [6]. The "Ea" of sample "E" after aging was very low (81 kJ/mol)at α = 0.1and was increased with the increase in the degree of conversion and attained the value up to 368 (kJ/mol) at α = 0.6 and then decreased with the increase in α. The mean "Ea" was equal to 219 (kJ/mol) which was about 50 % less than the "Ea" determined prior to aging (424 kJ/mol). However, it was greater than the mean "Ea" of sample "Y20" and has been well established that the thickness of the film of the material may influence over the TGA results and films of sample "E" were thicker than the film of sample "Y" [35].
Order of reaction
The order of reaction (n) and pre-exponential factor (A) for these samples were determined before and after aging by using equation (8) from the TGA curves obtained at a heating rate of 5, 10, 15, and 20 °C/min. Table 2. "Order of reaction" and "pre-exponential factor" obtained from TGA curves for sample "Y" before and after aging in an oven Table 3. "Order of reaction" and "pre-exponential factor" obtained from TGA curves for sample "E" before and after aging in an oven. It can be seen from table 2 and 3 that the order remains the same with the increase in heating rate from 15 to 20 °C/min. However, a slight change in its value was observed as we move from low to high heating rates. The mean value of the order of reaction for sample "Y" was 1.0 (± 0.04) and after aging, it was slightly decreased to 0.98 (± 0.07). The value of the pre-exponential factor was drastically decreased from 5.5x10 26 /s (±1.7) to 4.8x10 10 /s (±0.22) with aging. The order of reaction of sample "E" was 0.95 (±0.15) and was decreased to 0.87 (±0.10) with aging. The value of the preexponential factor was also decreased with aging from 4x10 18 /s (±0.94) to 4.22x10 07 /s (±0.75).
The results concluded that the order of reaction was almost independent of the heating rate as the degradation of polyethylene occurs through random chain scission [41]. The impact of aging was visible from the results of FT-IR spectroscopy, SEM, and DSC techniques.
Conclusions
Two samples of polyethylene carrying bags; one was made in Pakistan, symbolized as "Y" while the other was from Canada, symbolized as "E" were analyzed with reference to the impact of thermal aging in an oven. The interpretation of the IR spectra revealed that the sample "Y" was "HDPE" while the sample "E" was "LLDPE." IR spectroscopy also indicated the formation of carbonyl peak at 1715 cm -1 only in case of sample "E" during aging and its intensity was increased with the increase in aging time and attained the value of 28.45 % after the period of 20 days. The emergence of carbonyl peak manifests that the oxidative degradation has taken place in which the main contributors were the additives in sample "E" that were pro-oxidant in nature. The higher value of OIT for sample "Y" as compared to sample "E" concluded that the additives in sample "Y" were anti-oxidant in nature. It was confirmed from the SEM images that the oxidative degradation took place at preferential sites in case of sample "Y" while at numerous sites in sample "E". Aging processes encouraged the recrystallization process as the percent crystallinity of both the samples was increased during aging. The apparent activation energy and pre-exponential factor of both the samples were decreased after aging. A slight decrease in the value of the order of reaction was observed and it was concluded that the value of the order of the reaction is independent of heating rate. Further to it, the results obtained by all the techniques were consistent and supported each other. It is interesting to note that all these samples of polyethylene showed a similar trend and hence it can be concluded that the nature and concentration of the additives played a crucial role in controlling the degradation mechanism of the polymeric materials. https://doi.org /10.37358/Rev. Chim.1949 Rev. Chim., 71 (3) | 6,889.4 | 2001-01-01T00:00:00.000 | [
"Materials Science"
] |
Huldah ’ s oracle : The origin of the Chronicler ’ s typical style ?
How to cite this article: Jonker, L.C., 2012, ‘Huldah’s oracle: The origin of the Chronicler’s typical style?’, Verbum et Ecclesia 33(1), Art. #714, 7 pages. http://dx.doi. org/10.4102/ve.v33i1.714 Scholars of Chronicles normally emphasise that the Chronicler used typical words and phrases in those parts that belong to his Sondergut. Amongst these are phrases like ‘to humble yourself’, ‘to seek Yahweh’, and ‘not to forsake Yahweh’. The writer’s typical changes to the burial notices of the royal narratives also belong in this category. Something which is often overlooked, however, is that many of these features already occur in the narrative about Huldah’s oracle (2 Chr 34:19–28) which was taken over with only minor changes from the Deuteronomistic version (2 Ki 22:11–20). My paper investigates whether or not the Huldah oracle could have served as theological paradigm according to which the Chronicler developed his own unique style. If so, the investigation will prompt me to revisit the issue of how continuity and discontinuity, with the older historiographical tradition, characterise the identity negotiation process that we witness in this literature.
Introduction
Scholars of Chronicles normally emphasise that the Chronicler used typical words and phrases in those parts that belong to his Sondergut. 1These Sondergut usages, of typical words and phrases, are then considered to be very important in identifying the Chronicler's unique theology or ideology (McKenzie 2004:47-52;Dirksen 2005:14-20;Klein 2006:44-48;Gabriel 1990;Ruffing 1992).Amongst these are significant phrases like 'to humble oneself', 'to seek Yahweh', and 'not to forsake Yahweh'.These phrases, which occur at important theological junctions, particularly in the royal narratives about Judah's kings in 2 Chronicles, give expression to the ideal religious and cultic attitude which is presented by the writer as the hallmark of Israel's identity (Japhet 2009:194-208). 2 Through the usage of this terminology, in the royal narratives, the Chronicler was constructing the ideal prototype of religiosity towards which he wanted to encourage 'All-Israel' in his own days in the late Persian period Yehud.
Another feature which is emphasised by scholars of Chronicles, in their description of the Chronicler's unique style and ideology, is the writer's typical changes to the burial notices of the royal narratives (Jonker 2012b). 3The Chronicler clearly made slight changes to many of the royal burial notices in order to 'upgrade' or 'downgrade' the profile of the particular king.This tendency also feeds into the Chronicler's overall project of formulating those political and religious prototypes that could encourage the leaders and population of Yehud towards accepting a specific Yahwistic identity.
It is certainly important to study the occurrence of these words and phrases within the context of the Chronicler's Sondergut over-against the Deuteronomistic History.However, something which is often not discussed in scholarship on Chronicles is that many of these features are not all invented by the Chronicler.Some features which are often depicted as 'typically Chronistic' are taken from other existing traditions.What is often neglected in scholarship on Chronicles is that, although the Chronicler employs these words and phrases in his own special way, he does so in continuity with some earlier historical and theological traditions.
1.It is well-accepted in commentaries on Chronicles that the author(s) of Chronicles should be sought amongst the literati in Second Temple Jerusalem.Although it cannot be determined whether the book stems from a single author or a collective.In light of the fact that it is highly unlikely in this time period that the author(s) would have been female, the masculine singular pronoun is used here for practical reasons to refer to the Chronicler.See for example McKenzie (2004:56-58), Dirksen (2005:21-29) and Klein (2006:2-6).
3.For an elaborate discussion of this aspect, including references to other scholars' viewpoints, see the paper I delivered at the SBL International Meeting in London in 2011: Jonker (to be published 2012b).
The present contribution will therefore investigate this conspicuous feature of the narrative about Huldah's oracle. 4The investigation takes place within the broader framework of some recent studies which reminded scholars of the continuities between Chronicles and other preceding traditions, amidst all the discontinuities which often form the focus of our studies (Jonker 2012a;Ben Zvi 2009:86). 5 The present study will test the hypothesis that the Huldah oracle served as a theological paradigm according to which the Chronicler developed his own unique style, particularly in the Sondergut passages.The study will also attempt to show how the utilisation of the Huldah oracle contributed to the Chronicler's project of negotiating identity in the late Persian context.
The study starts on a descriptive level by exploring the terminological links between typical Chronistic expressions (which occur in the Sondergut passages) and terms used in the Huldah oracle. 6Thereafter, some studies will be introduced which discuss the role of Huldah's oracle in both the Deuteronomistic and Chronicler's versions.The study will then proceed to test the hypothesis formulated above, and in the last section I will revisit my understanding of the identity negotiation processes in Chronicles in the light of the insights gained from studying the Huldah oracle.
Some typical chronistic expressions
In another contribution I have given a summary of typical Chronistic expressions which particularly occur in the Chronicler's Sondergut, and which contribute significantly towards the characterisation of Judah's kings as good or bad kings (Jonker 2012b).Three of those terms also occur in the narrative about Huldah's oracle.For the sake of my present argument it is important to first provide a more general discussion on the incidence of these concepts.Ben Zvi (2009).Although Ben Zvi does not deny or ignore the clear differences in style and structure between the Deuteronomistic History and Chronicles, he comes to the following conclusion at the end of his investigation: 'All in all, this study demonstrates that the analysis of continuity and discontinuity between the Deuteronomistic History and Chronicles can profit much from taking into account that which goes beyond the surface differences between the two works.The categorical claims about their differences must not be rejected but set in proportion to their similarities' (2009:86).
6.Many of these terms occur very prominently in the Chronicler's own passages, and they play pivotal roles in the theological statements that are made in those passages.They are therefore regarded as typical of the Chronicler's style.However, some of these terms also occur in the Huldah oracle which was taken over fairly unchanged from the Deuteronomistic history.This fact should be taken into account when dealing with the uniqueness of the Chronicler's own materials, and how they relate to the earlier traditions.The present article wants to bring more sophistication into the assessment of the Chronicler's own material.
is 25% of all occurrences.Of the 41 times the verb is used in Chronicles, 35 of those are in Sondergut passages (which represents more than 85% of the occurrences in the book).In the majority of the Sondergut occurrences 'Yahweh' or 'God (of the father[s])' is the object of the verb.In three cases other deities are the object.This shows that the verb דרש [which is in some cases substituted with, or parallel to, the synonym ]בקש is mainly used in literary contexts where dedication and loyalty to the deity is the theme.Dirksen remarks about the usage of ,דרש after studying the Chronicler's specific use of the term, that '[f]or the Chronicler, 'to seek Yahweh' [ark or altar] is pre-eminently the term for a fundamental attitude of obedience and trust toward Yahweh' (Dirksen 2005:23).
In the majority of cases the expression is used as positive evaluation of a king who sought Yahweh.The statistics above (particularly the high incidence of these verbs in the Chronicler's Sondergut) show that these verbs, together with some others, play an important role in the Chronicler's literary construction.Together they serve to convey the strong conviction that Yahweh should be sought, that He should not be forsaken, and that one should humble oneself before this God.This is the basic religious inclination which is put forward by the Chronicler as a prototype of religious leadership in All-Israel.
Another peculiarity of Chronicles which could prove to be significant for our study of Huldah's oracle is the tendency, of the Chronicler, to alter several of the royal burial notices in his own version compared to those included in Kings.An overview of the burial notices shows that the Chronicler used it as an additional tool to enhance or downplay some of the kings' profiles.Those kings who received a darker treatment by the Chronicler are: • David (whose burial notice in the City of David is completely omitted) • Jehoram (who was buried in the City of David, but not in the tombs of the kings) • Ahaziah (who was demoted from the City of David to no mention of a burial place, although a positive comment is added that he was nevertheless buried on account of his father's righteousness) • Joash (who was buried in the City of David, but not in the tombs of the kings) • Uzziah (who was demoted from the City of David to the kings' burial field because of the king's skin disease) • Ahaz (who was buried in the City of David, but not in the tombs of the kings) • Amon (who was demoted from a tomb in the garden of Uzzah to no burial place mentioned).
Those who receive a more favourable burial place are: • Asa (whose tomb in the City of David was filled with spices etc.) • Hezekiah (promoted from no burial place to the tombs of the sons of David) • Manasseh (promoted from a burial place in the garden to a place in the house) • Josiah (promoted by great mourning after his death).
Surprisingly, the Chronicler also indicates that Jehoiada who reigned with Joash, when the latter was still young, received a burial in the City of David.
Josiah's death notice deserves special attention.Whereas 2 Kings 23:30 mentions that he was buried in his own tomb in Jerusalem, the Chronicler's version indicates in 2 Chronicles 35:24-25 that he was buried in the tombs of his fathers.Additional positive information is provided in the following words: So he died, and was buried in one of the tombs of his fathers.And all Judah and Jerusalem mourned for Josiah.Jeremiah also lamented for Josiah.And to this day all the singing men and the singing women speak of Josiah in their lamentations.They made it a custom in Israel; and indeed they are written in the Laments.( 2
The narrative about Hulda's oracle
As was indicated above, the Chronicler took over the narrative about Huldah's oracle in a fairly unchanged form from his Vorlage in 2 Kings 22:11-20.Only a few minor textual changes were made and the structure of the whole narrative, as well as the different levels of direct speech, was kept intact.However, the position of the Huldah oracle, in the overall construction of Josiah's history, is significantly different in the two versions (Jonker 2003). 7Whereas the oracle leads over in 2 Kings 22 to the king and people concluding a covenant and, as a result, performing various cultic reformation measures, the Chronicler has moved the cultic reformations to another position, at the beginning of his Josiah account.The result is that the Huldah oracle leads over in 2 Chronicles 34 to the covenant of the king and people, and particularly then to the celebration of the Passover.This change, in the macrostructure of the Josiah narrative, is quite significant (Jonker 2003), 8 but does not concern us here where the focus is more on the micro level, specifically on the usage of certain terminology in the Huldah oracle story.
In both versions the Huldah oracle follows after the finding of the Book of the Law (of the Lord) during the restoration of the Temple in Jerusalem.In both versions it is reported that Shaphan the scribe went to inform King Josiah about the find, and that he read to the king from the book.After the king has heard the content of the (book of the) Torah, he tore his clothes as an act of penitence.In his motivation to his officials about why they should go and 'enquire from Yahweh' he indicates that he understands the content of the book as an accusation against his ancestors who did not obey the words of the book.He acknowledges that the wrath of the Lord over them will be great.Within this context the officials are then sent to Hulda, the prophetess.
Many scholars have been puzzled by the content of this text (in its very similar forms in Kings and Chronicles).Römer, for example, expresses his amazement that the king sent a delegation to Hulda even though the king has already understood the content of the book (Römer 2009:181).Grohmann asks why the king did not send his delegation to Jeremiah who was actively working during the time of Josiah (Grohmann 2003:213;Handy 1994:40-53). 9Because the narrative is so problematic, it comes as no surprise that it has generated much scholarly discussion in the distant and recent past, with contributions ranging from scholars who are interested in the literary-historical features of the Deuteronomistic history (Grohmann 2003;Gerstenberger 2006;Römer 2009;Priest 1980;Deurloo 1993;Handy 1994;Glatt-Gilad 1996), 10 to scholars offering feministic readings of the Huldah oracle (Weems 2003;Wacker 1990).
7.For an elaborate discussion of this feature of the Josiah account, as well as for a diagram which explains the position of the Huldah oracle in the macro structure, see Jonker (2003, particularly ch. 3, and2011).
8.In Jonker (2003) I discuss in full this macro-structural feature.9. Handy (1994:40-53) asks an even more pertinent question, namely why did Josiah consult a prophet at all?He comes to the following conclusion: '[T]he character of Huldah in the literary narrative of Josiah's call to reform the cult of Judah conforms to the plot narratives found in Mesopotamian texts also dealing with cult reforms.She plays the part of the double-check on the will of the deity.Cult reforms were serious business and a single directive deriving from any god was simply not enough to cause a good ruler to begin changing the religious realm of the nation' (1994:52).This observation may be particularly true for the usage of the Huldah oracle in the Deuteronomistic version of the text.However, in Chronicles the oracle does not precede the cultic reforms as occurs in 2 Kings 22 (see discussion above), but it rather motivates the celebration of the Passover.Furthermore, in a study of Chronicles one will have to correlate this observation of Landy's with circumstances during the Persian period.
The fullest discussion of the text in a commentary can still be found in Sarah Japhet's work (1993;cf. Knoppers 2003;Dirksen 2005;Klein 2006). 11She emphasises two important points in her discussion: firstly, about the words of Josiah's commission, and secondly, about the Deuteronomistic character of the Hulda prophecy.
With reference to the first aspect she explains that the Qal plural imperative of :דרש '[i]nquiring of the Lord' by means of a prophet was originally the seeking of guidance, enabling the inquirer to take the right action in matters personal or public.... The same verbal root came to denote any 'seeking of the Lord', and in Chronicles it is used with the broadest connotations, expressing any form of religious loyalty and piety.(Japhet 1993(Japhet :1032) ) She continues then to make an important observation about the construction being used here: However, its use with the preposition b e ʿad ('for, on behalf of') is extremely rare and attested only in Jer.21.2.By contrast, b e ʿad is the common preposition attached to verbs of supplication, denoting the role of the prophet: praying on behalf of his people (I Sam.12.19; I Kings 19.4,very often in Jeremiah, etc.).
The idiom dārāš b e ʿad expresses the prophet's double role, and uses the conventional dārāš to denote 'pray for'.The scene described in Jer.21.2 from the days of Zedekiah shares this new coinage.Josiah sends the delegation not merely 'to inquire' but also to 'pray', that the imminent 'wrath of the Lord' be averted.(Japhet 1993(Japhet :1032) ) 12 The second aspect Japhet (1993) emphasises (with reference to Moshe Weinfeld's study) is the fact that: [t]he prophecy of Huldah is a characteristic Deuteronomistic speech, full of Deuteronomistic expressions ..., its main point being an outright rejection of Josiah's plea.His recognition of the book's authority and claim for obedience, and his wholehearted humility before the Lord, are met with the answer that the verdict is final and cannot be changed.....As the prophecy stands, Huldah does not answer Josiah's address: she does not tell him what to do, and does not demand anything from him.(p.1033) Sarah Japhet (1993) then continues to reflect on the unusual reaction contained in this text: The unusual reaction may be compared to the many inquiries by the kings and people of Judah of Jeremiah, who always responded with some pointer to the correct path that should be followed, whether or not the people were willing to take it (cf.Jer.37.7-10; 38.2; 42.1-22, etc.).(p.1033) Although many aspects of Japhet's views can and should be followed up in studies of the Huldah oracle (in both its version in the Hebrew Bible) I would like to focus on one 11.The major critical commentaries of recent years (such as those written by Knoppers (2003), Dirksen (2005) and Klein (2006) are only available on 1 Chronicles.Because the text under discussion occurs in 2 Chronicles one must still rely on the earlier critical commentary of Japhet.
12.Cf.also Rüterswörden (1995:238-239).Japhet also notes two interesting changes in the Chronicler's presentation: 'According to 2 Kings 22.13, Josiah seeks the Lord 'for me, and for the people, and for all Judah' -a somewhat conflated reading in which the last phrase (if original) may be seen as apposition: 'for the people, that is, for all Judah'.The Chronicler rephrases this address to become more comprehensive, and to include the two major components of Israel: 'for me, and for those who are left in Israel and Judah'....Secondly, the more neutral phrasing 'our fathers have not obeyed the words of this book, to do according to all that is written concerning us' is rephrased with a more explicit referent: 'our fathers have not kept the word of the Lord, to do according to all that is written in this book' (Japhet 1993(Japhet :1032)).
element, namely the link with Jeremiah, which has surfaced in the above discussion.Many of the more recent studies (of the Kings version of the Huldah oracle) to which we now turn, also concentrate on this aspect.Rüterswörden (1995) already suggested that: Although the Huldah oracle and Jeremiah 36 do not share the same fate for the respective kings, both of these texts stand parallel in terms of their subordination of prophets under the prophetic book.Gerstenberger therefore 13.In support of her statement Grohmann refers to Deurloo (1993).
With the completion of this overview of recent studies on the Huldah oracle (in Kings) we can now return to our original enquiry, namely, how the narrative about this oracle functions in the book of Chronicles and how it contributes to the Chronicler's own rhetorical fibre.The views expressed in the mentioned studies all refer to the functioning of Huldah's oracle in the Deuteronomistic version in 2 Kings 22.One should now proceed to ask what implications these views hold for our study of Chronicles.
A paradigm for the Chronicler's Sondergut?
We have seen above that the Chronicler takes over the Deuteronomistic version of this narrative with only a few minor changes.The duplication of this narrative with its included oracle is therefore not in itself particularly remarkable.In this way the Chronicler shows, such as in the greatest part of his literary work, respect for the transmitted historiographical tradition.However, his continuity with the traditions of the past does not preclude the writer from using these transmitted traditions for his own purpose.I would like to contend that this is also the case with regard to the Huldah oracle.Four points should be emphasised in this context, which will subsequently be discussed: • introducing the prophet Jeremiah • creating parallels with Jeremiah texts • categorising Josiah as a good king • creating a terminological junction between the Deuteronomistic and Chronicler's versions
Introducing the prophet Jeremiah
The Huldah oracle provided the Chronicler with a useful way of introducing the prophet Jeremiah into his account.
It is remarkable that the prophet Jeremiah does not occur at all in the Deuteronomistic History.The only presence of this prophet outside of the book of Jeremiah is in Chronicles! 14 At the end of the Chronicler's Josiah narrative (the same narrative in which the Huldah oracle was included) it is mentioned that Jeremiah wrote a lament on the king's death after the latter was killed in a battle with king Neco of Egypt (2 Chr 35:25), an aspect which will be taken up in a further point below.However, of great significance in Chronicles is, that the exile is indicated as a fulfillment of a prophecy of Jeremiah (2 Chr 36:21), as well as the proclamation by the Persian emperor Cyrus which started the return from exile (2 Chr 36:22).In these instances the Chronicler therefore deviates from his Deuteronomistic Vorlage which does 14.Why the Deuteronomistic History does not mention the prophet Jeremiah explicitly (the question which has already been raised by Römer, Grohmann and others) remains a mystery which cannot be solved here.One would have expected the Deuteronomists to make reference to him seeing that they also found his prophetic material valuable, and contributed significantly towards the final form of the prophetic book of Jeremiah.
not contain any explicit reference to Jeremiah.In another contribution I have reflected on the question, why did the Chronicler emphasise Jeremiah so much in the climax of his version of Judah's history?
I think the answer lies in the Chronicler's strong tendency to merge different traditions in his version of the past.The Book of Jeremiah provided the Chronicler with a useful way of merging the Priestly and Deuteronomistic traditions on this point.The prominent occurrence of ּמה ָ ׁש ַ ('desolation') in Jeremiah gave the Chronicler the bridge to get to the P-tradition in Leviticus 26 in order to render the exile as Sabbath.But Jeremiah, with its prominent Deuteronomistic content, provided the possibility of appending his other prominent Vorlage, the Deuteronomistic History.(Jonker 2008:292) Without going into the detail of that argument now the point is emphasised here that the inclusion of the Huldah oracle, from his Vorlage, helped the Chronicler prepare the way for the conclusion to his literary work only two chapters later.
Creating parallels with Jeremiah texts
Another aspect which would have been very useful for the Chronicler is the close parallel of the narrative about Huldah's oracle with certain texts from Jeremiah, with Jeremiah 36 being a very prominent case.We have seen above that various scholars have indicated the literary connection between the narrative about Huldah's oracle (where a prophetess confirms the content of the Torah) and the narrative about Jeremiah's scroll (being written by Baruch, burned by Jehoiakim, and rewritten and appended with similar words).It is wellknown that the Chronicler introduced numerous formerly unknown prophetic figures into his narratives, a fact which prompted scholars to revisit the thesis that the phenomenon of prophecy started disappearing shortly after the exile.It seems, however, that a consensus is slowly growing amongst scholars that prophecy probably did not end with the exile or early Second Temple period.It rather seems that prophecy was transformed and that it had another function in society to what it had before.In this regard the view of Gerstenberger is significant (2004:364).After his analysis of the prophetic occurrences in Chronicles, he comes to the conclusion that, according to the Chronicler's view, the Mosaic Torah and prophetic utterances were qualitatively the same.He states the following: Das Prophetische sollte weder formal noch inhaltlich, noch qualitativ von den Tora-Vorschriften Jahwes abgesetzt werden.Oder: Die Tora, die ja Mitteilung des Gotteswillens durch Mose, den Gottesmann, war, unterschied sich höchstens situativ von der aktuellen Prophetenrede.Qualitativ waren Mose-Tora und aktuelle Prophetenrede einander gleich: Sie waren autoritatives Jahwe-Wort.(Gerstenberger 2004:364) If this view of Gerstenberger's is correct, then the narrative about Huldah's oracle would have provided the Chronicler an excellent opportunity to make this point.Many scholars have indicated that Hulda does nothing else in her oracle than confirm the king's understanding, which he has already gained from reading the Book of the Torah.Although the function of this narrative, in the Deuteronomistic version in 2 Kings 22-23, was probably to contribute to the discussion about who was seen as a true prophet and who not, that specific nuance is no longer present in the Chronicler's version as a result of a change to the death and burial notice (which will be discussed below). 15The narrative and oracle therefore no longer serve the purpose of defining who is a true prophet, but rather serve to elevate the position of the Torah over-against prophecy.The Torah (which was probably understood in the Chronicler's days as the whole Torah, and not just the laws contained in the core part of the book Deuteronomy) becomes a revelatory medium of the Word of Yahweh alongside those prophets who confirm, and thereby acknowledge, the authority of the written religious traditions.
Categorising Josiah as a good king
We have already referred to the changes the Chronicler made to Josiah's burial notice.Whereas 2 Kings 23:30 mentions that he was buried in his own tomb in Jerusalem, the Chronicler's version indicates in 2 Chronicles 35:24-25 that he was buried in the tombs of his fathers.Additional positive information is provided in the following words: So he died, and was buried in one of the tombs of his fathers.And all Judah and Jerusalem mourned for Josiah.Jeremiah also lamented for Josiah.And to this day all the singing men and the singing women speak of Josiah in their lamentations.They made it a custom in Israel; and indeed they are written in the Laments.
(2 Chr 35:24-25 NKJV) With these changes the Chronicler irrevocably categorises Josiah as a good king, but he also brings the end of the narrative in line with Huldah's oracle concerning the king's plight.The 'upgrading' of Josiah's burial place, as well as the laments sung by Jeremiah and the people after the king's death, indicate that he was indeed gathered with his fathers 'in peace' (34:28).There is no longer a discrepancy between Huldah's oracle and the negative shadow cast by the Chronicler's version of Josiah's death; these additional elements of the death notice (compared to the Kings version) rather cooperate to emphasise the positive role of King Josiah (which particularly becomes clear in the very elaborate Passover account of Chronicles).
Creating a terminological junction between the Deuteronomistic and Chronicler's versions
Lastly, we come to the issue that has been raised at the start of this study, namely the peculiarity that some of the 'typically Chronistic' terms also occur in the narrative about Huldah's oracle in Kings.Can these terms really be described as 'typically Chronistic' if they already occurred in the Chronicler's Vorlage?We have indicated above that the terms 'to seek or enquire of Yahweh', 'to humble oneself' and 'to leave or forsake' occur overwhelmingly in Sondergut passages in Chronicles (35 out of 41 occurrences), and can therefore be seen as programmatic in the Chronicler's construction.All of these do occur in Samuel-Kings (which served as Vorlage for the Chronicler), but with a rather low incidence.However, one of the junctions between these traditions is the Huldah oracle.The Josiah narrative is very important for both the Deuteronomist and the Chronicler.It has been indicated in many studies on the Deuteronomistic History that in it Josiah is idealised (Jonker 2002). 16Josiah is also very central in the Chronicler's construction, although he rather becomes instrumentalised than idealised (Jonker 2003:86). 17 The Chronicler uses Josiah as an instrument to put emphasis on the celebration of the Passover, a cultic event which has become very important in the Chronicler's time.In both of these accounts Huldah's oracle functions pivotally.But it seems that the Chronicler then used the language of this oracle and spread it over his own work to such an extent that it becomes a stylistic trait of the new work.The Chronicler picked up on the Deuteronomistic terminology, but then developed it to its full consequences in his own construction of the past.In this sense, one could contend that the narrative about Huldah's oracle became programmatic for the Chronicler's typical style.
At this point we may proceed to reflect on the relationship between this creative way of using the Deuteronomistic Vorlage and the socio-historical circumstances of the Chronicler.
Constructing a prototype for All-Israel's identity negotiation
Elsewhere I have explained that the book of Chronicles should be viewed as a participant in the process of identity negotiation in the late Persian period Yehud (Jonker 2009).The text does not reflect static identities, but rather actively contributes towards the process of negotiating a new understanding of 'All-Israel' in the late Persian period.How does the Chronicler's usage of the Huldah oracle contribute towards this process?
Our analysis of the functioning of the narrative about Huldah's oracle in Chronicles provides insight into how this process of identity negotiation took place: • Firstly, it took place in continuity with the transmitted historiographical tradition.The Chronicler indicated, through the inclusion of this part of the Deuteronomistic account into his own version, that the new literary construction builds upon those perspectives and convictions that have been given expression in the earlier reconstruction of Israel's past.The community for whom the new work was constructed in the Persian period was therefore projected as a continuation of the pre-exilic community.The exile did not sever the link with the past.
16.For a summary of studies that give this interpretation of the Deuteronomistic Josiah account, see Jonker (2002).17.Cf.Jonker (2003:86): 'King Josiah is still being thought of as a good king -one of the best they had.However, their rewriting of this king's history within the new context assigned a new function to this king.He is no longer viewed, as was the case in their older tradition, as the one epitomising and legitimising the Deuteronomistic theological tradition.Rather, he now serves the role of accentuating the cultic tradition (the Passover, in particular).It is not kingship that is at stake in the new situation, but cult.Who they were no longer primarily depended on having a Davidic king, but on the presentation and observance of their cultic traditions.' • Secondly, by applying the terminology of the transmitted material in a unique way in his own construction, the Chronicler emphasised what religious inclination was needed in his own day.Although the older tradition already indicated the importance of 'seeking Yahweh' and 'humbling oneself' before him and not 'forsaking' him, the Chronicler amplified these values by applying them to a much broader scope in his construction.Whereas these values were used in the Deuteronomistic version to provide a measuring rod against which the kings of the past could be judged, the Chronicler applied these values to his own community.If the general scholarly view is correct, that Chronicles was addressed to the cultic community in Jerusalem in the late Persian period Yehud, then the Chronicler utilised the transmitted material to formulate a prototype of good religious conduct for his own day.The religious community in Yehud is encouraged to seek Yahweh and rely on him, and not to forsake him in their existence as part of the Persian empire.• Thirdly (and most importantly), the specific usage of the transmitted material in the Chronicler's literary construction also shows that the Chronicler wanted to emphasise the role of Torah in the new community.
Whereas the finding of the Book of the Law functions in the Deuteronomistic version as legitimisation for the theological perspective, which is taken on the pre-exilic past, and the Torah now becomes Yahweh's revelatory medium for a new phase in Israel's existence.The postexilic age necessitated a theological re-orientation for the community in Jerusalem.We know from other studies (such as Pentateuch research) that this was the age in which the Torah (understood as the Pentateuch, with Deuteronomy loosened from its original position as introduction to the former prophets and re-anchored as the closing of the Pentateuch) became the constitution for the reconstruction of the religious community in Yehud.
In this respect Huldah's oracle was a useful text for the Chronicler with which he could amplify the role of Torah, and in expressing it so, he could contribute to formulating the prototype towards which All-Israel was encouraged.
Conclusion
I certainly did not solve all the burning questions with regard to Huldah's oracle in this contribution.And I did not even touch on the intriguing fact that it is a female prophet who plays such an important role in both the Deuteronomistic and Chronicler's versions.However, this study confirms that the Chronicler's usage of his Vorlage certainly was no haphazard cut-and-paste exercise.It was, rather, a careful and deliberate process of establishing continuity with the past, but simultaneously it also encourages the present community to live with religious integrity in a new age.
Aknowledgements Competing interests
The author declares that he has not financial or personal relationship(s) which may have inappropriately influenced him in writing this paper. | 7,655.2 | 2012-10-08T00:00:00.000 | [
"Linguistics"
] |
Privacy-preserving Image Classification with Deep Learning and Double Random Phase Encoding
With the emergence of cloud computing, large amounts of private data are stored and processed in the cloud. On the other hand, data owners (users) may not want to reveal data information to cloud providers to protect their privacy. Therefore, users may upload encrypted data to the cloud or third-party platforms, such as Google Cloud, Amazon Web Service, and Microsoft Azure. Conventionally, data must be decrypted before being analyzed in the cloud, which raises privacy concerns. Moreover, decryption of big data such as images requires enormous computation resources, which is unsuitable for energy-constrained devices, particularly Internet of Things (IoT) devices. Data privacy in most popular applications, such as image query or classification, can be preserved if encrypted images can be directly classified on the cloud or IoT devices without decryption. This paper proposes a high-speed double random phase encoding (DRPE) technique of encrypting images into white-noise images. DRPE-encrypted images are then uploaded and stored in the cloud. Images that are encrypted without being decrypted are classified using deep convolutional neural networks in the cloud. The simulation results indicated the feasibility and good performance of the proposed approach. The proposed privacy-preserving image classification method can be useful in data-sensitive fields, such as medicine and transportation.
I. INTRODUCTION
With the development of the Internet of Things (IoT), several wearable devices, home appliances, agriculture and transportation tools, and other devices are connected to the Internet. These devices generate large amounts of data every day, which are primarily stored in the cloud [1][2][3][4][5]. These data are also mostly processed in the cloud using cloud computing services from third-party platforms, such as Google, Amazon, and Microsoft because they can provide sufficient storage space and processing power. Although IoT and cloud computing techniques have made it easier to work and live, studies have shown that most consumers lack confidence in the data security of IoT devices and cloud computing [6][7][8][9]. An individual may not want unauthorized entities to have access to their tour photos, including family information. A patient does not want their medical records and diagnostic reports to be shared with others. A driver may not want to leak their private information, such as license plate, location, and driving habits, to outsiders. Security is a significant concern for data generated by IoT devices or data stored in the cloud because there is a high probability that some personal information would be included in this digital information [6][7][8][9][10]. Therefore, many privacy-preservation approaches, such as data encryption, have been proposed to make data transmission, storage, and extraction safe [11][12][13][14]. On the other hand, data encryption in the cloud can decrease the data processing efficiency for some applications, such as image classification and retrieval, because decrypting and then classifying or querying a large number of encrypted images may necessitate considerable computation resources. In addition, decryption processing within the cloud will reveal original images to unauthorized parties, such as cloud provider companies. Consequently, data processing based on encrypted information is promising and has become critical [15][16][17][18][19][20][21][22][23][24]. Image data are typically large, and image encryption with methods, such as data encryption standard (DES) or advanced encryption standard (AES), may be timeconsuming and unsuitable for low-computing devices, with limited power and computation capability, in IoT systems. By contrast, the double random phase encoding (DRPE) algorithm, which was proposed in [25], is an optical encryption algorithm with inherent parallel computing ability that be implemented efficiently. The DRPE can encrypt an input image into a white noise image that will not reveal any information about the original data. The original DRPE technique is achieved in the Fourier domain, whereas several other variations are implemented in different domains, such as the Fresnel domain, fractional Fourier domain, and gyrator transform domain [26][27][28]. The DRPE algorithm has been studied extensively and used widely in image encryption, authentication, and watermarking [29][30][31][32]. A previous study [33] claimed that the DRPE algorithm could be a good encryption algorithm in an IoT system for energy-constrained devices, while a secure key exchange scheme is proposed for image cryptography. Therefore, the DRPE approach would be a suitable scheme for image encryption for large-scale image datasets. The efficiency of image encryption is crucial, particularly for large data transmitted between devices and cloud servers. With advances in computing power, such as graphic processing units, big datasets, such as ImageNet [34], and advanced training schemes, deep learning (DL) methods have become a hot research field in recent years [35]. Because AlexNet [36] achieved good performance in the ImageNet Large-Scale Recognition Challenge, several DL architecture variants have been proposed. For example, ResNet [37], DenseNet [38], and Inception Net [39], are good classification convolutional neural networks (CNNs). U-Net [40], DeepLab [41], and Gated-SCNN [42] are DL models proposed for semantic segmentation. The Faster R-CNN [43], single-shot detector [44], and You Only Look Once [45] are robust neural networks (NNs) for object detection. BERT [46] and GPT [47] are robust natural language processing (NPL) models. Traditional ML approaches typically require users to design and discover useful features themselves. Thus, domain knowledge may be critical for feature extraction. By contrast, DL methods can automatically extract the relevant features from data by optimizing a target function, e.g., minimizing a loss function [48]. Moreover, DL models are more robust to images with illumination variations, color differences, and target location offset. DL models are used widely in image analysis, audio processing, and NLP [48,49]. They are also used extensively in the classification of encrypted images because deep NNs have a great ability to automatically extract good features from data, even encrypted data, whereas manual extraction of the good features from encrypted data is difficult because encryption may cause the encrypted data to have few discriminative patterns to identify a specific category. This paper proposed an image classification method based on encrypted image data. The proposed method can be crucial for cloud computing and IoT systems. First, the images are encrypted using the DRPE method. The encrypted images are then transmitted to the cloud owned by a third-party company. DRPE-encrypted images are white-noise images that will not reveal the clients' sensitive information in the cloud. Without decrypting the images in the cloud to train an ML algorithm, DL models have been developed and trained directly using the encrypted data, increasing computation efficiency because the decryption of many images in the cloud is highly time and resource consuming. This study developed two types of DL models for encrypted data classification. One is a CNN, with a similar architecture to the conventional CNN, which is used mainly for classification. The other is an encoder-decoder structure with a CNN as an additional branch. The second model can achieve image classification and decryption simultaneously. The remaining part of this paper is organized as follows. Section II reports the related works. Section III describes DRPE. Section IV presents the procedure of the encrypted image classification. Section V presents experimental results, and the conclusions are drawn in Section VI.
II. RELATED WORKS
Data processing based on encrypted information provides a good way for privacy preservation. Several studies have evaluated information retrieval and classification using encrypted data stored in the cloud to protect sensitive information. In [17], images were captured using roadside units, and the vehicles in those images were segmented through edge detection. Furthermore, segmented images are encrypted using a suitable algorithm using a selected mode of operation, and the encrypted data were classified based on the convolutional neural networks. As a result, the encrypted image provides a way of protecting the drivers' sensitive information in intelligent transportation systems. In [22], a deep learning model was designed to automatically extract the useful features from encrypted traffic data for traffic identification and classification to preserve the users' privacy. In [18], they proposed a privacy-preserving algorithm for classifying images, which are encrypted using a pixel-based image encryption method. This algorithm could achieve image augmentation in the encryption domain during the algorithm-training phase. In [15], a block encryption algorithm, such as DES and AES [23], was used for image encryption, and encrypted images were classified using a trained multilayer extreme learning machine to achieve data security. In [24], the researchers proposed deep CNNs with a novel activation function to classify encrypted image data generated using a homomorphic encryption method. Their results highlighted the robustness of the approach proposed in the study over encrypted data. In [16], image data are also encrypted using a homomorphic encryption algorithm, and encrypted data were classified using a non-linear support vector machine. The original data were encrypted, and machine learning (ML) algorithms were trained and inferred based on encrypted images, providing a good way to protect sensitive information among these methods. On the other hand, encrypted data based on optical encryption approaches with inherent parallel properties have not been researched. This paper reports the feasibility of encrypted image classification based on DRPE-encrypted data.
III. DOUBLE RANDOM PHASE ENCODING
DRPE, as a popular optical security approach, has been researched extensively because of the easy configuration and parallel processing properties [26][27][28]. DRPE has also been widely used in image authentication, information hiding, and watermarking [29][30][31][32][33]. Figure 1 presents a graphic diagram of a DRPE method in the Fourier domain. In the DRPE scheme, the input image I(x,y) is encoded into a stationary white-noise image E(,) using two random phase masks, m1 = exp(j2πm(x,y)) and m2 = exp(j2πn(u,v)), where each element within m(x,y) and n(u,v) is distributed uniformly between 0 and 1. exp(·) represents the natural exponential function, and j represents the imaginary unit. For DRPE implementation in the Fourier domain, the first random phase mask m1 is located in the input image plane. The second random phase mask, m2, is positioned in the Fourier domain. For DRPE implemented in other domains, such as the Fresnel, Gyrator, and Fractional Fourier domains, the random phase mask m1 remains in the input image plane, whereas the second random phase mask, m2, is placed at the corresponding domain [30]. The computational implementation of DRPE in the Fourier domain can also be written mathematically as where FT and FT -1 represent the 2D Fourier and inverse Fourier transforms, respectively. By contrast, the implementation of DRPE decryption in the Fourier domain is the inverse operation of DRPE in encryption processing. DRPE decryption can be expressed mathematically as follows: where D(x,y) is the decrypted image from DRPE, and |·| is the modulus operation, conj is the complex conjugate.
Equation (2) does not include the first random phase mask, m1, because the intensity of the decrypted image D(x,y) will not be affected because of the modulus operation [25].
IV. CLASSIFICATION OF DOUBLE RANDOM PHASE ENCRYPTED IMAGES
The first step in the encrypted image classification process is to encrypt input images before transmitting them to the cloud. Consequently, in the cloud, there are only encrypted images, and no image information is revealed. In this study, all images were encrypted with DRPE algorithms. The elements of the DRPE-encrypted images are complex values, and both real and imaginary parts are used as inputs in training a DL model. Figure 2 presents the procedure for DRPE-encrypted image classification. As illustrated in Figure 2, the trained model was tested for encrypted image classification after training the DL model. In the testing phase, the same random phase mask keys as those used in the training step were adopted in the DRPE algorithm, and the encrypted images with both real and imaginary parts are the inputs of the trained model for prediction. This paper proposed two DL models for DRPE-encrypted image classification. The first is a CNN that is referred to as the CNN algorithm in the following description. The second one is a fully connected CNN with an auxiliary branch for encrypted image classification. The second DL structure is expressed as FCNAux in the following context. Figure 3 shows the CNN method used here. In this CNN approach, the input image size was 32 × 32 × 2, and there were 10 categories in the output layer. In Figure 3, the Conv 3 × 3 is a convolutional operation with a kernel size of 3 × 3. BN represents batch normalization. ReLu is the rectified linear unit activation function. Max pooling 2 × 2 means max-pooling operation with a stride value of 2 in both the x and y directions in the feature map. The max-pooling layer may extract the largest element from the 2 × 2 regions in the feature map, and the resolution was reduced by a factor of 2 × 2 at each max-pooling 2 × 2 block. FC denotes the fully connected operation, whereas Softmax is a normalized exponential function that can normalize the output of an NN to a probability distribution over the predicted output classes [49,50]. The number above each block in Figure 3 is the size of the feature map of that layer. Typically, the convolutional operation in NNs can preserve the spatial relationship among pixels and learn the image features within the receptive field [49,50]. An activation function, such as ReLu, can introduce non-linearity into the models and enhance the ability to extract complex features [49,50]. Max pooling can reduce feature map dimensionality and is beneficial to the models' computation. This also makes the DL models more robust to target translation within the images [49,50]. BN can mitigate the internal covariate shift problem and reduce the gradient vanishing problem during algorithm training [51]. Figure 4 presents the FCNAux architecture. Compared to the DL structure in Figure 3, there is an additional decoder part in the FCNAux structure in Figure 4. The decoder operation in FCNAux can be considered a decryption operation and can help modulate the NNs to learn features that are helpful for both classification and decryption. In Figure 4, the Up-Conv 2 × 2 means an up-pooling operation, which can increase the feature map resolution by a factor of 2 × 2. The Sigmoid function is an activation function that can scale the output values to a range of 0-1. For both CNN and FCNAux models, the cross-entropy (CE) loss is used for encrypted image classification and is expressed as the following equation [ (3) where N is the size of the dataset, and M is the number of classes; yoc is the binary indicator (0 or 1), which is 1 when the class label c is the correct classification for the observation sample o; otherwise, yoc is 0. poc is the predicted probability for observation o with class c. The mean absolute error (MAE) is used as the regression loss function for the decoder branch in the FCNAux model, as shown in Figure 4, and it is mathematically given as follows: 11 (4) where M and N are the sizes of an image along the x and y directions, respectively. G(i,j) is the ground truth pixel value at the image location (i,j), and P(i,j) is the predicted image pixel value at the image location (i,j). The |·| denotes the absolute operation.
V. EXPERIMENTAL RESULTS
The Fashion-MNIST dataset [52] is used to demonstrate the proposed approach. The dataset contained 70,000 images, of which 50,000 are used for training; 10,000, for validation; and 10,000, for testing. The original size of the images in the Fashion-MNIST dataset was 28 × 28, and they were resized to 32 × 32, and then encrypted using the DRPE algorithm. There are 10 categories within this dataset: T-shirt, Trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. The models were developed on a server with 48 central processing units (CPUs) (Inter(R) Xeon(R) CPU E5-2650, Ubuntu 16.04 operating system, and one P100 NVIDIA GPU). The DL algorithms were implemented with PyTorch, a Python DL framework [53]. Both training and testing were performed through GPU parallel computing. Figure 5 presents a DRPE-encrypted image. There are real and imaginary parts of the DRPE-encrypted image in the figure because the element of the encrypted image is a complex number. The real and imaginary parts of the encrypted image were concatenated to form a 32 × 32 × 2 tensor, which serves as the input to the DL models. To train the CNN algorithm, the batch size was set to 64; the learning rate was given as 0.01, which decreased by a factor of 10 every 40 epochs; the number of epochs was set to 120. L2 regularization [50] was used in this study, and its value was given as 0.0005. The momentum gradient descent method was used as the optimization approach, which was set to 0.9. An image augmentation technique was used in algorithm training, including image flipping, random image rotation, random noise addition, and random pixel exclusion. The overfitting problem was mitigated by performing the early-stop technique on the training and validation datasets. Figure 6 presents the epoch number versus loss value curve on the training and validation datasets. As shown in Figure 6, the loss difference between the training and validation datasets begins to enlarge at the epoch number of 82. Therefore, the trained model at epoch number 82 was used for testing. After training the CNN algorithm, the trained model was evaluated using 10,000 test images. The accuracy of the CNN algorithm for DRPE-encrypted image classification was 0.8972. Table I lists the corresponding confusion matrix. T-shirt 843 1 18 17 3 0 112 0 6 0 Trouser 3 983 0 For comparison, the same CNN structure was used to classify the original images without DRPE encryption. That is, the CNN algorithm was trained and tested using the original images without applying any encryption algorithm. In this case, the input image size for the CNN algorithm was 32 × 32 × 1. An accuracy of 0.9098 was obtained on the 10,000 test images. Table I lists the corresponding prediction confusion matrix. Prediction results in Tables I and II have indicated that the trained CNN algorithm obtained similar classification accuracy for both DRPE-encrypted images and the original images (i.e., without DRPE encryption). Some attacks were used to assess the robustness of the CNN algorithms. Data can easily suffer from noise contamination or pixel value loss during data transmission on the Internet. Therefore, the ability of the proposed CNN architecture for encrypted image classification was tested in terms of partial pixel loss and noise attack. Partial pixel loss attack was simulated by randomly excluding 20%, 40%, and 60% of pixels from an encrypted image. Figure 7 presents the encrypted image with some pixels being excluded randomly.
Only the real part image in the encrypted image is presented in Figure 7, whereas the same operation was performed on the imaginary part. The excluded pixel locations were given a value of zero in the partial pixel loss attack. Table III lists the encrypted image classification accuracy of the proposed CNN model for images with some pixels excluded randomly. The simulation for the noise attack is based on additive noise. The adding noise operation is expressed in the following equation [54]: (1 ), where E and Enoise are the DRPE-encrypted image and DRPE-encrypted image with additive noise, respectively; w is the weight of noise; m is the matrix with the same size as the encrypted image E, with the element value chosen at random between 0 and 1. The additive noise attack test was performed with three different weight values: 0.25, 0.5, and 1. Figure 8 illustrates the encrypted image with the addition of the noise attack. Similarly, Figure 8 shows only the real part of the DRPE-encrypted image. The CNN classification results for DRPE-encrypted images under the noise attack are presented in Table III. As shown in Table III, the proposed CNN algorithm can achieve good DRPE-encrypted image classification performance under partial pixel loss and noise attacks. Figure 10 presents the regressed images generated by FCNAux. As illustrated in Figure 10, the FCNAux model can generate images similar to the original images, even though some details in the regressed images are missing. The same attacks, partial pixel exclusion and noise attack, are applied to the FCNAux to test its robustness. For partial pixel loss attack, the exclusion pixel percentages were set to 0.2, 0.4, and 0.6. The weight value for the noise attack was 0.25, 0.5, and 1.0. Figure 11 shows the predicted images from the decoder branch in the FCNAux algorithm under the partial pixel loss attack, whereas those under the noise attack are given in Figure 12. The corresponding prediction accuracy values are included in Table V. Table V indicates that the FCNAux model is robust to partial pixel loss and noise attacks. The proposed CNN and FCNAux algorithms were also trained and tested on the MNIST dataset [55], which was encrypted using the DRPE technique. The accuracy for the proposed CNN and FCNAux methods based on 10,000 MNIST test images was 0.9237 and 0.9278, respectively. The accuracy of the CNN algorithm using the original images (i.e., without encryption) in the MNIST dataset is calculated to be 0.9303. These results using the MNIST dataset also confirmed that the proposed CNN and FCNAux models could perform well for DRPE-encrypted image classification. The proposed methods were also compared with those in [15,16,18], where images are encrypted with AES [15], homomorphic encryption [16], and a pixel-based encryption approach [18]. Here, 16 pixels which have a total of 128 bits are viewed as a block to be used as the input of AES and homomorphic encryption. Table VI lists the encrypted image classification results from these approaches based on 10000 testing Fashion-MNIST datasets. The method in [18] showed similar classification accuracy in the present and much better than those in [15] and [16]. This study did not focus on the classification accuracy and mainly demonstrated that encrypted data from optical encryption methods, such as DRPE, can be classified using a deep learning approach so that privacy can be preserved in the cloud. Compared to other encryption methods, such as AES in [15], homomorphic encryption in [16], and pixel-based encryption in [18], DRPE as an optical encryption method has a potential advantage in parallel computing. The computational complexity of the optical encryption algorithm was O(1) in Big-O notation [33]. They were O(n 2 ) for AES in [15], homomorphic encryption in [16], and pixel-based encryption in [18] for encrypting an n×n image when only "+","-","×","÷", and modulo operations were considered.
VI. CONCLUSIONS
This paper proposed a privacy-preserving image classification method based on DRPE and DL. Unstructured data, such as images, were first encrypted using the DRPE method. The encrypted images have a white noise appearance, which reveals no information to third parties. The parallel computing ability of the DRPE approach improved the speed of big data encryption, which is important for edge devices, such as smartphones, sensors, and other portable devices. Image classification is a popular ML task that has always been performed on original images. On the other hand, this task raises data privacy/security concerns for users in cloud computing and IoT systems. In this study, the classification approach was established, whereby DL models were applied to DRPE-encrypted and unencrypted images. No sensitive information was leaked in training because the models were trained using encrypted images. In addition, the performance of the models in two cases (i.e., DRPE-encrypted and unencrypted images) were compared. The results suggested that the DL models achieved good performance on the DRPE-encrypted images. The proposed method can be useful for storing and processing data in the cloud because it ensures data privacy. This method is also useful in IoT systems. | 5,460 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Digital business models and company growth opportunities in the energy market
. We are currently witnessing significant changes in the global energy market. The energy industry is entering the stage of the 4th energy transition, which is characterized by an increasingly large-scale increase in the use of renewable energy sources and a decrease in the share of fossil fuels. The energy sector is also strongly influenced by the trend of digital transformation and the use of digital technologies. The aim of the work is to study the possibility and conditions of using digital business models by companies in the energy sector to improve competitiveness and market growth. To achieve the goal of the study, the following tasks were set: to analyze the main trends and the state of digitalization of the energy industry; analyze digital business models and assess the possibilities of their use by energy companies; to formulate approaches to transforming the activities of energy companies in the transition to a digital business model. The authors hypothesized that the use of digital business models will allow energy companies to remain competitive and gain access to new markets by introducing new technologies. To conduct the study, the methods of microeconomic and industry analysis, systemic and comparative analysis, and analysis of the organizational behavior of the company were used. The results of the analysis showed that the global energy market is characterized by growth in dynamics and volatility. In order to adapt to changing conditions and maintain competitiveness, energy companies need to take advantage of digitalization and shape a digital strategy. One of the basic elements of an organization's digital strategy is a successful digital business model. The article discusses the types and features of digital business models, and also formulates approaches to transforming the activities of energy companies in the transition to a digital business model.
Introduction
Currently, digital technologies (social, mobile, analytical, cloud, Internet of things, etc.) are penetrating all spheres of society [1]. Artificial intelligence and robots are changing the look of industries [2,3] and the way organizations operate. The digitalization processes are *Corresponding author<EMAIL_ADDRESS>actively developing in the energy industry. Digitization enables companies to expand technical capabilities to improve customer engagement and satisfaction, streamline business processes, and increase employee engagement [4][5][6].
The results of the activities of companies in different industries show that for business differentiation, it is necessary to provide customers not only with better quality or lower prices, but to a greater extent offer a better business model. Thus, in order to succeed in the competitive struggle, companies radically change their business models [7]. The purpose of the article is to analyze digital business models that companies use to increase competitiveness in the market and identify approaches to transforming the activities of energy companies in the transition to a digital business model.
Digital technologies and the energy market
The energy market is increasingly influenced by factors such as the decarbonization of the world economy, such as the response to climate change and global warming; distributed generation; implementation of projects aimed at energy saving and energy efficiency growth; an increase in the number of electric vehicles; use of energy storage systems (batteries and fuel cells) [8][9][10][11][12].
Thus, according to Navigation Research, Bloomberg Energy Finance [4], more than 200 MW of new generating capacities were commissioned in the world in 2017. If we consider the structure, then almost equal amounts of centralized generation and distributed generation capacities were introduced. Distributed generation is expected to reach more than 250 MW by 2025, while centralized generation will be even lower than the 2017 level (less than 100 MW).
The energy sector is also strongly influenced by the trend associated with digital transformation and the use of digital technologies (cloud, big data, mobile, Internet of things, artificial intelligence, etc.) in the economy. Energy companies are already actively using digital technologies to automate all stages of the value chain (Table 1). The total volume of the global market for digital technologies in the energy sector in 2017 amounted to $ 52 billion. The market is expected to grow to $ 64 billion by 2025. Areas of automation include maintenance of power plants, installation and operation of "smart" meters, automation of distribution networks, operation of home energy management systems, ensuring the stability of the operation of energy facilities, etc. [4].
The analysis shows that digital technologies are already successfully used in the energy sector to automate tasks for managing the current operating activities of enterprises: asset management, maintenance and repair, procurement, billing and other basic business processes [6].
In addition, digitalization contributes to the development of new energy technologies, such as distributed integration, and makes it possible to coordinate the work of complex energy systems.
According to Kearney [13], there is a trend towards decentralizing the value chain of energy enterprises and expanding the list of services that they can offer to consumers. Personalization of customer service and provision of services together with partners from other industries is considered as a strategic goal.
To successfully adapt to the ongoing changes, as well as to use the opportunities provided by digitalization, it is advisable for companies to form a corporate strategy that allows them to offer new customer value that meets or anticipates customer expectations. In this case, it becomes necessary to adjust the current business model and transform it into a digital business model.
Features of digital business models
There is a plethora of definitions of the concept of "business model". We will use the definition formulated by Osterwalder and Pigne [14] that a business model serves to describe the basic principles of the creation, development and successful operation of an organization.
The results of the analysis show that business models generated by digital technologies can be classified according to different criteria. Consider some of these criteria and their associated business models. There is an approach that involves two possible types of business models: a customer acquisition model or a digital decision model [7].
The business model of attraction involves a comprehensive, individualized approach that ensures customer loyalty. This business model focuses on building trust and customer loyalty in order to achieve high brand loyalty. The implementation of this model involves multichannel interaction with customers, a deep understanding of their needs and rapid response to changes in these needs (Kaiser Permanente). The business model of digital solutions is since customers are provided with products or services with the addition of information, thereby creating a new value.
This business model means the integrated nature of the products and services that are offered to customers in order to meet their needs. Moreover, the strategy is aimed at gradually moving away from the sale of individual products to more complex products with additional value (Shindler Group, Apple). The successful implementation of such a business model requires a digital operating platform that combines all corporate functions in a single unit. The goal is to ensure access for all corporate units to a single database of customers in order to increase their satisfaction and loyalty. In addition, the presence of a corporate culture loyal to digital transformation is important.
Following a dynamic approach, when any kind of business model can be the starting position for the company, and the other kind can be a guideline, we can consider the following typologies of business models. For example, Weill and Woerner [15] offer an approach based on changing the business model in two directions: i) movement in the direction from control in the value chain to participation in the ecosystem; and ii) movement from low customer awareness to full knowledge (purchase history, goals, etc.). As a result of the intersection of these areas, four digital business models can be distinguished: According to the approach proposed by Linz et al. [16], two megatrends are currently influencing the activities of organizations -digitalization and service orientation -a servicesoriented. Digitalization enhances the technical capabilities of firms to develop, produce, deliver their proposals and manage large-scale interactions with customers, the servicesoriented approach represents a fundamental change in the field of value proposition for customers towards the joint creation of value and individualization.
The influence of the aforementioned mega-trends guides the decision-making process on choosing a business model in two dimensions: transaction coverage and proposal customization.
Moving along measurement data shows the capabilities that a company can use to develop or modify a business model. Following along the axis -transaction coverage shows the movement of the company from single and independent transactions (single products) to complex and interconnected transactions (related products) with repeated transactions.
Proposal customization is the movement from standardized, batch and automated offers to individualized, created jointly by the company and the client. As noted by Loucks et al. [17], there is an approach that involves the classification of business models depending on the type of value for the client that they provide: i) the value of the value; ii) the value of the interaction; iii) the value of the platform. Let us consider the features of these business models.
The value of the value: The most intense competition between companies occurs within the framework of this strategy. Companies greatly reduce the cost of the product for customers. Great opportunities for lowering prices are provided by product virtualization. In the framework of this approach, there are such business models as: -Price transparency; -Free / very low price; -Reverse auction; -Buyers association; -Consumption-based pricing.
The value of interaction.
This approach is based on the fact that customers are offered a more convenient way to obtain a product or service. As digitalization intensifies, the value of interaction increases. This is due to the fact that now in the value chain you can select only those elements that attract the buyer and transfer them to digital devices.
The most successful digital companies are disaggregating the products of companies from different industries. They provide an opportunity for customers to select (and pay) only those products and services that are valuable to them, excluding other elements included in the chain and leading to an increase in prices.
Digitalization enables new companies to personalize their products and services and provide them at lower prices than traditional companies.
This approach complicates the activities of established companies that are accustomed to competing either through differentiation or brand. Examples of business models based on the value of interaction are: -Empowerment of customers; -Individualization; -Receiving results in real time; -Reduction of disagreements; -Digital technology. The value of the platform. The value of the platform is a new form of competition, which arose precisely in the context of digital transformation. The business platform is non-linear and is determined by network effects.
Network effects are determined by Metcalfe's law, according to which the value of the network grows in proportion to the square of the number of users. These effects arise due to various forms of interaction between network participants: -Dependence on each other; -Equal relations with each other; -Interconnection in the process of game models; -The presence of feedback between the participants Due to the existence of such forms, the overall effect of network activity exceeds the results of interaction within the framework of individual areas of its functioning. Thus, the presence of the platform allows the client to get additional value associated with connecting him to the network. For companies, such a business model is very attractive.
In the event that the platform is effective, then it has a very high degree of competitiveness. The result of such an activity is the approach when "the winner gets everything." The owners of the platform earn the bulk of the profit in individual markets. Examples of such platforms are Apple, iTunes, Facebook, and other business models. Within this framework, the following business models are distinguished: -Crowdsourcing; -Ecosystem; -Communities; -Digital market; -Data orchestrator. The analysis shows that at present, companies use a variety of business models to maintain and enhance their competitiveness. For the energy companies, the problem arises of choosing the most relevant digital business model and developing a strategy for the transition from the current business model to a digital one.
Approaches to transformation the activities of energy companies in the transition to a digital business model
The choice of an acceptable digital model should be determined by the real capabilities of the company and how it wants to compete. In what directions do the leaders and organizations of the energy industry need to decide in order to move to the transformation of the business model in the direction of its digitalization? The analysis shows that, first of all, it is necessary to assess organizational competencies in the following areas, the characteristics of which are presented in Table 3 that follows. Table 3. Directions for assessing the organizational competencies of energy companies for the transition to a digital business model Organizational competencies
Directions for assessing
The impact of digitalization on company products /services -A service or product is provided using the Internet; Whether there is an influence of companies in other industries on the product / service; -Is there a possibility of replacing the product / service with another digital product; Possible financial losses on the company's products / services from digitalization Competitiveness The quality and relevance of the information that the company places for customers; The possibility of cross-selling and organizing multi-channel services for customers; - The quality of the internal platform of the company; The ability to connect external suppliers, partners, customers to the platform Current business model The assessment will allow leaders to understand which business model is current and what level of digital culture the company has in order to begin the transformation towards a digital business model.
Once the assessment has been completed, a vision for a new digital business model, a digital strategy and an implementation plan must be formulated. An analysis of the experience of companies that have carried out the transformation of business models shows that a digital strategy is not just measures to automate individual functions. Digital transformation must be part of the corporate strategy and requires the active involvement of the company's top management.
When developing a digital strategy, it is necessary to determine the main directions of change and the necessary resources (financial, technical, human, etc.) for the transition to a new business model. It is important to form a cross-functional team of specialists who will launch change projects. In addition, it is necessary to develop the digital skills and competencies of all employees in the company.
Many companies introduce additional levels of hierarchy for technicians: at one level, there are specialists who perform standard tasks, and at a higher level, those who perform more complex creative tasks.
For the implementation of projects related to the expansion of the list of services of energy enterprises, it is advisable to interact with technology companies and start-ups. These joint ventures enable energy companies to leverage digital skills and know-how that they lack.
Conclusions
Overall, it appears to us that the success of energy companies in the digital environment presupposes not a simple use of digital technologies to automate certain functional areas, but a change in production and management business processes, value chains, and ways of doing business.
It becomes apparent that digitalization provides energy companies and their leaders with opportunities to qualitatively change their business and increase its competitiveness through the introduction of digital business models. These models improve customer service and reduce costs through standardization and streamlining of operations.
Finally, we can conclude by stating that digital business models allow in the short term to improve the efficiency of current operating activities, and in the long term -to offer customers new customer value. | 3,637.2 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Ultrafast one‐pass FASTQ data preprocessing, quality control, and deduplication using fastp
Abstract A large amount of sequencing data is generated and processed every day with the continuous evolution of sequencing technology and the expansion of sequencing applications. One consequence of such sequencing data explosion is the increasing cost and complexity of data processing. The preprocessing of FASTQ data, which means removing adapter contamination, filtering low‐quality reads, and correcting wrongly represented bases, is an indispensable but resource intensive part of sequencing data analysis. Therefore, although a lot of software applications have been developed to solve this problem, bioinformatics scientists and engineers are still pursuing faster, simpler, and more energy‐efficient software. Several years ago, the author developed fastp, which is an ultrafast all‐in‐one FASTQ data preprocessor with many modern features. This software has been approved by many bioinformatics users and has been continuously maintained and updated. Since the first publication on fastp, it has been greatly improved, making it even faster and more powerful. For instance, the duplication evaluation module has been improved, and a new deduplication module has been added. This study aimed to introduce the new features of fastp and demonstrate how it was designed and implemented.
bioinformatics users and has been continuously maintained and updated.Since the first publication on fastp, it has been greatly improved, making it even faster and more powerful.For instance, the duplication evaluation module has been improved, and a new deduplication module has been added.This study aimed to introduce the new features of fastp and demonstrate how it was designed and implemented.
K E Y W O R D S
adapter, duplication, FASTQ, filtering, preprocessing, quality control Highlights • Fastp is an ultrafast tool that processes FASTQ data in a single pass.
• Fastp has been redesigned to make it faster and generate reproducible results.
INTRODUCTION
High-throughput sequencing technology has developed rapidly for nearly 20 years.Every year, various new sequencing platforms are launched, and the sequencing throughput continues to increase.Regardless of the sequencing platform and the sequencing throughput, FASTQ is adopted as the standard for the generated data of most high-throughput sequencing platforms.These FASTQ data need to go through quality control and a series of preprocessing steps before they can enter the downstream analysis to ensure the cleanness and accuracy of the data.In almost all application scenarios of sequencing, the effectiveness of data preprocessing usually greatly impacts the final analysis results [1].For example, circulating tumor DNA sequencing can be used for finding personalized therapy and detecting minimal residual disease.However, its result is seriously affected by sequencing data quality, as adapter contamination, sequencing noises, and other artifacts can impact the analysis accuracy, leading to incorrect treatment decisions [2].Many tools have been developed to address the problem of FASTQ data preprocessing and quality control.For example, Cutadapt [3] and Trimmomatic [4] have been widely used for adapter trimming and quality pruning.Many tools, such as FQC Dashboard [5] and NGS QC Toolkit [6], were developed for FASTQ data quality control.However, these tools have two major problems.One is that they are not efficient enough or consume too much memory.The other is that their features are not comprehensive enough, resulting in the need to go through the data multiple times with different software modules to complete the whole preprocessing and QC process.Fastp [7], which is an ultrafast all-in-one FASTQ data preprocessor with many modern features, was developed to solve these problems.Fastp can perform adapter removal, global or quality trimming, read filtering, unique molecular identifier processing, base correction, and many other actions within a single pass of data scanning.Since the first publication of fastp, it has been adopted by many community users.However, the earlier versions of fastp have some problems; for example, the execution results cannot be reproduced, the data compression speed is not ideal, and so on.The new fastp was developed by reconstructing the multithreaded computing architecture of fastp software and introducing a more efficient data compression and decompression algorithm, which is based on the highly optimized compression library igzip, to solve the aforementioned key problems.Besides architecture optimization, some new features have also been added to the new fastp, such as rapid deduplication.Fastp outputs an interactive HyperText Markup Language (HTML) report for manual checking and an informative JSON report for automatic quality control.Figure 1 shows a part of fastp's HTML report.
METHODS
Fastp is a multithreaded multifunctional preprocessor for FASTQ streams.It accepts single-end or paired-end FASTQ data as inputs and outputs the processed data along with the QC metric reports.Figure 2 shows how fastp processes paired-end FASTQ data.
Two worker threads are used for demonstration, but usually, much more worker threads (usually 3-16) are used to make preprocessing faster.The classical producer/ consumer thread model is applied, and specifically, the input and output read packs are stored in a singleproducer-single-consumer (SPSC) list for thread-safe communications.This SPSC list is implemented without any thread locks to support high-performance interthread communication.As shown in Figure 2, for a certain read pack, it is fixed that in which worker thread the read pack will be processed.This feature keeps the output and input data in the same order, making the output completely reproducible, which means the resulting output files will be identical if the command is run twice.
Most features shown in Figure 2 were introduced in the first publication on fastp.Some features, such as paired-end merging and deduplication, have been recently introduced.Applying paired-end merging is relatively simple after the overlapping analysis is complete.
Removing redundant reads is a necessary step for the NGS data analysis pipeline.Previous deduplication tools typically require the reads to be mapped to the reference genome first, which makes them inefficient and unsuitable for some applications that do not invoke sequence alignment.The new fastp implements a fast, accurate, and memory-efficient FASTQ-level deduplication. Figure 3 briefly illustrates the method by which fastp removes duplicated reads.
As shown in Figure 3, many bloom filter arrays (e.g., three) are used, and each has L bits.A hash function is defined accordingly for each array.A hash function maps a read sequence into an integer number p ∈ [0, L); therefore, a read R will be mapped to p 1 , p 2 , and p 3 .If Array1[p 1 ], Array2[p 2 ], and Array3[p 3 ] are all positive, then R is marked as duplicated; otherwise, Array1[p 1 ], Array2[p 2 ], and Array3[p 3 ] are set to be positive.For paired-end reads, the read pairs are combined first and then treated as same as singleend reads.
F I G U R E 2 Paired-end data processing workflow of fastp.The workflow can be simply divided into a decompressor, a preprocessor, and a compressor.The input-paired FASTQ files are decompressed individually to read packs, and each pack consists of fixed read records.Each worker thread picks the odd or even read packs one by one, processes the reads, makes some statistics, and outputs the clean data to the compressor in the same order.
RESULTS AND DISCUSSIONS
Compared with earlier versions, the new fastp provided more powerful performance and generated reproducible results.Although some new features have been added and speed has been improved, fastp still maintains a very small memory requirement.Typically, it requires only 4GB or less memory to run fastp, which makes it very suitable for cloud-based applications.Table 1 shows the performance comparison between Trimmomatic-0.39, fastp 0.20.0, and fastp 0.23.2.
A total of 11 paired-end sequencing data, which were generated from Illumina or MGI platforms, were evaluated on a typical computing server (CPU, 64-cores 2.5 GHz; RAM, 1024 GB).All experiments were conducted using eight worker threads with the default or recommended F I G U R E 3 How fastp determines whether a read is unique or duplicated.
CONCLUSION
The new architecture significantly enhanced the performance of fastp, making the results of fastp reproducible.In addition, fastp was extremely easy to get started with and could be easily obtained by downloading the prebuilt binaries or installing it via BioConda [8].
Considering its ultrahigh performance, rich functions, and simple usage, fastp should be one of the best choices for FASTQ data preprocessing and quality control.
AUTHOR CONTRIBUTIONS Shifu Chen developed the software and wrote the manuscript.
F I G U R E 1
Part of interactive statistical plots of fastp.(A) The per-cycle quality curves, and (B) the per-cycle base content curves.(C) The distribution of evaluated insert size, with a small portion of reads remaining unknown due to their paired reads that are not overlapped, which is usually due to the fragments being too long.(D) The statistics of overrepresented sequences, including their per-cycle distribution.NEW FASTP: FASTER AND MORE POWERFUL FOR FASTQ PREPROCESSING | 3 of 5 As shown in Table1, fastp 0.23.2 was ~9× faster than Trimmomatic-0.39, and ~1.8× faster than fastp 0.20.0.This result indicated that the new fastp took only 25 min to perform preprocessing and QC of paired-end data of 100 billion bases, which was usually the amount of whole-genome sequencing data. | 2,056.6 | 2023-05-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Computing with Words in Decision support Systems: An overview on Models and Applications
Decision making is inherent to mankind, as human beings daily face situations in which they should choose among different alternatives by means of reasoning and mental processes. Many of these decision problems are under uncertain environments with vague and imprecise information. This type of information is usually modelled by linguistic information because of the common use of language by the experts involved in the given decision situations, originating linguistic decision making . The use of linguistic information in decision making demands processes of Computing with Words to solve the related decision problems. Different methodologies and approaches have been proposed to accomplish such processes in an accurate and interpretable way. The good performance of linguistic computing dealing with uncertainty has caused a spread use of it in different types of decision based applications. This paper overviews the more significant and extended linguistic computing models due to its key role in linguistic decision making and a wide range of the most recent applications of linguistic decision support models.
Introduction
Human activities are very diverse and it is fairly common the necessity in many of them of decision making processes.Decision making can be seen as a process composed of different phases such as information gathering, analysis and selection based on different mental and reasoning processes that led to choose a suitable alternative among a set of possible alternatives in a given activity 24,54 .
Remarkably, decision making is a core area in a wide range of disciplines such as engineering, psychology, operations research, artificial intelligence, etc.Because of this variety of disciplines, decision problems have been classified in decision theory attending to their framework and elements 23 .Sometimes the solving process of a decision making problem is straightforward by using an algorithmic approach, these situations are so-called well-structured problems.However many decision problems cannot be solved in this way because decisions might be related to changing environments, the existence of vagueness and uncertainty in the decision framework, and so on.The latter problems, so-called illstructured problems 114 , are quite common in real problems of the aforementioned disciplines.
In this paper we focus on ill-structured decision problems dealing with vague and imprecise information, i.e., decision making under uncertainty.Classical decision theory provides probabilistic models to manage uncertainty in decision problems but in many of them it is easy to observe that a lot of aspects of these uncertainties have a non-probabilistic character since they are related to imprecision and vagueness of meanings 64 .Linguistic descriptors are often used by experts in such a type of problems.Therefore, taking into account that linguistic terms are fuzzy judgments rather than probabilistic values among the appropriate tools to overcome these difficulties of managing and modelling this type of uncertainties, fuzzy logic and fuzzy set theory 45,107 arise to facilitate the managing of uncertainty in decision processes 9,54 and the fuzzy linguistic approach 108,109,110 provides a direct way to represent the linguistic information by means of linguistic variables.The use of linguistic information thus enhances the reliability and flexibility of classical decision models 66 .
The use of linguistic information plays a key role not only in linguistic decision making 33,35,63 but also in other fields 2,43,44,75,85 that need to operate with linguistic information.Computing with words (CW) has recently become an important research topic in which different methodologies and approaches have been proposed.Since CW deals with words or sentences defined in a natural or artificial language instead of numbers, it emulates human cognitive processes to improve solving processes of problems dealing with uncertainty.Consequently, CW has been applied as computational basis to lin-guistic decision making 35 , because it provides tools close to human beings reasoning processes related to decision making, which improve the resolution of decision making under uncertainty as linguistic decision making.
This paper overviews the most wide-spread methodologies of CW used in linguistic decision making 16,35,37,89,97 , including a short list of those 5,47,84,87,88 that are interesting for specific decision situations but they have not been intensively used yet.It further presents in depth the most recent decision applications based on CW over the last years regarding real world applications.
The paper is structured as follows, Section 2 overviews CW and its use in decision making.Section 3 reviews both linguistic modelling and computing, specially the computing models most wideused in linguistic decision making.Section 4 lists recent applications based on linguistic decision making.And Section 5 concludes the paper.
Computing with Words in Decision Making
In many real decision situations is straightforward the use of linguistic information due to the nature of different aspects of the decision problem.In such situations one common approach to model the linguistic information is the fuzzy linguistic approach 108,109,110 that uses the fuzzy set theory 107 to manage the uncertainty and model the information.
Zadeh 108 introduced the concept of linguistic variable as "a variable whose values are not numbers but words or sentences in a natural or artificial language".A linguistic value is less precise than a number but it is closer to human cognitive processes used to solve successfully problems dealing with uncertainty.Formally a linguistic variable is defined as follows.
Definition 1 108 : A linguistic variable is characterized by a quintuple (H,T(H),U,G,M) in which H is the name of the variable; T(H) (or simply T) denotes the term set of H, i.e., the set of names of linguistic values of H, with each value being a fuzzy variable denoted generically by X and ranging across a universe of discourse U which is associated with the base variable u; G is a syntactic rule (which usu-ally takes the form of a grammar) for generating the names of values of H; and M is a semantic rule for associating its meaning with each H, M(X), which is a fuzzy subset of U.
Fig. 1 shows a linguistic term set with the syntax and semantics of their terms.One crucial aspect to determine the validity of a CW approach is the selection of the membership functions for the linguistic term set.There exist different approaches to choose the linguistic descriptors and different ways to define their semantics 101,109,110 .
It is necessary to analyze the phases of a linguistic decision scheme as long as the linguistic information is formally modelled.A common decision resolution scheme consists of two main phases 76 : 1.An aggregation phase that aggregates the values provided by the experts to obtain a collective assessment for the alternatives.
2. An exploitation phase of the collective assessments to rank, sort or choose the best one/s among the alternatives.
Herrera and Herrera-Viedma 35 analyzed how should the previous scheme change in linguistic decision making?They pointed out the necessity of introducing two new steps previously to the application of both the aggregation and exploitation phases by the following resolution scheme: 1.The choice of the linguistic term set with its semantics.It establishes the linguistic expression domain in which experts provide their linguistic assessments about alternatives according to their knowledge.
2. The choice of the aggregation operator of linguistic information.An appropriate aggregation operator of linguistic information is chosen for aggregating the linguistic assessments.
The appropriateness of the operator depends on each single decision problem.
3. The choice of the best alternatives.The best alternative/s are chosen according to the linguistic assessments provided by the experts.
It is carried out by the two phases of the common resolution scheme: (a) Aggregation phase of linguistic information: It obtains a linguistic collective assessment for each alternative by aggregating the experts linguistic assessments under the chosen linguistic aggregation operator.
(b) Exploitation phase: It ranks the alternatives by using the collective linguistic assessment obtained in the previous phase in order to choose the best alternative/s.
Looking at this linguistic solution scheme, it is clear the necessity of linguistic computational models that allow computations with linguistic information in order to obtain accurate results and provide a representation that facilitates the interpretability of them.
Linguistic Computational Models
Due to the relevance of linguistic decision making in real problems and the necessity of methodologies for CW, there exist different linguistic computational models.We shall pay more attention to those that have been wide-used in linguistic decision making.We consider the analysis not only of the computational model but also of its linguistic representation utilized to represent the results.
A) Representation
This computational model is based on the fuzzy linguistic approach and represents the linguistic information according to Definition 1 (See Fig. 1).
B) Computation
This computational model makes the computations directly on the membership functions of the linguistic terms by using the Extension Principle 45 .The fuzzy arithmetic provides as result of a computation, F, regarding a set of n linguistic labels in the term set, T (H), a fuzzy number, F(R), that usually does not match any linguistic label in T (H).From these results we have: i) In those problems that accuracy outweighs interpretability (ranking purposes).The results are expressed by the fuzzy numbers themselves using fuzzy ranking procedures to obtain a final order of the alternatives 1,27 .
ii) If an interpretable and linguistic result is demanded then an approximation function app 1 (•) is applied to associate the fuzzy result F(R) to a label in T (H) 16,58,102 : The approximation process implies a loss of information and lack of accuracy of the results.
A) Representation
This computational model makes use of type-2 fuzzy sets (see Fig. 2) to model the linguistic assessments 67,86,87 .
Fig. 2. Linguistic terms represented by Type-2 fuzzy sets
The use of type-2 fuzzy sets has been justified in different ways: • In 86 : "Type-1 representation is a 'reductionist' approach for it discards the spread of membership values by averaging or curve fitting techniques and hence, camouflages the 'uncertainty' embedded in the spread of membership values." • In 68 "Words mean different things to different people and so are uncertain.We therefore need a fuzzy set model for a word that has the potential to capture its uncertainties, and an interval type-2 fuzzy set should be used as a fuzzy set model of a word."
B) Computation
The majority of the contributions in the field use interval type-2 fuzzy sets (a particular kind of type-2 fuzzy sets) which maintain the uncertainty modelling properties of general type-2 fuzzy sets but reducing the computational efforts that are needed to operate with them.In 20,113 the Linguistic Weighted Average and the Linguistic OWA operators based on the type-2 representation are presented.They can be seen as respective extensions of the Fuzzy Weighted Aggregation and OWA operators where both weights and attributes are words modelled by interval type-2 fuzzy sets.
As the previous linguistic model revised, this type-2 fuzzy sets based model needs to approximate the resulting type-2 fuzzy set from a linguistic operation by mapping the result into a linguistic assessment producing a loss of information.
Linguistic symbolic computational models based on ordinal scales
Symbolic models have been widely used in CW because of their simple computational processes and high interpretability.The initial proposal for a symbolic model 99 uses max-min operators, and new symbolic proposals 17,97 introduce aggregation based symbolic models.We shall review different linguistic symbolic computational models based on ordinal scales.
A) Representation
This model 99 represents the information according to the fuzzy linguistic approach (See Fig. 1) but imposes a linear ordering to the linguistic term set S = {s 1 , s 2 , . . ., s g } such that s i s j ⇔ i j.
B) Computation
It uses the ordered structure of the linguistic term set to accomplish symbolic computations in such ordered linguistic scales that the classical operators Max, Min and Neg are proposed: where g is the cardinality of S.
More operators have been proposed for this model, Yager 100,103 studied several aggregation operators for ordinal information such as weighted norm operators, uninorm operators and ordinal mean type operators.Buckley 3 proposed different variations of the median, max and min operators to aggregate linguistic opinions and criteria.
A) Representation
This model 17 is an extension of the previous one that is based on the same representation basis.
B) Computation
It provides a wider range of aggregation operators by using a convex combination of linguistic labels 17 , which directly acts over the label indexes, {0, .., g}, of the linguistic term set, S = {s 0 , . . ., s g }, in a recursive way producing a real value on the granularity interval, [0, g], of the linguistic term set S. Note that this model usually assumes that the cardinality of the linguistic term set is odd and that linguistic labels are symmetrically placed around a middle term.As a result of a convex combination aggregation does not match usually with a term of the label set S, it is also necessary to introduce an approximation function app 2 (•) to obtain a solution in the term set S: Similarly to the model presented in Section 3.1, the approximation process produces a loss of information in the final results.
Aggregation operators based on this linguistic model are the Linguistic Ordered Weighted Averaging (LOWA) operator 36 (based on the OWA operator and the convex combination of linguistic labels), the Linguistic Weighted Disjunction (LWD), Linguistic Weighted Conjunction (LWC), the Linguistic Weighted Averaging (LWA) 34 , the Linguistic Aggregation of Majority Additive (LAMA) operator 73 and the Majority Guided Induced Linguistic Aggregation Operators 41 .
A) Representation
Xu 97 introduced this model to increase the accuracy and operators in processes of CW.To do so, the discrete term set S = {s − g 2 , . . ., s 0 , . . ., s g 2 }, with g + 1 being the cardinality of S, is extended into a continuous term set S = {s α |α ∈ [−t,t]}, where t (t >> g/2) is a sufficiently large positive integer.If s α ∈ S, then s α is called an original linguistic term, otherwise, s α is called a virtual linguistic term.Fig. 3 shows a discrete term set S = {s −3 , . . ., s 3 } (original linguistic terms) that is extended to a continuous term set in which virtual linguistic terms as s −0.3 ∈ [−3, 3] can be obtained and manipulated to avoid loss of information.According to Xu this extension allows to preserve all given information in the problem (avoiding the loss of information presents in classical linguistic symbolic computational models).Xu stated that, "in general, the decision maker uses the original linguistic terms to evaluate alternatives, and the virtual linguistic terms only appear in operation".
B) Computation
Let s α , s β ∈ S, be any two linguistic terms and µ, µ 1 , µ 2 ∈ [0, 1].Then to accomplish processes of CW with this representation model, Xu introduced the following operational laws 96,98 : Importantly, this symbolic computational model uses a term set that changes during the computations as new virtual terms are created in the computing processes.The appearance of virtual terms without syntax either semantics limits the interpretability of the results of this computational model.Therefore, this model also needs an approximation process, implying lack of accuracy, if the results of the operations are virtual linguistic terms (and they will usually be virtual ones) and the problem looks for interpretable final results in the original linguistic term set.Otherwise they can be used for ranking purposes.
A 2-tuple Linguistic computational model: A symbolic model extending the use of indexes
The 2-tuple linguistic model 37 is a symbolic computational one introduced by Herrera and Martínez in order to improve the accuracy and facilitate the processes of CW by treating the linguistic domain as continuous but keeping the linguistic basis (syntax and semantics).To do so, this model extends the fuzzy linguistic representation adding a new parameter.
A) Representation
The modelling of the linguistic information is based on the concept of symbolic translation and uses it for representing the linguistic information by means of a pair of values, so-called linguistic 2tuple, (s i , α) where s i is a linguistic term and α is a numerical value representing the Symbolic Translation.Definition 2 37 : Let β be the result of symbolic aggregation over a set of labels {s k ∈ S, k = {1, ..., n}} assessed in the linguistic term set S = {s 0 , . . ., s g }, hence β ∈ [0, g].Let i = round(β ) and α = β − i be two values such that i represents a term index in the interval of granularity {0, 1, ..., g} and α ∈ [−0.5, 0.5) is the "difference of information" between β and the index of the closest linguistic term s i in S. α is then so-called a Symbolic Translation.
This representation model defines a set of transformation functions between numeric values and linguistic 2-tuples to facilitate linguistic computational processes 37 .Definition 3 37 : Let S = {s 0 , . . ., s g } and β ∈ [0, g] be a set of linguistic terms and the result of a symbolic aggregation operation respectively.The 2tuple associated with β is then obtained by the function ∆ : [0, g] −→ S × [−0.5, 0.5) defined as: where round assigns to β the integer number, i ∈ {0, 1, . . ., g}, closest to β .
Obviously the conversion between a linguistic term into a linguistic 2-tuple consists of adding a value 0 as symbolic translation: s i ∈ S =⇒ (s i , 0).
B) Computation
Together the representation model, a linguistic computational approach based on the functions ∆ and ∆ −1 was also defined in 37 , where some classical aggregation operators as the Arithmetic Mean, the Weighted Average Operator, the Ordered Weighted Aggregation (OWA) operator, the LOWA operator were extended for the linguistic 2-tuple.Other aggregation operators for the linguistic 2-tuple were later defined as the Lattice-based Linguistic-Valued Weighted Aggregation (LVWA) 51 and the LAMA operator 73 .
Proportional 2-tuple linguistic computational model: An extension of the linguistic 2-tuple model
Even though the 2-tuple is a quite recent model, it has attracted attention in the specialized literature and some extensions to the 2-tuple linguistic model have been developed.The proportional 2-tuple introduced by Wang and Hao 89 develops a new way to represent the linguistic information that is a generalization and extension of 2-tuple linguistic representation model 37 .
A) Representation
This model represents the linguistic informa-tion by means of proportional 2-tuple, such as (0.2A, 0.8B) for the case when someone's grades in the answer scripts of a whole course are distributed as 20%A and 80%B.The authors pointed out that if B were used as the approximative grade then some performance information would be lost.This proportional 2-tuple model is based on the concept of symbolic proportion 89 .Definition 4. Let S = {s 0 , s 1 , ..., s g } be an ordinal term set, I = [0, 1] and ) where S is the ordered set of g + 1 ordinal terms {s 0 , ..., s g }.Given a pair (s i , s i+1 ) of two successive ordinal terms of S, any two elements (α, s i ), (β , s i+1 ) of IS is so-called a symbolic proportion pair and α, β are a pair of symbolic proportions of the pair and the set of all the symbolic proportion pairs is denoted by S, i.e., S = {(αs i , (1 − α)s i+1 ) : α ∈ [0, 1] and i = 0, 1, . . ., g − 1}.
S is called the ordinal proportional 2-tuple set generated by S and the members of S, ordinal proportional 2-tuple, which are used to represent the ordinal information for CW.
In a similar way to the symbolic 2-tuple 37 , Wang and Hao introduced functions to facilitate the computations with this type of representation.
Definition 5. Let S = {s 0 , s 1 , . . ., s g } be an ordinal term set and S be the ordinal proportional 2tuple set generated by S. The function π : S → [0, g] is defined by where i = {0, 1, . . ., g − 1}, α ∈ [0, 1] and π is called the position index function of ordinal 2-tuples Note that, under the identification convention by Eq. ( 2), the position index function π becomes a one-to-one mapping from S to [0, g] and its inverse π −1 : [0, g] → S is defined by where i = E(x), E is the integer part function, β = x − i.
B) Computation
To operate with linguistic information under proportional 2-tuple contexts, Wang and Hao expanded the computational techniques for symbolic information to proportional 2-tuple and underlying definitions of linguistic labels and linguistic variables are taken into account in the process of aggregating linguistic information by assigning canonical characteristic values of the corresponding linguistic labels 89,90 .
Others 2-tuple based linguistic computational models
Quite recently two new linguistic computational models based on extensions and/or hybridizing with the 2-tuple linguistic representation model have been presented in 19,49 .
• An extended 2-tuple fuzzy linguistic representation that fuses the virtual linguistic terms 97 (see Section 3.3.3)and the linguistic 2-tuple model 37 is presented by Deng-Feng 49 that transforms virtual terms into original linguistic values by using a representation based on the 2-tuple so-called extended 2-tuple.This representation and the computational model based on virtual linguistic terms are used to introduce a Multi-attribute Group Decision Making method based on the generalized induced OWA operators.
• Dong et al. 19 introduced the concept of numerical scale, which extends the linguistic 2-tuple 37 and the proportional 2-tuple models 89 , together with the concepts of transitive calibration matrix its consistent index and an optimization model to compute the numerical scale of the linguistic term set from the previous matrix.With the aim to complete the 2-tuple based models for CW and make the information of the decision maker more consistent in different decision situations.
Others linguistic computational models
As it was aforementioned, because of the high attention that CW has received in the last years additionally to the previous wide-used models in linguistic decision making, other new approaches and methodologies for CW have been introduced in the specialized literature: • Lawry presents both an alternative approach to CW based on mass assignment theory and a new framework for linguistic modelling that avoid some of the complexity problems that arise by the use of the extension principle in Zadeh's CW methodology 46,47,48 .
• Rubin defines CW as a symbolic generalization of fuzzy logic 77 .
• Ying et al. propose a new formal model for CW based on fuzzy automatas whose inputs are strings of fuzzy subsets of the input alphabet 5,106 .
• Wang et al. extend Ying's work considering CW via a different computational model, in particular, Turing machines 88 .
• Tang et al. present a new linguistic modelling that can be applied in CW which does not directly rely on fuzzy sets to model the meaning of natural language terms but uses some fuzzy relations between the linguistic labels to model their semantics 84 .
• Türks ¸en proposes the use of meta-linguistic axioms as a foundation for CW as an extension of fuzzy sets and logic theory 87 .
Finally, we remark that the management of perceptions is also highly related to linguistic information and to human cognitive processes, a historical review of computing with perceptions was introduced by Mendel 69 .
Recent Applications of CW in Decision Making
Once we have reviewed the preponderant position that the linguistic information plays in decision making under uncertainty and the different computing models proposed in the literature to manage such information.In this section we review recent decision applications (published in the specialized literature in 2007-2010) based on linguistic models.Despite the wide range of applications in which linguistic decision based models have been applied, we have organized the application papers according to the following areas: • Industrial Applications: Different key strategic selection industrial processes that are complex to solve due to their uncertain environments have been considered under linguistic decision models.Chuu 13
2009
• Internet based services: The viral growth of Internet has provoked the necessity of solving different problems related to its services, such as, to retrieve customized products or information from huge data bases or to manage social networks issues in the web 2.0.For all these problems different linguistic decision based solutions have been proposed.• Evaluation: Decision analysis has been widely used in evaluation processes.The existence of real evaluation problems dealing with uncertain, vague and imprecise information that fits pretty well linguistic decision analysis has derived in many linguistic evaluation proposals.010
Conclusions
The frequency that human beings face decision making problems defined under uncertain situations, in which the use of linguistic information to describe such uncertainty has produced that linguistic decision making became a common process in real world applications.The modelling and treatment of linguistic information for necessary computing with words processes are crucial.Therefore in this paper we have reviewed different linguistic computing models with their respective linguistic representations paying more attention to those ones that have been widely used in linguistic decision making.We have not described the decision models based on CW, which can be found in the review presented by Herrera et al. 33 .Eventually to show the usability and advantages that the linguistic information produces in decision making, we have presented a not exhaustive but rather a wide and recent list of applications.
An associated website at http://sci2s.ugr.es/CWDM/ includes a more exhaustive list of most publications in the specialized literature about the topic.
Fig. 1 .
Fig. 1.A seven-term set with its semantics
Fig. 3 .
Fig. 3. Example of the linguistic model proposed by Xu 97
Fig. 4 .
Fig. 4. A linguistic 2-tuple representation Figure 4 shows an example of a 2-tuple linguistic label that expresses the equivalent information of the result of a symbolic aggregation operation.Let us suppose that β = 3.25 is a value representing the result of a symbolic aggregation operation on the set of labels, S = {s 0 : Nothing, . . ., s 6 : Per f ect}, then the 2-tuple that expresses the equivalent information to β is (Medium, .25).
Table 1 .
Industrial applications
Table 2 .
Internet based linguistic applications • Resource management: The management of resources is a really complex task.Moreover, if the
Table 3 .
Resource management linguistic based applications
Table 5 .
Other Applications | 6,181 | 2010-10-01T00:00:00.000 | [
"Computer Science"
] |
Dense Short Solution Segments from Monotonic Delayed Arguments
We construct a delay functional d on an open subset of the space Cr1=C1([-r,0],R)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$C^1_r=C^1([-r,0],\mathbb {R})$$\end{document} and find h∈(0,r)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$h\in (0,r)$$\end{document} so that the equation x′(t)=-x(t-d(xt))\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\begin{aligned} x'(t)=-x(t-d(x_t)) \end{aligned}$$\end{document}defines a continuous semiflow of continuously differentiable solution operators on the solution manifold X={ϕ∈Cr1:ϕ′(0)=-ϕ(-d(ϕ))},\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\begin{aligned} X=\{\phi \in C^1_r:\phi '(0)=-\phi (-d(\phi ))\}, \end{aligned}$$\end{document}and along each solution the delayed argument t-d(xt)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$t-d(x_t)$$\end{document} is strictly increasing, and there exists a solution whose short segmentsxt,short=x(t+·)∈Ch2,t≥0,\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\begin{aligned} x_{t,short}=x(t+\cdot )\in C^2_h,\quad t\ge 0, \end{aligned}$$\end{document}are dense in an infinite-dimensional subset of the space Ch2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$C^2_h$$\end{document}. The result supplements earlier work on complicated motion caused by state-dependent delay with oscillatory delayed arguments.
with α > 0 and constant time lag r > 0. This is the simplest delay differential equation modelling negative feedback with respect to the zero solution. Let C 0 r denote the Banach space of continuous functions [−r , 0] → R with the maximum norm, |φ| 0,r = max −r ≤t≤0 |φ(t)|.
The results in [13,14] established another kind of complicated solution behaviour, namely, the existence of delay functionals d and parameters α > 0 so that for a positive number h < r there are solutions whose short solution segments are dense in open subsets of the space C 1 h . In [13] density of short segments in the whole space C 1 h was achieved for a continuous delay functional on a set Y ⊂ C 1 r which is large in some sense but not open, nor a differentiable submanifold. Because of this lack of regularity results from [8,9] on well-posedness of initial value problems and on differentiability of solutions with respect to initial data do not apply.
In [14] we constructed a continuously differentiable delay functional d : U → [0, r ], U ⊂ C 1 r open, so that the results from [8] apply, and found h ∈ (0, r ) so that the previous equation with α = 1, namely, has a solution x : [−r , ∞) → R whose short segments are dense in an open subset of the space C 1 h . The construction involves that the delayed argument function along the solution x is not monotonic, and this oscillatory behaviour seems crucial for density of short segments in an open subset of the space C 1 h . Before stating the result of the present paper let us mention that equations with nonconstant, state-dependent delay are not covered by the theory with state space C 0 r which is familiar from monographs on delay differential equations [1][2][3]. We recall what was shown in [8] for delay differential equations in the general form under hypotheses designed for applications to examples with state-dependent delay. Let C 0 r ,n and C 1 r ,n denote the analogues of the spaces C 0 r and C 1 r , for maps [−r , 0] → R n . Assume f : U → R n , U ⊂ C 1 r ,n open, is continuously differentiable so that (e) each derivative D f (φ) : C 1 r ,n → R n , φ ∈ U , has a linear extension D e f (φ) : C 0 r ,n → R n and the map U × C 0 r ,n (φ, χ) → D e f (φ)χ ∈ R n is continuous.
The extension property (e) is a variant of the notion of being almost Fréchet differentiable for maps C 0 r ,n ⊃ V → R n which was introduced in [7]. Suppose also there exists φ ∈ U with φ (0) = f (φ). Then the nonempty set is a continuously differentiable submanifold with codimension n in C 1 r ,n , and each initial value problem with the said maximal solution x = x φ , defines a continuous semiflow of continuously differentiable solution operators In the present paper we prove the following result on complicated motion caused by a delay functional so that the delayed argument functions along solutions of Eq. (1.1) are monotonic.
) ∈ R is continuously differentiable and has property (e), and for each φ ∈ X f the delayed argument function
is strictly increasing.
Here C 2 h denotes the Banach space of twice continuously differentiable functions ψ : [−h, 0] → R, with the norm given by |ψ| 2,r = 2 k=0 max −r ≤t≤0 |ψ (k) (t)|. A different result on complicated motion caused by state-dependent delay with monotonic delayed argument functions has recently been obtained in [5].
The proof of Theorem -the delayed argument function [0, t 5 ] t → t − n (t) ∈ R along the delay function n is strictly increasing, -on some subinterval of length h in [0, t 5 ] the function x (n) coincides with a translate of a member p n of a sequence which is dense in A, -on some subinterval of length 2s in [0, t 5 ] the function x (n) coincides with a translate of κ n = κ s,n .
In Sect. 5 shifted copies of the functions n and of the functions ±x (n) are concatenated, respectively, and yield a twice continuously differentiable function x : [t b , ∞) → R and a continuously differentiable delay function on [0, ∞) which is bounded by some r > max{h, −t b }. A twice continuously differentiable extension of the function x to the ray [−r , ∞) → R satisfies the linear equation converts the delay function into a delay functional d on the trace {x t ∈ C 2 r : t ≥ r }. Sections 6, 7, and 8 prepare the extension of this functional to an open neighbourhood N of the trace {x t ∈ C 2 r : ( j r − 1)t 5 ≤ t} in the space C 1 r , with an integer j r ≥ 2 so that r < ( j r − 1)t 5 . Section 6 contains an ingredient of the construction which will be used in the final Sect. 9, namely, separation of nonadjacent arcs The separation result is based on the properties of the functions κ s,n from Sect. 3 whose translates appear as restrictions of x on a sequence of mutually disjoint intervals tending to infinity.
The constructions in Sects. 2, 3, 4, 5, and 6 are to some extent parallel to constructions in [14]. The next steps in Sects. 7 and 8 are rather different from their counterparts in [14]. The new tool, introduced in Sect. 7, is a bundle of transversal hyperplanes K t , t > 0, along the curve (0, ∞) t → x t ∈ C 0 r . Working with the bundle allows for an extension of the delay functional from an arc {x t ∈ C 2 r : (k − 1)t 5 ≤ t ≤ kt 5 }, j r ≤ k ∈ N, to a kind of tubular neighbourhood U k ⊂ C 0 r (Sect. 8), and for the arrangement of compatibility relations on overlapping domains U k ∩ U k+1 , in ways which are simpler than corresponding procedures in [14].
Section 9 begins with the definition of the domain N ⊂ C 1 r and the functional d : N → (0, r ), and completes the proof of Theorem 1.1. The verification that the functional f : N → R in Theorem 1.1 has property (e) uses that the delay functional d : N → (0, r ) has property (e). The latter is achieved by means of the following proposition whose statement involves the injective linear continuous inclusion map Notation, preliminaries. A sequence in a metric space is called dense if each point of the metric space is an accumulation point of the sequence. A metric space is called separable if it contains a dense sequence.
For In case a = −r and b = 0, the abbreviations C j r = C j −r ,0 and | · | j,r = | · | j,−r ,0 are used. If functions φ ∈ C 2 r and φ ∈ C 1 r are considered as elements of the ambient space C 0 r then we use φ ∈ C 0 r or J φ ∈ C 0 r , depending on which form makes an argument more transparent.
For r > 0 the evaluation map is continuous but not locally Lipschitz continuous, and the evaluation map , see e. g. [4,8].
In Sect. 8 below the following is used.
Proof By continuity there exists t a ∈ (a, t) with c([t a , t]) ⊂ U /2 (c(t)). The compact sets c([a, t a ]) and c ([t, b]) are disjoint, which gives Choose ρ ∈ 0, 2 with The assumption u a < t a yields a contradiction to the inequality which means z ∈ U (c(t)).
Separability
Let h > 0 be given. The restrictions of polynomials R → R to the interval [−h, 0] are dense in C 2 h , which is an easy consequence of the Weierstraß approximation theorem. Let P 5 ⊂ C 2 h denote the subspace of restrictions of polynomials of degree not larger than 5 and let C 2 h−0 ⊂ C 2 h denote the closed subspace given by the equations Then dim P 5 = 6 and which follows from the fact that given φ ∈ C 2 h there exists a unique p ∈ P 5 satisfying
Proposition 2.1 Let an open set U
Proof The restricted polynomials with rational coefficients form a sequence which is dense in C 2 h . Projection along P 5 onto C 2 h−0 yields a sequence which is dense in C 2 h−0 , and translation by adding p * results in a sequence which is dense in p * +C 2 h−0 . The members of this sequence which belong to U form a sequence which is dense in A.
Example 2.2 For given reals
Notice that We add the obvious fact that the dense sequence provided by Proposition 2.1 is dense in
Differentiable Functions with Separated Shifted Copies
Let s > 0 be given. We construct a sequence of functions κ n ∈ C 2 −s,s , n ∈ N, so that shifted copies of these functions keep a positive minimal distance from each other with espect to the norm | · | 1,−s,s .
Let also positive reals a, ξ, η be given and choose ∈ 0, a 4 . There exists Proof Let positive integers n = k and t ∈ − s 2 , 0 be given. In case n > k consider In case k > n set u = −t + s 2 n+1 . Then and observe that Using Proposition 3.1 and < a 4 we get the following result.
The Delay Function on a Compact Interval
In this section we find which in the next section will be used to form a solution of Eq. (1.2) whose short segments are dense in the set A ∪ (−A). Choose reals such that there exists t 2 > 1 with bt 2 > ξ > at 2 , Choose we can choose v in such a way that also b + Fig. 2 The function x ∈ C 2 The equation so that x(t 2 ) = −ξ , and let t 1 ∈ (0, t 2 ) be given by The functions in A are negative and strictly decreasing, with the derivative strictly increasing. Proposition 2.1 guarantees a sequence ( in case n odd, Obviously, It follows that the equation In particular, .
Fix some s > 0 and recall κ n ∈ C 2 −s,s from Sect. 3, with a, ξ, η from the present section. Then and define an extension of x (n) to a map in By the symmetry of κ n , It follows that the equation Setting we get an extension of n ∈ C 1 0,t 2 to a nonnegative map in C 1 0,t 3 , with for example, t 4 = t 3 + 1.
Proof Consider the discontinuous function g 0 : [t 3 , t 4 ] → R given by g 0 (t 3 ) = t 1 and g 0 (t) = t 3 for t 3 < t ≤ t 4 . There is a sequence of functions g j ∈ C 1 which converge pointwise to g 0 . For every j ∈ N, , and the Lebesgue dominated convergence theorem yields Similarly there is a sequence of functions h j ∈ C 1 t 3 ,t 4 with the same properties as g j which converge pointwise to h 0 : [t 3 , t 4 ] → R given by h 0 (t 4 ) = t 3 and h 0 (t) = t 1 for t 3 ≤ t < t 4 , and The limits satisfy due to the choice of t 4 . So there exists j ∈ N with The function is continuous. Using the intermediate value theorem we find some θ ∈ (0, 1) with Notice that the convex combination k(θ, ·) ∈ C 1 t 3 ,t 4 shares the properties of g j and h j . Define δ n * by t − δ n * (t) = k(θ, t).
It follows that the equation defines a continuation of n ∈ C 1 0,t 4 to a nonnegative function in C 1 0,t 5 so that we have and
Concatenation
All functions x (n) ∈ C 2 t b ,t 5 , n ∈ N, coincide on the set we have t 4 = t 5 + t b , and for every n ∈ N, Moreover, for every n ∈ N the nonnegative function n ∈ C 1 0,t 5 satisfies and we have Therefore the relations The short segments x (n−1)t 5 +t d ,short = p n+1 2 ∈ C 2 h , n ∈ N odd, which are given by for each n ∈ N and set The curvex r for all t > 0, compare [13, Proposition 4.1]. As t 2 +t 3 2 is the only zero of (x (n) ) : [t b , t 5 ] → R, for any n ∈ N, we have
Proposition 5.1 The restriction of the curvex to the ray [r , ∞) is injective.
Proof Assume r ≤ t ≤ u andx(t) =x(u). Then There are n ∈ N and k ∈ N with (n − 1)t 5 ≤ t < nt 5 and (k − 1)t 5 ≤ u < kt 5 .
From t 5 < r ≤ t we have n ≥ 2, and from t ≤ u we have n ≤ k.
As the interval (u − t 5 , u] contains exactly one zero of x, situated at (k − 1)t 5 , we get u + w = (k − 1)t 5 , hence 2. The case (n − 1)t 5 + t 3 ≤ t (< nt 5 ). Using Part 1 of the proof we get For every w ∈ [−s, s] we obtain and it follows that n = k. By Part 1, t = u.
Delay Functionals on C 0 r -Neighbourhoods of Compact Arcs
The curvex We have with the projections onto the first and second component, respectively, with the continuous linear evaluation maps and with the multiplication m : which follows from the fact that the zeros of x in [t b , ∞) are given by 1 In the sequel we show that every compact arc Jx ([u, v]) ⊂ C 0 r , r < u < v, has a neighbourhood U in C 0 r on which the representation φ = x t + κ with κ ∈ K t , t close to [u, v], and κ = φ − x t small in C 0 r is unique. Knowing this we shall define a delay functional d U : Then d is constant along each fibre (x t + K t ) ∩ U , with t close to [u, v]. Obviously, for all φ ∈ C 0 r and all σ > 0.
For every φ ∈ U (x t ) and for σ = τ (φ), Proof Let t > 0 be given. The map is continuously differentiable and satisfies f (t, x t ) = 0. Using the formula defining the map L we infer hence Apply the Implicit Function Theorem and obtain δ ∈ (0, t), > 0, and a continuously differentiable map τ with the properties stated in the first sentence of the proposition. Notice that one can achieve ≤ δ. For φ ∈ U (x t ) and σ = τ (φ) we get Proposition 7.2 (Fibre representation along compact arcs) Let reals u < v in (r , ∞) and n ∈ N be given. There exist positive ρ = ρ(u, v, n) ≤ 1 n so that for every φ ∈ U ρ (Jx ([u, v])) there is one and only one |x w | 0,r .
2. Apply Proposition 7.1 to each w ∈ [u, v], and obtain = w and δ = δ w and τ = τ w according to Proposition 7.1. Notice that one my assume Using the compactness of Jx ([u, v]) ⊂ C 0 r one finds a strictly increasing finite sequence (w j )¯j 1 in [u, v] so that the associated neighbourhoods U w j (x(w j )), j ∈ {1, . . . ,j}, form a covering of Jx ([u, v]). There exists a positive real number Notice that For every φ ∈ U ρ (Jx ([u, v]) we obtain (at least one) Or, the set R n ⊂ (0, ∞) of all ρ ∈ 0, 1 n such that for every φ ∈ U ρ (Jx ([u, v])) there exist and |φ − x σ | 0,r ≤ 1 n is unbounded. We derive a contradiction. The elements of I form a strictly increasing sequence (n k ) ∞ 1 . For every k ∈ N select some φ k in U ρ (Jx ([u, v])) with ρ = ρ n k and σ (1) k < σ (2) Using the compactness of, say, [0, v + 1], and successively choosing subsequences we find a strictly increasing sequence (k κ ) ∞ 1 so that the equations define two sequences which converge to z (1) ≤ z (2) in [0, v + 1], respectively. Necessarily, (2) . Apply Proposition 7.1 to t = z (1) = z (2) and choose positive ≤ δ according to this proposition. For κ ∈ N sufficiently large we have κ belong to (t − δ, t + δ), and This yields a contradiction to the first part of Proposition 7.1. 4. Combining the results of Parts 1 and 2 we obtain n(u, v) ∈ N such that for every integer n ≥ n(u, v) and for every φ ∈ U ρ n (Jx ([u, v])) there exists one and only one n . Now the assertion of Proposition 7.2 follows easily. Proposition 7.2 yields that for u < v in (r , ∞) and n ∈ N there exists ρ ≤ 1 n so that the relations (Jx([u, v])) and for every σ ∈ u − 1 n , v + 1 n ∩ (0, ∞), The construction is iterative. We carry out the initial step and the step thereafter. This second step is the model for the step from statements for general k ≥ j to statements for k + 1. 1. The initial step for k = j. 1.1. Apply Proposition 7.1 with t = jt 5 atx(t), choose δ = δ( j) > 0, = ( j) ∈ (0, δ], and a map τ = τ j from U (x(t)) ⊂ C 0 r into (t − δ, t + δ) accordingly. By continuity there are n = n( j) ∈ N witĥ n .
An application of Proposition 1.
Set U j = U η (J X j ) and s j = s u,v,η .
Proof We have which shows that f is continuously differentiable. Recall D 1 ev 1 r (φ, t)φ =φ(t) and D 2 ev 1 r (φ, t)t =tφ (t). The chain rule yields For φ ∈ N the equation defines a linear extension D e f (φ) : C 0 r → R of the derivative D f (φ) : C 1 r → R. Using the continuity of the evaluation map C 0 r × [−r , 0] (χ, t) → χ(t) ∈ R and property (e) of d one finds that the map N × C 0 r (φ, χ) → D e f (φ)χ ∈ R is continuous.
For t ≥ j r t 5 we have x t ∈ N and, due to Eq. (9.1), This implies that the twice continuously differentiable function Finally we show that for each φ ∈ X f the delayed argument function is strictly increasing. Let φ ∈ X f and t ∈ (0, t φ ) be given and set y = x φ . As y : [−r , t φ ) → R is continuously differentiable the curveỹ : [0, t φ ) t → J y t ∈ C 0 r is continuously differentiable with Dỹ(u)1 = y u for all u > 0, compare [13,Proposition 4.1]. The segment y t ∈ X f ⊂ N is contained in N k for some integer k ≥ j r . By continuity of the flowline [0, t φ ) u → y u ∈ X f ⊂ N ⊂ C 1 r , there is > 0 with y u ∈ N k for all u ∈ (t − , t + ). Then d(y u ) = d k (J y u ) = d k (ỹ(u)) on (t − , t + ). It follows that the curve is differentiable with derivatives given by Dd k (J y u )y u < 1. This implies that on (0, t φ ) the delayed argument function is differentiable with positive derivative, from which the assertion follows.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 5,511.8 | 2021-06-01T00:00:00.000 | [
"Mathematics"
] |
A Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Un- certainties on Material and Dimensional Properties
It is a well-known fact that, in a real engineering situation, fixtures are not ideally stiff, so numerical simulations using them are unlikely to present results that are consistent with the experimental ones. The present paper intends to describe a model updating methodology inserting translational and rotational springs in order to better represent the real clamping. For that purpose, the PSO stochastic optimization method will be used to determine the spring stiffness in an iterative way. In addition, uncertainties regarding the material properties, such as density and Young’s Modulus, as well as workpiece dimensions, will also be taken into account in the optimization algorithm. Once the experimental natural frequencies and the geometry of the studied parts are known, the algorithm automatically updates the model, approximating the natural frequencies obtained from the numerical model to the experimentally obtained ones as closely as possible. In addition, the modal shapes of the updated simulation will be compared to the experimental data and to a rigid boundary simulation. Results will demonstrate that the proposed methodology efficiently represents the fixturing flexibility: both natural frequencies and mode shapes found were close to the real dynamic system.
INTRODUCTION
The boundary conditions are an important issue regarding any numerical or experimental engineering problem.To solve a real problem, a model is created, simplifying it by making hypotheses and assumptions.However, some assumptions are occasionally weak or ultimately not true, such as the clamping of a workpiece in a fixture system, in which the stiffness is considered infinite.Some discrepancies between the model and the real situation are originated from structural geometry differences, material properties and inaccurate boundary conditions (Jaishi & Ren, 2007).Hence, the finite element model is not able to accurately predict the dynamic responses of structures and the model must be updated (Sehgal and Kumar 2016).
The model updating method is a strong tool to adjust the model to the empirical results.Basically, it changes some parameters of the numerical or analytical model to match the experimental data.In this regard, Kabe (1984) used the model updating technique to adjust a mass-spring system, changing the stiffness matrix directly to approximate the simulated natural frequencies to the experimental ones.Using a finite element model, Mottershead et al. (1996) applied the model updating method to properly model a welded joint on a plate, once the plate support had some flexibility.To achieve the flexibility of the practical experiment, the chosen updating parameter was the effective length of the plate.Ahmadian et al. (1998) performed a study on a frame structure where the joints were the major source of discrepancies between the model and the test results.The beam elements' stiffness was updated to adjust the model's natural frequencies.Zhang et al. ( 2014) also used the model updating technique to adjust the deflection of a truss.The stiffness of the joints was determined as the correction factors and the deflection of the updated model, applying stepwise uniform design, was compared to the experimental data.Jaishi & Ren (2007) applied a multiobjective optimization technique, using model updating to adjust eigenvalue and strain energy residuals.The first ten natural frequencies were adjusted using a finite beam element model to fit the experimental ones and, in addition, the natural modes obtained from the updated model were compared to the experimental data.
Those two model updating techniques can be classified as non-iterative and iterative.The first one directly updates the mass and stiffness matrices and, as a model updating method, the computationally obtained parameters tend to coincide with the real ones (Berman and Nagy, 1983), but they are usually not physically meaningful, once the structural connectivity and parameters are not maintained (Baruch and Bar Itzhack, 1978).The second one (iterative method) consists on solutions from several loops, and usually needs optimization formulations.Errors between numerical and experimental data are set as an objective function, and a pre-selected set of physical parameters are changed to minimize it (Friswell and Mottershead, 2013).The iterative model updating is the most common and robust method, since when the correct optimization parameters and methods are chosen the solution tends to maintain its physical foundation and achieve acceptable correlations.
A problem arises because of the non-linear relationship between the vibration data and the physical parameters, making the solution more complex (Bussetta et al., 2017).To overcome such problem, the application of the Particle Swarm Optimization (PSO) has been growing in recent years.The PSO is based on the artificial intelligence evolutionary algorithm proposed by (Eberchart and Kennedy, 1995), inspired by the observation of birds flocking and searching for food, which represents a form of social intelligence.This technique is easy to implement and suited for many different classes of problems, such as optimum structural design (Plevris and Papadrakakis, 2011), structural damage detection (Seyedpoor 2012), parameter identification (Galewski, 2016) and finite element updating (Moradi et al. 2010).Marwala (2007), applying PSO in FE model updating, observed that the updated natural frequencies and modal shapes were more accurate and faster than using simulated annealing and genetic algorithms.
The object of this study is a cantilever plate, clamped at a fixture system.An efficient fixturing system must be very rigid, hold the workpiece in place and accurately maintain a precise position (Boyle et al., 2011).
Since the real fixation system is far from being rigid, the workpiece can translate and rotate at the interface.Hence, rotational and translational springs can be applied to the boundary to approximate the real physical situation.The stiffness of two springs were chosen to be the updating parameters for the optimization model.Choosing the right updating parameters is one of the most important steps, once they have to maintain the model's physical significance.If other parameters were selected, such as the element's stiffness, the model's natural frequencies would probably converge to the experimental ones, but the model would not properly represent the physical phenomenon.
This paper focuses on the problem of a rigid clamping system hypothesis and how the model updating method may approximate the numerical model to the real system.The simulations were performed using plate elements.Three approaches are taken.The first one consists in updating the translational and rotational spring's stiffness.In the second updating procedure, the material properties (Young's modulus and density) are included as parameters to be optimized.Workpiece dimensions (free length, width and thickness) are added at the last simulation.For the optimization method, the Particle Swarm Optimization (PSO) is applied (Eberchart and Kennedy, 1995).The main reasons for this decision is that the method has an easy implementation and because of the non-linear relationship between the vibration data and the physical parameters (Jaishi and Ren, 2007).
The updated model's natural frequencies and mode shapes are compared to the experimental data and to a model using a rigid boundary condition.This work aims to show how the hypothesis of a zero-displacement boundary may not be a reasonable consideration, and how model updating can improve the numerical formulation.Also, this study investigates which design parameters are more suitable to describe the workpiece's dynamics.An impact hammer modal test is performed on a clamped plate, and the first four natural frequencies are obtained.In addition, the frequency responses are measured at a single point, while several points were impacted along the plate.Such procedure is performed in order to construct the modal shape of the experimental data.
To evaluate how close the experimental modal shapes are to the adjusted ones, the Modal Assurance Criterion (MAC), which basically correlates two modal shapes to a scalar number, is calculated.If both modes are strongly related, the MAC number is close to one.If both modes are non-related, it is close to zero (Allemang and Brown, 1982).
Originality of this Work
The originality of this work lies on developing a model updating methodology with which, using an optimization routine, the updating parameters necessary to adjust the numerical model of a cantilever plate are investigated.Three different approaches are proposed, the first focusing on the fixturing system, the second on adding the material properties uncertainty, and the last on including the two characteristics cited above and the dimensional Uncertainties on Material and Dimensional Properties Latin American Journal of Solids and Structures, 2018, 15(10 Thematic Section), e80 3/18 tolerances of the workpiece.The optimization routines focus on the eigenvalues problem, but the modal shape of the optimized models are evaluated too.
Outline
The present work is composed of 6 sections, the first being related to the performed experimental analysis, followed by the finite element model (section 3).In sequence, a model updating method is proposed, using three different approaches.The following section explains the optimization process used for this paper (section 5).After those formulations, the results are presented and discussed (section 6).Finally, some observations and final remarks are elucidated in the 'conclusions' section.
EXPERIMENTAL ANALYSIS
In practical engineering situations, experimental data is commonly used to validate numerical or analytical solutions of a problem.In this study, both experimental natural frequencies and natural mode shapes are measured to provide information to the model updating analysis.
The SAE 1045 steel plate is one of the most used materials in engineering and, hence, is used as a workpiece material.(Davim and Maranhao, 2009).The material properties and the plate dimensions are presented in Table 1.An 80-mm-long plate is clamped, in a way that 55 mm are in balance.The dimensions and material properties are discussed later, and they are used as design variables on some model updating approaches.So, the values presented in Table 1 will not be used in every situation covered in the present work.The fixturing system is composed of two square bars with five bolts, holding the plate in the vertical position, forming a cantilever plate.The bottom bar is directly fixed to the vise.To ensure the same clamping conditions and, therefore, repeatability, a torque wrench is used, applying 29 N.m to each bolt.The applied torque is high enough to prevent the workpiece from moving, but it is not too high, in order not to cause plastic deformation for the bolts.This clamping system is designed to hold the plate during a milling process.Figure 1 shows the experimental setup.To perform the modal analysis of the cantilever plate, an impact hammer modal testing is used, mainly due to its simplicity, and because the impact hammer is able to excite a wide spectrum of frequencies.Furthermore, when computing the eigenvalues, a large number of points must be measured, and the hammer offers the possibility of exciting several points quickly.
The experimental setup consists of the impact hammer, and a single axis accelerometer (oriented on the z-axis, normal to the plate's plane), as shown in Figure 1.The signal is acquired with a SCXI-1531 signal conditioning module, from National Instruments, connected to a personal computer.A Brüel & Kjaer 8206-003 impact hammer, with a sensitivity of 1.05 mV/N, is used.The accelerometer is a Brüel & Kjaer 4397, whose sensitivity is 0.9931 mV/(m/s2).The signals measured in the time domain are converted to the frequency domain, using a Fourier transformation.
Figure 2: Determination of experimental points
To measure the natural frequencies, only one experimental point would be enough, if such point is not coincident with any vibrational nodes.The procedure is basically hitting a determined point with the impact hammer and measuring the vibration on another point with the accelerometer.However, to acquire the eigenvectors' experimental values, several points must be impacted or measured.The signal measured at the different points of the cantilever plate forms a symmetric matrix.A very useful physical meaning can be applied here: as that matrix is the same as its transpose, hitting a point and measuring another will generate the same results as hitting the second one and acquiring the data on the first one.This is called the reciprocity principle of the Frequency Response Function (FRF).It can save a great amount of time, once fewer impact points are required (Ewins 1984).
There is still the need to determine the number and location of experimental points.The procedure applied here is based on a numerical simulation of a thin cantilever plate with the previously stated dimensions and material properties.A numerical modal analysis is performed using FEM.The modal displacement at the maximum free and Structures, 2018, 15(10 Thematic Section), e80 5/18 length (x-direction) for the tenth modal shape is shown in Figure 2(a).The same procedure is performed in the ydirection at two different x-positions.
After that, based on the nodes of the numerical mesh, the position and number of experimental points which are capable of representing that modal shape are chosen.The points used to perform the modal experimental analysis are plotted in red.A total of 55 points are then used and their coordinates are presented in Table 2.The test procedure consists in attaching the accelerometer to point 1 and impacting all 55 points.An exponential window is applied to the measured vibration signal.A frequency range from 1 Hz to 10 kHz, with a resolution of 1 Hz, is obtained from the FFT calculation.The first four natural frequencies are obtained from the frequency response functions, and the average of the 55 FRFs is calculated.The values of each mode measured on the 55 points did not present any variations greater than 1 Hz.
Experimental Eigenvectors Extraction
Once the experimental natural frequencies are known, the procedure to determine the modal shape can be performed.The eigenvectors provide important information about the dynamic behavior of the structure, and they can be combined to compose a matrix, in which each column is an eigenvector associated to an eigenvalue.
The eigenvectors are obtained for the first four natural frequencies.The experimental FRF for each of the 55 experimental points are represented in Table 3, where Ha,b is the response of the point a for the frequency b.The method used to identify the modal parameter is the peak picking using the real and imaginary parts of the plate's FRFs.This approach works well if the modes are not too closely spaced.The experimental eigenvectors are obtained from the imaginary part of the FRF at the corresponding natural frequencies for each of the 55 measured points.
3 FINITE ELEMENT ANALYSIS Two simple finite element models of a cantilever plate are built in the CAE (Computer Aided Engineering) module of the software ABAQUS 6.12.In the first one, the fixture system used on experimental analysis is considered to represent a perfect clamping system, which produces a null displacement on the nodes of the plate/clamping interface, according to the model in Figure 3.In the second model, translational springs in the z-direction (kUz) and rotational springs in the x-direction (kRx) were added at the 41 nodes of the plate/clamping interface.That is demonstrated in Figure 4.The springs' stiffness will be determined through the PSO method, which will be discussed later.
Figure 4: Boundary condition modelling using a rotational and a translational spring For simplicity, the plate element of 4 nodes and the reduced integration (S4R) are adopted.It is important to notice that the selected element type is proper for thin plates.Therefore, the present work could be easily applied to thinner plates.However, if thicker plates were studied, there would be the need to use another element approach, such as Timoshenko thick plate theory, for example (Timoshenko and Woinowsky-Krieger, 1959).
To determine the mesh size, a refining analysis is performed.A number of 440 elements is found, 11 elements being along the y-axis and 40 elements distributed on the x-axis, as it can be seen in Figure 3.With such mesh configuration, square elements with an aspect ratio of 1 and side dimensions of 5 mm could be obtained.For the convergence analysis, the first four natural frequencies were monitored.This mesh configuration resulted in a number of 492 nodes and 2952 degrees of freedom, and the mesh was applied to all models.One important part of the mesh definition is that every experimental point must coincide with a numerical node.
A modal analysis is performed in order to identify the natural frequencies and vibration modes, using the implicit module of ABAQUS 6.12.To protect the interest modes (the first four) from suffering variations caused by numerical error, the solution of the ten first natural frequencies and mode shapes (bigger than twice the interest frequency) was defined, ensuring a higher quality of the obtained response.
Modal Assurance Criterion
After performing the modal analysis, eigenvalues and eigenvectors are extracted from the experimental data.They are used to calculate the Modal Assurance Criterion (MAC), according to Equation (1), where the index a represents the experimental modes, which are compared to b, which are optimized numerical modes or rigid numerical modes.The index m is the m th mode.The Φ is the eigenvector associated with the m th mode, and is the complex conjugate transpose (Hermitian operator) of a vector.
MODEL UPDATING
The model updating is an important tool in numerical simulations, used to create or modify a model, which can better represent a real situation.The focus of this paper is to simulate a real system, in order to properly represent the fixturing system of the cantilever plate in terms of its natural frequencies and modal shapes.Three different approaches were proposed and carried out, as shown in Table 4.It also shows the design variables used in each case, related to the clamping system, material and dimensional properties.
Support Stiffness Updating Approach
As discussed above, the fixture system, illustrated in Figure 5(a), does not provide a perfect encastre condition for the cantilever plate.Hence, the plate can move and rotate around the clamping region.
A model updating is applied using translational springs in the z-direction with stiffness kUz and rotational springs in the x-direction with stiffness kRx.They are positioned on the nodes where the rigid clamping system is supposedly located, in the plate/clamping interface, shown in Figure 5 The contribution of the fixture stiffness is applied directly to the system's stiffness matrix, which can be summarized by where K is the local 24x24 stiffness matrix, having 4 nodes with 6 degrees of freedom each.The three first degrees of freedom of a node are about the translation in x,y and z.The three last are related to the rotation of the same respective axis.This example is about the node 1 of the local coordinated system.This node is in the boundary zone, so the support's stiffness is added to the diagonal of the stiffness matrix.The values are added to the main diagonal, once that is the position which relates a given degree of freedom for the force vector to the same degree of freedom for the displacement vector.The term about the translational stiffness in the z-direction is described as where k3,3 is the former term of the local stiffness matrix.The term related to the rotational stiffness in the xdirection is The complete procedure adopted to the model updating approach is presented in Figure 6.The stiffness values for the springs kUz and kRx are generated according to the PSO algorithm (this optimization method will be discussed later on section 5).After that, the finite element analysis is carried out using the software ABAQUS 6.12.The eigenvalues and eigenvectors obtained through the simulation are compared to the ones obtained experimentally.This procedure is repeated until the stop condition is reached, resulting in the best configuration of stiffness found.The PSO algorithm is implemented in Python language.The simulation is performed using an Intel i5-2450M processor at the speed of 2.5 GHz.This second approach is an extended version of the first one.It also focuses on finding a support stiffness which correctly represents the real fixturing system.In addition, it includes a contribution of the material properties.Both Young's modulus and density of the workpiece are added as design variables as well, along with the previous used kUz and kRx.This new approach is important, once the material properties are not the same every time, and they may fluctuate within certain limits, as a function of several factors, especially the manufacturing process which generated the workpiece.This approach is more time consuming than the first one, once it has four design variables, instead of two.It also needs more iterations to find the final design variables.However, it might improve the model, the ability to find closer natural frequencies and natural modes if compared to the experimental results.
Support Stiffness, Material Properties and Workpiece Dimensions Updating Approach
The third approach is the most comprehensive, and includes all the design variables presented on the second approach, adding three more.The new design variables are related to the workpiece's dimensional tolerances, taking variations on length, width and thickness into account.The workpiece's dimensions are governed by the manufacturing process and their limits are set by technical standards.The simulation sequence is the same as the one presented in Figure 6, but seven design variables are updated instead of the spring's stiffness only.
OPTIMIZATION
The particle swarm optimization method was originally developed by Eberchart and Kennedy (1995), consisting of an initial population, whose members locally interact with each other and are governed by global rules.The present papers uses the totally connected topology (gbest), which means a particle takes the whole population as its topological neighbors.
Initially, each particle has a random position and velocity.The particles interact with each other, informing the best position.With the data, the velocities and positions are adjusted for each particle.The velocity of a particle i for the next increment (k+1) is given as where N is the total number of particles, R1 and R2 are random values from 0 to 1, pbest is the best position for that particle, and gbest is the best global position.w, C1 and C2 are chosen parameters: the first one is an inertial component of the particle, and the last ones are terms of "reliability" between the particles in the group (Perez & Behdinan, 2006).The position at the next iteration can be determined as x is the position at the present iteration.Figure 7 shows the position and velocity of a particle in a step, and how the best global position and the best particle position will influence the next particle position.Ideally, a study about the best values of w, C1 and C2 must be performed, but there are standard suggested values that work properly in most situations (Eberchart and Kennedy, 1995).Values between 0.8 and 1.4 are proposed for w, and 2 is proposed for C1 and C2.This work used a value 0.8 for the inertial component and the recommended value for C1 and C2.The next steps consist on defining the optimization problem, developing the objective function and setting the parameters' limits.
Updating Parameters' Limits
The limits of the design variables are an important part of the optimization solution, and must be carefully defined.The present work uses up to 7 design variables, which can be divided into three groups: support stiffness, material properties and workpiece dimensions.Regarding the latter, the workpiece is a steel flat plate, and its thickness must comply with some tolerances, which are specified by a technical standard for a given manufacturing process (ASTM, 2017).
The two remaining dimensions, free length and width, have their limits as functions of their manufacturing processes as well.However, the cutting tolerance depends on the kind of process and is not defined by standard specifications.Instead, it is based on the manufacturer's capacity.In this work, those tolerances were defined by measuring the workpiece.From such measurements, the standard deviation (based on a normal distribution) was calculated, and a 95,45% confidence level was chosen, resulting in where μ is the mean, σ is the standard deviation, and X is a random variable which, from Equation ( 7), has a 95.45% chance of being inside the given interval.So, for the design's width and length parameters, limits are set to be the mean values from the measurements, with a deviation of +2σ the upper limits and -2σ for the lower limits.
The material properties elasticity modulus and density were chosen to be design variables from approaches 2 and 3 of the model updating modes.Those properties as commonly set as constant, but there is some variation, mainly due to the steel's composition and its manufacturing process.The limits for the elasticity modulus were set based on experimental data from cold formed steel plates (Bernard et al., 1992).Variations of up to 5% from the standard Young's Modulus were found (Mahendran, 1996), which can be considered small, but not negligible.The carbon steel's density limits were determined by an analogue approach.Values as low as 7650 kg/m 3 (Budynas & Nisbett, 2011) and as high as 7950 kg/m 3 (Taylor, 2005) were found in the literature, and were set as the lower and upper limits for those design variables.
Finally, the two last design variables must be set: the translational springs in the z-direction (kUz) and the rotational springs in the x-direction (kRx).Once they are unique for each support, they could not be set based on previous works.To find their limits, several optimizations were performed, using a wide range of stiffness for both springs.After some optimization using different stiffness limits, the best value found for each simulation was computed, and limits were set.They include all best stiffness configurations found previously.Table 5 shows the limits used in the PSO algorithm.
Optimization Problem
To determine the updating parameters, for example, the stiffness of the spring for the first approach, a Particle Swarm Optimization algorithm (PSO) was applied.As the objective function in the optimization process, the Root Mean Square Error (RMSE) of each iteration was calculated, according to Equation (8).The objective was to approximate the values of the four first natural frequencies obtained numerically with those obtained experimentally, resulting in where fni exp is the natural frequency obtained experimentally for mode i, (reference value) and fni num is the natural frequency for mode i obtained through numerical simulation.
The stiffness values of the translational kUz and the rotational kRx springs are adopted as design variables for the first approach.They are inserted at the 41 nodes of the plate/clamping interface, modeled according to Figure 5(b).For this analysis, all translational springs are considered to have the same stiffness.The same is applied to the rotational springs.The minimum number of particles for each approach follows the criterion of the ten times the number of design variables.Four more experiments are performed, increasing the particle number by ten for each simulation.Thereby, five different numbers of particles are used in each approach, as shown in Table 6.For each number of particles, three cases are simulated, to ensure repeatability.The optimization problem, representing the third approach, can be seen in Equation ( 9), .
The objective function can be seen on this equation, which must be minimized, as well as the limits to which the problem is subject (S.T.), consisting of the lower and upper limits of the design variables, 7 in this case.Those limits are specified in Table 5 for the three different cases.The present study used the standard PSO method (without any modifications).
Finally, to completely define the optimization problem, the stopping criteria must be defined.For the present work, three stopping criteria are set: the maximum number of iterations, the best position's minimum step size and the best objective value's minimum change.The first one is necessary to limit the simulation's running time.If the intended minimum error is not met, the optimization loop will stop when it reaches the chosen maximum number of iterations.The second one basically monitors the best found positions and, if after some point, the best found position of the particle in a certain step is too close to the previous steps, meaning that the best particle is almost static, the stopping criterion is met.The last one is very similar to the previous, but instead of looking at the particle's position, it monitors the objective function.If the objective of the best value is sufficiently close to the last step, then it also means that, probably, the local or global minimum is reached, and the stopping criteria is met.
RESULTS
The results obtained through the presented methodology were divided into four sections.The first one exposes only the experimental data, providing information for the next section, which will show the performed optimization and how the results were extracted.Finally, the model updating results will be compared to the experimental ones and to a simulation applying a rigid boundary condition.This last step is very important, once a rigid boundary condition is a pretty standard hypothesis, which is often applied without testing its validity.
Experimental Modes and Frequencies
The experimental results for the modal analysis consist of the natural frequencies and their respective modal forms.Table 7 presents the natural frequencies obtained through the experiments.It is important to emphasize that the experimental analysis is able to detect only the first four natural frequencies, because the impact hammer test is able to excite only the workpiece's frequencies below 4 kHz.Each measured experimental natural frequency has a modal shape associated with it (eigenvectors), as shown in Table 3.The workpiece's mode shapes are plotted in Figure 8.It can be noticed that, close to the encastre region (55 mm on y-coordinate), the displacement is supposed to be null on a perfectly clamped plate.However, the workpiece has significant displacements on all modes in this region, showing that the real fixturing system has a very low stiffness.Therefore, every numerical model involving a zero-displacement boundary condition would yield erroneous results.As described in Table 6, each of the three approaches is performed using 5 different numbers of particles.Three cases of each approach for a different number of particles were simulated and the best result was chosen.The minimum objective function error and the processing time are presented in Table 8.The best result in terms of minimal RMSE is obtained for the third approach (0.13), followed by the second (0.49) and the first (2.90).It is also important to observe that, for the first approach, increasing the number of particles does not reduce the error.For the second approach, the error decreases from 0.79 to 0.49 when the number of particles increases by 30.The same tendency is observed for the third approach, in which the error decreases from 0.18 to 0.13.However, the CPU processing time increases considerably with the number of particles.
The time increases at rate of 9.8%, 13.1% and 18.1% as the number of particles increases at a rate of 10 particles, for the first, second and third approach respectively.Taking the influence of the design variables and the number of particles on the error and time processing into account, it can be stated that the increase in design variables is more effective than the increase in the number of particles for all simulation results.
Figure 9 presents one convergence curve from the optimization process.The data are relative to one simulation of the third approach, using 100 particles.The iterations and the Root Mean Square Errors are shown.It can be seen that, as expected, after each iteration, the error tends to be reduced and, after a certain number of iterations, it tends to stabilize, which means that the optimization loop is close to a local solution and a stop criterion might be reached soon.It can also be noticed that the variation of the particle tends to be smaller after every loop, mainly because the particles are closer to a local or global minimum.It is important to emphasize that the PSO does not ensure a global minimum.The best results (minimum error on the objective function) found for each of the three model updating approaches are presented in Table 9.It also shows the updated parameters, material and dimensional properties for each case.The results are also compared to a numerical simulation using a rigid boundary, as previously described.Analyzing the Root Mean Square Error (RMSE), the results obtained from the rigid boundary simulation can be seen to be far from the real situation, with an error of 49.33%.The smallest error found is 0.13% for the third approach.However, all three model updating approaches are able to accurately model the experimental natural frequencies.It is important to notice that the optimization routine for the third approach found greater values for translational spring stiffness (almost 50 times greater than the 1st approach).However, a smaller thickness is obtained, which means that the third approach has a more rigid fixturing system and a more flexible plate.Figure 10 shows the natural frequencies found for each approach and compares them to the experimental results, which are the reference values.A 10% tolerance is set (dashed lines).
It can be noticed that, for the rigid boundary approach, the natural frequencies are off limits, while for the other approaches the frequencies are very close to the experimental results.
Updated Model Modal Shapes
Even though the optimization routine only took the comparison between the experimental natural frequencies and the numerical ones into account, the modal shapes of the models are an important part of the problem.A good model must be able to represent both natural frequencies and modal shapes properly.Figure 11 shows the Modal Assurance Criterion, which compares the experimental modal form (the reference value) to the numerical ones.It can be seem that, if a rigid boundary is applied to model the real problem, the modal shapes will be poorly represented, far from the real condition.The best results on MAC are found using the first model updating approach, which uses two springs as design parameters, as shown in Table 4.This is mainly due to the low translational stiffness spring, which better representes the real fixture system's flexibility.The third approach represents the modal form better than the simulation using a rigid boundary, but worse than the second approach.
Another important piece of information can be found looking at the nodal displacements close to the clamping system, and comparing this data to the experimental one.Figure 12 shows the modal displacement at the nodes 5 mm away from the clamping system, associated, for example, to the 4th natural frequency.Ideally, the displacement on the clamping system should have been measured, which is experimentally impossible due to the impact hammer's size.So, the experimental displacements at the closest possible position are measured.The results are very similar to those found in the MAC analysis, in which, again, the 1st approach is the closest to the experimental data,
Figure 7 :
Figure 7: Particle present and future position and velocities
Figure 9 :
Figure 9: Convergence of the PSO
Figure 10 :
Figure 10: Frequency comparison between experimental, rigid and optimized
Figure 11 :
Figure 11: MAC number of numerical modes compared to experimental ones
Table 1 :
Plate dimensions and material properties
A
Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Uncertainties on Material and Dimensional Properties Latin American Journal of Solids
Table 2 :
Experimental points position
Table 3 :
Experimental response for the 55 points on frequency domain Uncertainties on Material and Dimensional Properties Latin American Journal of Solids and Structures, 2018, 15(10 Thematic Section), e80 6/18 A Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Uncertainties on Material and Dimensional Properties Latin American Journal of Solids and Structures, 2018, 15(10 Thematic Section), e80 7/18
Table 4 :
Design variables for different model updating approaches
A
Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Uncertainties on Material and Dimensional Properties
A
Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Uncertainties on Material and Dimensional Properties
Table 6 :
Number of particles for each approach
A
Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Uncertainties on Material and Dimensional Properties Latin American Journal of Solids and Structures, 2018, 15(10 Thematic Section), e80 12/18
Table 7 :
Experimental natural frequencies
A
Model Updating Method for Plate Elements Using Particle Swarm Optimization (PSO), Modeling the Boundary Flexibility, Including Uncertainties on Material and Dimensional Properties Latin American Journal of Solids and Structures, 2018, 15(10 Thematic Section), e80 13/18 6.2 Model Updating Analysis and Convergence of the PSO Simulations
Table 8 :
Minimum error for each approach and number of particles
Table 9 :
Updated parameters and natural frequencies | 8,302 | 2018-10-22T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Flow Field Simulation and Noise Control of a Twin-Screw Engine-Driven
With the advantages of good low-speed torque capability and excellent instant response performance, twin-screw superchargers have great potential in the automobile market, but the noise of these superchargers is the main factor that discourages their use. Therefore, it is important to study their noise mechanism and methods of reducing it. This study included a transient numerical simulation of a twin-screw supercharger flow field with computational fluid dynamics software and an analysis of the pressure field of the running rotor. The results showed that overcompression was significant in the compression end stage of the supercharger, resulting in a surge in airflow to a supersonic speed and the production of shock waves that resulted in loud noise. On the basis of these findings, optimization of the supercharger is proposed, including expansion of the supercharger exhaust orifice and creation of a slot along the direction of the rotor spiral normal line at the exhaust port, so as to reduce the compression end pressure, improve the exhaust flow channel, andweaken the source of the noise. Experimental results showed that the noise level value of the improved twin-screw supercharger was significantly lower at the same speed than the original model, with an average decrease of about 5 dB (A).
Introduction
In the decades before the 21st century, turbo supercharging technology played a large role in the market.However, due to the slow speeds of vehicles in modern cities, long periods spent idling, and frequently alternating engine conditions, the requirements for engines have changed, and their lowspeed torque characteristics and transient response capability have become common concerns.Because of the simultaneous rapid development of design and manufacturing standards, increased requirements of users for engine power and efficiency, and international emission regulations, engine-driven supercharging technology has again captured the automotive industry's attention.
The screw supercharger is a typical application of the screw compressor, in which the difference between the exhaust ports (gas pressure within the element volume within the compression pressure) and the external pressure (exhaust pipes of gas pressure) causes the element volume and the exhaust hole to communicate, with instantaneous, constant volume gas expansion or compression.Communication of this periodic discharge pressure pulsation in the flow loss increases, which leads to additional energy loss.The gas pulsation in the exhaust pipes always achieves the noise of compressor and the power consumed by the compressor greatly increasing.Therefore, it is important to study the flow characteristics of the exhaust process of screw compressors.Koai et al. studied the twin-screw compressor airflow ripple according to the geometric parameters of screw compressor effect analysis of the periodic excitation source and different load of pressure fluctuation; at the same time, they performed four-end parameter analysis of exhaust gas pulsation of the piping system [1,2].Wu et al. performed a theoretical and experimental study of the screw compressor exhaust airflow ripple considering the effects of condition and speed on discharge pressure pulsation [3].Huang et al. studied gas pulsation and proposed the pulsation trap method to reduce compressor airflow ripple and noise [4].Mujic et al. proposed that the discharge port was an important parameter influencing the gas pulsations and the gas pulsations amplitude can be reduced by optimization of the port shape [5].Mahendra and Olsen analyzed the unsteady velocity field at the outlet of three-lobe supercharger using particle image velocimetry (PIV) and showed that the flow exits the supercharger as a high-speed jet that varies not only in the parallel plane but also in the perpendicular plane, generating a complex three-dimensional flow [6].However, there is little research or information on the gas pulsation and noise of screw superchargers.With the development of computer technology, the application of computational fluid dynamics (CFD) technology provides a powerful tool to analyze the flow field of the screw compressor and to obtain the function of the excitation source [7][8][9][10][11].Kovacevic et al. described some aspects of an advanced grid generation method used, with CFD procedures, to model three-dimensional flow through screw compressors [12], and how CFD is merged with other design software by means of an integral management system to obtain interactive control of the entire design process of screw compressors [13].Huang and Liu visually analyzed the evolutionary characteristics of flow for the positive displacement blower by performing numerical simulation of unsteady compressible flow in a three-lobe positive discharge blower using computational fluid dynamics and showed that the differences of result between the simulation s and the semiempirical method were due to pressure fluctuations and the existence of a vortex in the mixing zone [14].
Compared with traditional turbochargers, twin-screw engine-driven superchargers have great market potential because of their better low-speed torque characteristics, good transient response capability, and lower exhaust pressure [15].However, the noise of these superchargers is the main factor that discourages their use.Therefore, it is important to study their noise mechanism and methods of reducing it.The noise problem of twin-screw superchargers has affected their use and promotion, so research into twin-screw superchargers' noise mechanism and their optimization has important applications.
In this paper, the mathematical model and calculation method of the screw supercharger are established, and the flow characteristics of the exhaust process of the screw supercharger are simulated.The optimization scheme of the supercharger is presented, and the theoretical basis is provided for the optimization design of the structure and the vibration and noise reduction.
Numerical Simulation and Optimization
The fluid flow in a screw supercharger must follow the law of physical conservation, which basically includes the law of mass conservation, the law of momentum conservation, and the law of energy conservation.The governing equations describe these conservation laws in mathematics.
(1) Mass Conservation Equation .Consider (2) Momentum Conservation Equation .Consider (3) Energy Conservation Equation .Consider According to the model's characteristics, the large eddy simulation approach is applied in this paper.The basic assumptions of the large eddy simulation method are as follows [16]: (1) Various types of scalar in the flow field are transported by large eddy.
(2) The characteristics of the flow field shown by the large eddy model and the characteristics of a large eddy are determined by the geometry and boundary conditions of the flow field.
(3) The small eddy is less affected by the geometry and boundary conditions and is regarded as isotropic.
On this basis, this paper includes a transient numerical simulation of a twin-screw supercharger.The basic structure is as shown in Figure 1.
Figure 2 shows the meshing of the shell chamber.Figures 3 and 4 the meshing of the female and male rotors, respectively.
Before optimization, the pressure field in original model was analyzed, the pressure distribution of the central cross section of the female and male rotors was specifically examined, and the flow fields were analyzed at three different moments.The position of the central section is shown in Figure 5, and the three moments correspond to the three positions of the rotor during the compression process, at the compression end stage, and after the exhaust stage.
The central cross section pressure distributions of the twin-screw engine-driven supercharger's different rotor positions are shown in Figures 6(a Figure 6 shows that the pressure in the compression chamber increased gradually during the rotating compression process of the rotor.It reached its maximum at the compression end stage, at about 0.16 MPa or more, after the rotor entered the exhaust position, and the chamber pressure quickly dropped to a value in line with the back pressure.Because the maximum pressure of 0.16 MPa or more was much higher than the back pressure of 0.1349 MPa, it was in an overcompressed state, which not only produced severe noise but also wasted the energy required by the compression process; therefore, it is necessary to take measures to reduce the compression end pressure.Based on the analysis of the original model flow field, a program has been proposed to lower the compression end pressure, that is, to expand the exhaust orifice and create a slot along the direction of the rotor spiral normal line on the exhaust orifice. To verify the new scheme, the pressure distributions in the central section at five different moments were selected for analysis of the changes in flow field pressure in the compression chamber.The five moments were (1) the moment before the rotor makes contact with the slot, (2) the moment at which the rotor begins to contact the slot, (3) the moment during which the rotor contacts the slot, (4) the end of the stage at which the rotor contacts the slot (i.e., when the rotor reaches the exhaust orifice), and (5) the stage after exhaust, as shown in Figure 7.
It can be seen from Figure 7 that before the rotor makes contact with the slot, the compression chamber pressure is about 0.13 MPa, which is slightly lower than the back pressure of 0.1394 MPa.At this time, the twin-screw engine-driven supercharger is in an undercompressed state, and when the rotor makes full contact with the slot, the compression chamber pressure soon reaches the back pressure and does not significantly change until the rotor reaches the exhaust position.
To further analyze the effect of pressure pulsation on the noise value in the simulation process, two control points were set up in the flow field at the compression end positions of the male and female rotors near the exhaust orifice or the slot.The corresponding pressure fluctuation is shown in Figure 8.
Before optimization, the compression chamber is overcompressed, with a pulsation peak of about 0.162 MPa, and, after optimization, the compression chamber is undercompressed, with a pulsation peak of about 0.118 MPa.According to the SPL calculation formula, where 0 is the reference sound pressure, defined as 2 × 10 −5 Pa, is the sound pressure level value, and is taken as the difference between the pulse peak value and the back pressure of 0.135 MPa; the calculation results are listed in Table 1.
It can be seen from the calculation results that, after lowering the compression end pressure, the compression state changes from an overcompressed state to an undercompressed state and the compression end pressure is closer to the back pressure (differing by about 0.01 MPa); therefore, the noise value of the new model is about 4 dB lower than that of the old model.
Experimental Program and Analysis of Results
Experiments of noise signal acquisition were carried out before and after improvement of the twin-screw supercharger.The experimental setup is built in the engine laboratory.The engine drives the twin-screw machinery through the belt pulley.The CoCo-80 signal analyzer probe holder was placed close to the engine, and the signal acquisition time was set to 8 seconds.The experimental apparatus is shown in Figure 9.During the experiment, as the speed ratio between the engine and the supercharger was 1 : 3, when the engine speed changed from 1200 r/min to 4000 r/min, the appropriate supercharger speed range was 3600 to 12000 r/min.With the engine speed changed, the noise signal was recorded when the speed was stable.The same signal acquisition was conducted of the superchargers before and after the improvement, respectively, and a spectral analysis of the data was performed.The two spectrograms at the same rotating speed have been compared to determine the noise reduction.
To compare and analyze the noise situations of the two superchargers before and after improvement, the spectrograms at five different speeds were selected for comparison: 3600 r/min, 6600 r/min, 8400 r/min, 10,200 r/min, and 12,000 r/min.From the corresponding spectral data, the changes in the noise of the original model and the improved model with changes in speed are shown in Table 2.
It can be seen from the data in Table 2 that the main noise frequencies of the twin-screw engine-driven superchargers before and after the improvement were around 1000 Hz.With an increase in speed, both the basic frequency sound pressure level and the A-weighting total sound pressure value gradually increased, and those of the improved model averaged about 5 dB lower than those of the original model at the same speed, thereby proving that a reduction in the exhaust pressure can effectively reduce noise of twin-screw engine-driven superchargers.
Conclusions
Transient numerical simulations of the flow fields of twinscrew engine-driven superchargers were conducted with CFD software, and the pressure field was analyzed during rotor operation.On this basis, an optimization program for the supercharger was proposed: to expand the supercharger's exhaust orifice and create a slot on the exhaust orifice along the direction of helical normal line of the rotor to reduce the compression end pressure and improve the exhaust flow channel, thereby reducing the possibility of generating a shock wave during the exhaust process and reducing the source of noise.The following conclusions were drawn: (1) Exhaust noise is a major noise source of twin-screw engine-driven superchargers.Overcompression can easily produce shock waves during the exhaust process and generate intense noise; therefore, the pressure difference at the exhaust moment should be minimized.
(2) After the improvement, the noise produced by the twin-screw supercharger was significantly reduced, and the exhaust process of undercompression verified the correctness of the theory of gas pulsation.In addition, it also proved that the noise produced by the expansion wave is less than that of the compression wave, providing a reference for noise study.
(3) The experimental results show that the noise level can be effectively reduced by an average of about 5 dB (A) by enlarging the exhaust orifice and improving the exhaust gas channel; however, because of significant changes in the supercharger working conditions, it is necessary to propose more ways of eliminating the generation of shock waves to reduce the noise over a wide range (e.g., the pulsation trap proposed by Huang [17,18]).
Figure 1 :
Figure 1: Overall view and internal view of a twin-screw supercharger.
Figure 2 :
Figure 2: Mesh of a twin-screw engine-driven supercharger shell chamber.
Figure 6 :
Figure 6: (a) Rotor during the compression process.(b) Rotor at the compression end stage position.(c) Rotor position after the exhaust stage.
Figure 7 :
Figure 7: (a) Rotor before contact with the slot.(b) Rotor at the beginning of contact with the slot.(c) Rotor during contact with the slot.(d) Rotor at the end of contact with the slot.(e) Rotor after the exhaust stage.
Figure 8 :
Figure 8: Pressure fluctuation at the monitoring point.
Table 1 :
Calculation of the pressure pulsation peak SPL value.
Table 2 :
Spectrogram data summary sheet.Signal basic frequency (Hz) Basic frequency sound pressure level (dB) A-weighting total sound pressure level (dB) | 3,350.2 | 2016-03-24T00:00:00.000 | [
"Engineering",
"Physics"
] |
Investigation of a non-invasive method of assessing the equine circadian clock using hair follicle cells
Background A comprehensive understanding of the equine circadian clock involves the evaluation of circadian clock gene expression. A non-invasive and effective method for detecting equine clock gene expression has yet to be established. Currently, research surrounding this area has relied on collecting tissue biopsies or blood samples that can often be costly, time consuming and uncomfortable for the animal. Methods Five mares were individually stabled under a light–dark (LD) cycle that mimicked the external environmental photoperiod during a time of year corresponding with the vernal equinox. Hair follicles were collected every 4 h over a 24-h period by plucking hairs from the mane. RNA was extracted and quantitative (q) PCR assays were performed to determine temporal expression patterns for the core clock genes; ARNTL, CRY1, PER1, PER2, NR1D2 and the clock controlled gene, DBP. Results Repeated measures ANOVA for the clock gene transcripts PER1 and PER2 and the clock controlled gene, DBP, revealed significant variation in expression over time (p < .05, respectively). Cosinor analysis confirmed a significant 24-h temporal component for PER1 (p = .002) and DBP (p = .0033) and also detected rhythmicity for NR1D2 (p = .0331). Conclusions We show that the extraction of RNA from equine hair follicle cells can identify the circadian 24 h oscillations of specific clock genes and a clock-controlled gene and therefore provide a valuable non-invasive method for evaluating the equine peripheral circadian clock. This method will serve as a useful tool for future evaluations of equine circadian rhythms and their response to environmental changes.
Background
The circadian system supplies organisms with a means to adapt their internal physiology to the continuously changing environmental stimuli that exist on a rotating planet [1]. The central pacemaker is located in the suprachiasmatic nucleus (SCN) of the hypothalamus and coordinates, via neural and humoral signals, multiple peripheral clocks situated in tissues throughout the animal [2]. These peripheral clocks consist of a group of highly conserved 'clock' genes and their protein products functioning within tightly controlled autoregulatory transcription-translation feedback loops [3] that ultimately give rise to 24-h alterations in gene expression and behavioural outputs.
The positive axis of the loop is created by transcription factors CLOCK and ARNTL as they undergo transcription and translation [3]. CLOCK and ARNTL proteins heterodimerize and bind to E-box enhancers upstream of PER and CRY genes in order to trigger transcription [5]. Following this, a complex is formed by PER and CRY proteins that relocates to the nucleus to inhibit CLOCK/ARNTL activity. This leads to the repression of their own transcription completing the negative axis of the feedback loop [2]. RORA, (RAR-related orphan receptor A) NR1D1 (nuclear receptor subfamily 1, group D, member 1), and NR1D2 (nuclear receptor subfamily 1, group D, member 2) are orphan nuclear receptors that make up a secondary feedback loop. RORA instigates ARNTL transcription whilst NR1D1 and NR1D2 repress its expression [6]. Each cycle of the molecular clock within a tissue gives rise to the simultaneous upregulation of a subset of clock-controlled genes [7] activated by the transcriptional activity of the ARNTL/CLOCK heterodimer.
Markers of circadian phase in humans include melatonin [8] and body temperature [9]; however melatonin has been established as not circadian in the horse [10]. Previous studies in humans have used white blood cells or oral mucosa as a method of detecting human clock gene expression [11,12]. These methods have several reported drawbacks [11,13]. Physical stimuli and time delays due to the processing of cell separation may affect levels of expression of clock genes and the overall quality of the isolated mRNA. For instance with white blood cells, the issue relates to time delays due to the processing of cells prior to transcriptional inactivation which in turn may affect the levels of mRNA of clock genes. A similar concern can be seen with the collection of oral mucosa cells. RNA samples were shown to be severely fragmented and thus the results were discarded [12]. There are additional impracticalities in collecting oral mucosal cells from horses. These could be avoided through the collection of hair follicles as an alternative sampling method [13]. The main advantages of using hair follicle cells are that they can be obtained noninvasively and cells can be collected and the RNA stabilised simply by plucking hairs and adding them directly to an RNA stabilisation buffer.
A non-invasive and effective method for detecting peripheral equine clock gene expression has yet to be established. This hinders progress in areas of equine circadian research. Currently, research investigating equine peripheral clocks has relied on collecting muscle biopsies or blood samples. These methods can often be time consuming, costly and can occasionally lead to animal welfare concerns as this approach to collecting samples can be invasive.
In this report we examine a convenient and non invasive method for detecting equine clock gene expression through the use of hair follicle cells collected from the mane of the horse.
Sample collection
Five healthy, non-pregnant mares (Equus caballus) of various lightweight breeds were individually housed in standard 12 ft x 12 ft stalls for 24-h under a lightdark (LD) cycle that mimicked the environmental photoperiod for that time of year. Mares were chosen for their availability on UCD's Lyons Research Farm. Stallions were not available for the study and castrated males (geldings) are considered unsuitable as their neuroendocrine system is compromised. The experiment was conducted in March where the times of dawn and dusk were 06:00 and 18:00 respectively, corresponding to a 12 h Light : 12 h Dark LD cycle at longitude 138 W6.8, latitude N53.2 (County Kildare, Ireland). While stabled, horses had access to hay and water ad libitum. We collected mane hair samples at 4-h intervals from 16:00 [Zeitgeber Time (ZT) 9, where time of lights on defines ZT 0] for a period of 24 h.
Each hair sample consisted of 10-20 hair follicles that were trimmed to remove excess hair and carefully placed in a 2 mL screw-cap tube containing 400 ul of binding buffer from the High Pure RNA Isolation Kit (Roche, Indianapolis, Indiana).
Quantitative polymerase chain reaction (qPCR)
Total RNA was isolated using the High Pure RNA Isolation Kit (Roche, Indianapolis, INdiana) according to the manufacturer's instructions with minor modification: The hair follicles were placed in a 2.0 mL screw cap tube, containing 400 uL of binding buffer from the High Pure RNA Isolation Kit (Roche) and 200 uL of PBS. A single 5 mm stainless steel bead (Qiagen) was added to each tube and the samples were homogenised at maximum speed (30 Hertz) for 2 min using the Qiagen TissueLyser system. Following homogenisation the samples were spun for 1 min at maximum speed to reduce foaming, the homogenate was then applied to the filter column and RNA was extracted as per instructions for the High Pure Isolation Kit. RNA was eluted in 50 uL and stored at -80 deg C.
RNA quantity was measured using the NanoDrop ND1000 spectrophotometer V 3.5.2 (NanoDrop Technologies, Wilmington, DE). RNA quality was assessed using the Agilent Bioanalyser RNA Chip (Santa Clara, California). All samples were shown to have a RIN value in excess of 7.5. The RNA was converted to complementary (c) DNA and a cDNA pool containing 3.5 ul from each sample was prepared and used to generate a 7 point, 1 in 4 serial dilution. This serial dilution was used to test the efficiency of each primer pair used in the study. The remaining cDNA was diluted to 2.0 ng/ul of RNA equivalents and stored at −20°C. A number of minus reverse transcription (RT) controls were included during the cDNA preparation.
Quantitative PCR assays were performed using Biosystems 7500 Sequence Detection System and the Sensi Mix SYBR Kit (Bioline, Taunton, Massachusetts). A panel of eight putative reference genes was assessed for stability using the GeNorm algorithm with the qBase Pair Softwear package. Results showed that ARTB and HPRT were suitable reference genes and the optimal normalization factor was calculated as the geometric mean of these two reference targets. Each PCR reaction was prepared in duplicate and in a volume of 20ul [10 ul master mix, 5.0 ul cDNA, 1.2 ul forward (300 uM), 1.2 ul reverse (300 uM) and 2.6 ul water]. A panel of five core clock genes was selected; PER1 (period homolog 1), PER2 (period homolog 2), ARNTL (aryl hydrocarbon receptor nuclear translocator-like), CRY1 [cryptochrome 1 (photolyase-like)], NR1D2 (nuclear receptor subfamily 1, group D, member 2) and the clock controlled gene (CCG) DBP (D-site of albumin promoter binding protein). Candidate genes were selected based on prior evidence of their cyclic expression in human hair follicles [13] and where equine primer sequences were previously published [14]. Primers sequences were commercially synthesised by Eurofins MWG Operon (Ebersberg, Germany).
Thermal cycling consisted of one cycle of 50°C for 2 min and 95°C for 10 min, followed by 40 cycles at 95°C for 15 seconds and 60°C for 1 minute. Melt curves were examined to confirm specificity of each PCR product. Primer efficiencies were shown to be between 90% and 110%. Transcript abundance was determined relative to ACTB and HPRT1 using the Q Base Plus Software package (Biogazelle, Belgium). CNQR results were analysed using SPSS.
Data analysis
One-way repeated measures ANOVA (GraphPad Prism Version 5.0 for Mac, GraphPad software, San Diego, California, USA, http://www.graphpad.com) was used to determine whether the temporal pattern of expression for each transcript varied significantly over the 24-h period. The presence of diurnal (24-h) temporal variation for transcript means was evaluated using a Cosinor programme [15] based on the least squares cosine fit method [16]. In all cases significance was assessed as p < .05. Data are presented as means ± SEM.
Results
The expression patterns of five core clock genes PER1, PER2, ARNTL, CRY1 and NR1D2 and the clock controlled gene, DBP (D-site of albumin promoter binding protein) were investigated in this study. We detected mRNA expression of all six genes in equine hair follicles. ANOVA revealed significant 24-h variation for three of the genes PER1, PER2 and DBP (p = .024, p = .02 and p = .036, respectively; Figure 1) but not for the remaining three genes ARNTL, CRY1 and NR1D2 (p = .58, p = .32, p = .37, respectively; Figure 1).
Cosinor analysis confirmed a significant 24-h cyclic component for PER1 (p = .002), DBP (p = .0033) and NR1D2 (p = .0331) while the 24-h cosine fit for PER2 (p = .0643) was just shy of significance. As the inverse oscillatory relationship between PER2 and ARNTL expression is considered a characteristic component of the molecular clockwork mechanism, we further examined whether any individual expression profiles of ARNTL exhibited a circadian waveform. No individual 24-h expression profiles for ARNTL exhibited significance via cosinor analysis (p > .05) Results of cosinor analysis including estimated acrophases are presented in Table 1.
Discussion
In this study, we demonstrate that RNA extraction from equine hair follicle cells yields high quality RNA and is suitable for detection of 24 h oscillations of core components of the equine molecular clock. To demonstrate that oscillating clock gene expression in hair follicle cells can be used as markers to assess an equine peripheral clock, we examined gene expression in hair follicle cells collected from the manes of five mares over a 24 h period. This is the first study to investigate a noninvasive method of assessing an equine peripheral tissue clock.
We show that mRNA levels of PER1, PER2 and DBP vary significantly over time as determined by ANOVA. However, of these three, only PER1 and DBP are shown to exhibit a 24-h sinusoidal profile as determined by cosinor analysis. Furthermore, although non-significant by ANOVA, the expression profile of NR1D2 is positive for 24-h rhythmicity by cosinor analysis. These apparent discrepencies can be explained by understanding the nature of each analysis. Cosinor analysis is more sensitive than ANOVA at picking up sinusoidal patterns in the data associated with a specified period as that is precisely what it is designed to do. The ANOVA can take into account the repeated measures design in partitioning out the variance but has no means of accounting for the temporal relationship (angular proximity) of the time points and no model for temporal variation against which the observed variation is compared. In the case of PER2, the highly variable means at specific time points were detected by ANOVA but the angular proximity of the time points were only weakly sinusoidal. Conversely, while the mean expression of NR1D2 at specific time points varied to a lesser extent, the angular proximity of the data points more closely reflected a sinusoidal pattern. Of the six gene transcripts examined, we find that PER1 and DBP may serve as the most reliable markers for this peripheral clock in the horse with limited sampling frequency.
Furthermore, we report that as few as 10 equine hairs are sufficient for high quality RNA yield. A significant advantage of this technique is that it can be carried out by an untrained person and eliminates the need for tissue biopsies.
A comparison of our results with a previous study evaluating clock gene expression from human hair plucked from the scalp and chin [13] revealed that a similar temporal pattern of expression of PER2, NR1D2 and DBP exists between species when maintained under LD12:12. Although the LD cycle under which the human subjects were maintained is not reported, the three oscillating genes in common to the two studies, PER1, DBP and NR1D2, exhibit peaks in the morning between 07:00 and 09:30, suggesting a similar phase relationship between these genes in the two species. Importantly, in a previous investigation of clock gene expression profiles in equine gluteal muscle conducted by our lab [14], peak values of PER2, NR1D2 and DBP were observed at 07:00. Although further studies are required to accurately identify the phase relationship between the peripheral clocks in muscle tissue and hair follicles, these findings suggest that hair follicle expression profiles may provide a valuable marker with a similar phase relationship to the envitronmental LD cycle as a more performance relevant equine tissue, gluteal muscle.
It was surprising to fail to find rhythmic expression of the core clock genes, ARNTL and CRY1, in hair follicle cells. However, it has become clear from studies in peripheral tissues that the specific contributions and interactions between specific clock components vary in a tissue-specific manner [4]. Moreover, in the recent human study of hair follicle clock gene expression, only "slight oscillations" of ARNTL were detected while CRY1 was not examined [13]. It cannot be excluded that the cycling amplitude of ARNTL and CRY1 in follicle cells may be inherently small and undetectable in the current assays, or that post-translational processing plays a more pivotal role in this tissue clock.
Circadian rhythms regulate hundreds of functions relevant to the athletic horse including body temperature and hormone production [10,17], immune function [18] and muscle metabolism [14]. Disruption of the circadian system could have a profound influence on equine health. In humans, the disruption of circadian rhythms has been linked to jetlag [19], insomnia [20], stomach ailments [21] and depression [22]. With more research into the characterisation of the circadian clock, it may be possible to pin point the exact consequences of disruption to the circadian cycle on the equine system.
The need for a non-invasive marker of circadian phase is particularly relevant for furthering equine research in relation to transmeridian transportation of horses for international competition. It is well accepted that the circadian clock regulates activity and also muscle metabolism in mammals [23,24]. This is supported by evidence that nocturnally active rats experience peak expression of PER2 in skeletal muscle at the onset of dark [23,24] whereas diurnally active horses have an opposing peak of PER2 expression at dawn [14]. The capacity of the molecular clock within the SCN to reset following an abrupt 6-h LD shift, as occurs during transmerdian travel across six time zones, was shown to take up to eigth days in mice [25]. The gradual re-entrainment of clock genes within specific areas of the SCN was correlated with, and mirrored, a disruption in circadian behavioral output, as measured by locomotor activity rhythms in a further study [26].
In addition to desynchronized clock gene expression within the master pacemaker in the SCN, rhythmic gene expression in peripheral tissues, which rely on SCN signals for synchrony, are also significantly disrupted. This was clearly demonstrated in rats when it was reported that clock gene cyclicity in skeletal muscle, liver and lung shifted more slowly than the SCN following both LD cycle advances and delays [27]. The authors concluded that this likely further explained the physical malaise in humans associated with rapid transmeridian travel.
The capacity to evaluate markers of circadian phase, as determined by clock gene expression patterns, to determine the extent and duration of circadian misalignment in response to abrupt changes in the 24-h LD cycle, as occurs during transmeridian travel, represents a valuable experimental tool. Our results suggest that equine hair follicles may be used in future studies as a reliable and non-invasive method to detect the time duration required for a peripheral equine tissue to adjust to a new time zone. If it is found that the phase of the equine hair follicle clock closely mirrors that in skeletal muscle, which our results suggest, and exhibit similar re-entrainment rates following simulated jet lag, then there clearly exisits the opportunity to quantify the duration of potential performance deficits and to develop a molecular test for time zone resynchronization in these valuable global athletes. This in turn could help horse trainers determine how much time in advance to transport their racehorse to a temporary location before an important race.
Conclusions
We demonstrate that RNA extraction from equine hair follicle cells is a suitable method of evaluating certain core clock genes and a clock-controlled gene to assess an equine peripheral clock. In particular, our findings support the evaluation of the gene transcripts PER1 and DBP in hair follicle cells as suitable markers for evaluating the phase of a peripheral clock in the horse. | 4,218.2 | 2012-10-05T00:00:00.000 | [
"Biology"
] |
Using Convolutional Neural Networks for the Assessment Research of Mental Health
Existing mental health assessment methods mainly rely on experts' experience, which has subjective bias, so convolutional neural networks are applied to mental health assessment to achieve the fusion of face, voice, and gait. Among them, the OpenPose algorithm is used to extract facial and posture features; openSMILE is used to extract voice features; and attention mechanism is introduced to reasonably allocate the weight values of different modal features. As can be seen, the effective identification and evaluation of 10 indicators such as mental health somatization, depression, and anxiety are realized. Simulation results show that the proposed method can accurately assess mental health. Here, the overall recognition accuracy can reach 77.20%, and the F1 value can reach 0.77. Compared with the recognition methods based on face single-mode fusion, face + voice dual-mode fusion, and face + voice + gait multimodal fusion, the recognition accuracy and F1 value of proposed method are improved to varying degrees, and the recognition effect is better, which has certain practical application value.
Introduction
With the development of economy and the acceleration of the pace of life, people's life pressure is becoming bigger and bigger, and mental health problems have become the focus of global attention. At present, the methods of mental health assessment are mainly based on experts' assessment or selfassessment, which is assessed from the perspective of the patient and the practitioner. Briggs Hannah et al. explored the thoughts, feelings, and educational requirements of nursing staff and nurses at the clinical help desk of emergency medical services, and the focus was on the classification tools used for calls and uses related to mental health. Here, quantitative data are analyzed by descriptive statistics, and qualitative data are analyzed by subject analysis. us, mental health assessment and triage of patients and their families are realized [1]. Scelzo Anna evaluated mental health in the form of questionnaires and believed that a good mental health assessment is conducive to promoting healthy aging [2]. Michael R. Hass et al. proposed a concept of case conceptualization and realized the assessment of students' mental health by determining students' psychological needs and writing goals. is method is better than the traditional evaluation process [3]. Fortuna Lisa R. adopted the 2.2 pros and cons method to introduce trauma narrative in the process of sheltered mental health assessment, so as to improve the accuracy of mental health assessment [4]. Scott A. Bresler, Ph. D., reviewed the mental health assessment by forensic in the digital era and believed that the rational use of Internet data is conducive to accurately assess mental health [5]. Higuchi Masakazu et al. constructed a mental health assessment system based on voice modes in a mobile device based on voice, which opens the mental health voice assessment with certain foresight [6]. Newson Jennifer J. et al. assessed the Chinese and Canadian interactive mental health by taking a pilot primary care outpatient clinic led by nurse practitioners as the research object, which is conducive to strengthening the mental health communication between clinicians and patients [7]. OReilly Michelle et al. analyzed 28 videos recording British children's psychology by using discourse psychology, established a rhetorical case to prove the clinical need, and believed that children's mental health is related to parents' teaching by words and deeds [8]. Since then, with the development of information technology, people began to introduce computer-aided methods to evaluate psychology, such as Heesacker Martin using computer system, and CNN proposed by some scholars to evaluate psychology [9][10][11]. e above research indicates that most of the current mental health assessment methods are mainly based on experts' experience and analysis, and there is a certain degree of subjectivity. In order to better objectively assess mental health, an automatic intelligent assessment method of mental health based on the rapidly developing convolutional neural network is proposed.
Introduction to OpenPose
Algorithm. OpenPose algorithm is a bottom-up algorithm based on convolutional neural network, which is suitable for single and multiperson pose recognition and has good robustness. e basic structure of OpenPose algorithm is shown in Figure 1, and there are two branches and multistage convolutional neural networks [12]. Here, the yellow and blue parts represent one branch, respectively, and the left and right parts represent two phases, respectively. e yellow branch is used to describe the confidence map of face and posture key points, and the blue branch is used to describe the correlation degree of each key point. e left part is responsible for generating detection confidence maps and partial affinity domains, and the right part is responsible for connecting the prediction results of different branches of yellow and blue to improve the prediction accuracy.
Introduction to Multimodal Fusion.
Multimodal fusion refers to the fusion of feature information of different modes [13], including three fusion modes, namely, data layer fusion, feature layer fusion, and decision layer fusion. Data layer fusion first combines data, extracts features from the combined data, and then inputs them into a classifier for recognition. Feature layer fusion extracts different modal information data features separately and inputs the combined features into a classifier for recognition. Decision layer fusion extracts data features separately, identifies each extracted feature, and finally fuses the recognition results. In practical applications, multimodal fusion based on data layer plays a positive role in recognition task, but its fusion efficiency is low. e multimodal fusion method based on feature layer may increase the amount and difficulty of calculation because it cannot screen effective features, thus reducing the model recognition results. e fusion method based on decision layer only combines the results of different modes but theoretically does not really integrate the information of all modes [14,15]. According to the characteristics of mental health assessment mainly from three aspects of human face, voice and gait, as well as reference [16], the multimodal fusion method based on feature layer is adopted, and its basic process is shown in Figure 2.
Introduction to Attention Mechanism.
Attention mechanism is a kind of perception mode that simulates the human brain to selectively attach importance to useful information and discard useless information, which is first applied in the field of visual images. In recent years, with the in-depth study of attention mechanism, it has been widely used in image recognition, recommendation system, and other fields. e attention mechanism usually follows the form of query (Q), keyword (K), and weight value (V). e structure of the classical attention mechanism is shown in Figure 3 [17].
When the attention mechanism is introduced to distribute weight, formula (1) can be used to distribute weight [18]: where L represents the number of keywords and Similarity ( ) means similarity calculation function, which usually includes the following three functions: where W represents the learnable parameter and d represents the dimension of keyword and weight value.
Introduction of SVM.
SVM algorithm is a nonstatistical classification algorithm, and its basic principle is based on nonlinear transformation regression function to map sample data to high-dimensional feature space, which can realize the sample data conversion. Its kernel function is defined as follows: where x and z represent data points in the original space and φ represents nonlinear transformation. In general, the kernel function of SVM is Gaussian kernel function, as shown in the following: where z represents the center value of Gaussian function and σ represents the width parameter of function.
Overall Framework of the Model.
Based on the above analysis, multiple modes of face, voice, and gait are integrated based on the OpenPose algorithm of convolutional neural network, and attention mechanism is used to allocate the weight of different modes reasonably. A mental health assessment model of multimode fusion with the introduction of the attention mechanism is proposed, and its overall framework is shown in Figure 4. Firstly, OpenPose algorithm is used to extract key points of human face and posture, and openSMILE is used to extract low-level voice descriptors. en, the modal characteristics can be calculated by time domain statistical parameters. Finally, attention mechanism is introduced to allocate the weight of each mode reasonably, and support vector machine (SVM) is used to classify and recognize the mental health assessment.
Feature Extraction.
For feature extraction of face and gait images, OpenPose algorithm is adopted to extract key points of face and gait. At the same time, face data and gait data are input into the algorithm to generate detection confidence graph combination S and confidence graph unit S j , whose calculation formulas are shown as follows: Computational Intelligence and Neuroscience where J values are 68 and 18, respectively; then coordinate set Fcoo t of key points of face image in frame T and coordinate set Gcoo t of key points of gait image in frame T can be expressed as follows: Here, openSMILE method is used to extract the short time energy, formant, pitch frequency, and MFCC of voice features. e short-time energy E(i) of frame i voice signal y i n can be calculated by formula (11): By calculating data(n) of voice signal and carrying out Fourier transform, pitch frequency PF is solved.
where v(n) represents the corresponding filtering of sound channel and u(n)u(n) represents the excitation response of glottis pulse. e formant parameters For1, For2, and For3 of voice signal peak are calculated by LPC root method [20].
According to formula (11), the spectrum of voice signal is calculated. Combined with Mayer filter and discrete cosine transform, the 12th-order MFCC features are obtained, which can be expressed as formula (12): MFCC � mfcc 1 , mfcc 2 , . . . , mfcc 12 , where P is the power spectrum; FFT is the fast Fourier transform; X i is the voice signal; and N � 512.
Calculating Time Domain Statistical Parameter.
Time domain features can describe the characteristics of different data in time dimension. In addition, arithmetic sum, mean, minimum, maximum, variance, standard deviation, skewness, kurtosis, and correlation coefficient between two axes are selected as the calculation types of time domain feature statistical parameters by referring to literature [21,22].
Introducing Attention Mechanism.
Similarity calculation of mental health assessment query and keyword with attention mechanism includes two aspects, namely, dot product calculation of vector check and cosine similarity calculation, as shown in the following formulas [23]: similarity2 Query, en, the weight coefficient is solved through normalization operation, as shown as follows: Finally, formula (16) shows the weighted sum of weight coefficients, the final fusion feature Fusatt can be obtained [24]. e size of Fusatt is 103 * 4, and its calculation method is shown in formula (17) [25]: In formula (17), F fea , V fea , andG fea represent facial, phonetic, and gait features, respectively.
SVM Classification.
According to the evaluation indicators of mental health, there are 10 SVM classifiers trained to judge different mental health conditions, corresponding to 10 mental health indicators such as somatization, depression, and anxiety. Each psychological index includes negative and positive two states, corresponding to not suffering from or suffering from the corresponding psychological disease of this index.
Construction of Experimental Environment.
is experiment is conducted on Windows7 operating system with Intel Xeon Silver4110 CPU; graphics card is NVIDIA GeForce GTX 1080Ti with of 16 G memory; the memory is 128 G; and the development language is Python.
Data Sources and Preprocessing
4.2.1. Data Sources. In this experiment, facial, gait, and voice data of 680 employees in a company collected by Guangdong Electric Power Research Institute and mental health data collected in the form of questionnaires are selected as experimental data. e basic information is shown in Table 2 [26].
Data Preprocessing.
To avoid the impact of invalid data on model performance, this experiment deletes and processes the invalid data of some missing values contained in the data set and finally obtains 672 valid data samples. In addition, considering that there may be noise and background sound in the acquisition process of facial, voice, and gait data, the video and audio data are preprocessed, respectively.
For the video data, GaussianBlur function is called to denoise the video data, and then short time series video is generated by resampling, so as to improve the proportion of effective information. For facial video data, a video with a duration of 30 s is used as a segment; for gait video data, a video with a duration of 8 s is taken as a segment [27,28]. e preprocessed face and gait are F and G, respectively, so the video data can be expressed as where t is determined by the size of video frames and f t and g t are single-frame facial and gait images. For audio data, the first is to delete incomplete recording data; the second is to call wiener filtering method in Wiener function to denoise data; thus the random noise in audio is eliminated; finally, the voice is divided into multiple sequence combinations within 1 s to obtain audio set V, which can be expressed as where t is determined by the length of audio and voice; v t is the data storage format; and the storage format of v t in this experiment is matrix storage. rough the above preprocessing, a total of 658 valid data samples are obtained in this experiment.
Parameter Settings.
In this experiment, parameters of SVM are set as follows: the kernel function is Gaussian function; degree � 3; and the penalty coefficient of error term is 1.
where TP and TN represent true positive cases and true negative cases and FP and FN represent false positive cases and false negative cases. According to the formulas, the higher the accuracy, accuracy, and recall, the better the model performance. However, the accuracy and recall cannot grow at the same time. To balance the two, F1 value index is proposed. e higher F1 value indicates that the accuracy and recall are most balanced. Based on the above analysis, accuracy and F1 values are finally selected as indicators to evaluate the performance of model.
Method Verification.
To verify the effectiveness of proposed method, the preprocessed data are used to verify the proposed method, and identification accuracy of somatization, depression, anxiety, and other mental health indicators is used as evaluation criteria. Figures 5∼7 show the recognition results based on the single mode of face, voice, and gait; Figures 8∼10 show the recognition results of face + voice, face + gait, and voice + gait; Figure 11 shows the multimode recognition results of face + voice + gait; Figure 12 shows the recognition results of introducing attention mechanism based on Figure 11.
As can be seen from Figure 5, the overall recognition accuracy of recognition method based on facial single mode for all mental health indicators is 71.86%, among which the recognition accuracy of obsessive-compulsive and anxiety indicators is higher, reaching 73.75% and 73.53%, respectively. e recognition accuracy of other indicators is the lowest, reaching 68.53%. In addition, the overall F1 value of the method is 0.71.
As can be seen from Figure 6, the overall recognition accuracy of recognition method based on voice single mode for all mental health indicators is 64.89%. Among them, the recognition accuracy of anxiety and hostility indicators is relatively high, reaching 68.36% and 67.36%, respectively. Computational Intelligence and Neuroscience e recognition accuracy of other and dreadness indicators is relatively low, reaching 60.36% and 61.52%, respectively. In addition, the overall F1 value of the method is 0.65.
As can be seen from Figure 7, the overall recognition accuracy of recognition method based on gait single mode for all mental health indicators is 62.51%. Among them, the recognition accuracy of obsessive-compulsive and anxiety indicators is relatively high, reaching 64.74% and 64.53%, respectively. However, the recognition accuracy of other indicators, interpersonal relationship sensitivity, and psychopathy is relatively low, reaching 60.04%, 60.52%, and 60.83%, respectively. In addition, the overall F1 value of the method is 0.60.
As can be seen from Figure 8, the recognition accuracy of recognition method based on the face + voice dual-modal fusion for mental health indicators is high, and the overall recognition accuracy is 73.06%. Among them, the recognition accuracy of depression and anxiety indicators is higher, reaching 76.76% and 76.65%, respectively.
e recognition accuracy rate of other and dreadness indicators is the lowest, reaching 66.13% and 68.73%, respectively. In addition, the overall F1 value of the method is 0.74. Compared with the recognition method based on single mode, the average recognition accuracy is improved by 1.42%, and the average F1 value is increased by 11.71%.
As can be seen from Figure 9, the recognition accuracy of recognition method based on the face + gait dual-modal fusion for mental health indicators is high, and the overall recognition accuracy is 72.30%. Among them, the recognition accuracy of obsessive-compulsive is high, reaching 76.67%, while the recognition accuracy of other indicators is low, reaching 65.49%. In addition, the overall F1 value of the method is 0.71.
As can be seen from Figure 10, the overall recognition accuracy of recognition method based on voice + gait dualmodal fusion for mental health indicators is 64.94%. Among them, the recognition accuracy of anxiety and crankiness is 69.48% and 68.53%, respectively, while the recognition accuracy of other indicators is 58.64%. In addition, the overall F1 value of the method is 0.66.
As can be seen from Figure 11, compared with the recognition methods based on single-modal and multimodal fusion, the overall recognition accuracy of recognition method based on facial + speech + gait multimodal fusion for various mental health indicators is improved to different degrees, reaching 73.49%. Among them, the recognition accuracy of anxiety reaches 78.72%, and the recognition accuracy of other indicators is lower, reaching 64.18%. In addition, the overall F1 value of the method is 0.75.
Computational Intelligence and Neuroscience
As can be seen from Figure 12, the recognition method of face + voice + gait multimodal fusion with attention mechanism can accurately evaluate mental health. e recognition accuracy of anxiety and hostility can reach more than 80%, and the recognition accuracy of somatization, depression, and psychopathy can reach more than 79.3%. e overall recognition accuracy of mental health indicators is 77.20%. In addition, the overall F1 value of the method reaches 0.77.
In conclusion, the proposed method of mental health assessment can effectively improve the recognition accuracy of mental health indicators and F1 value by fusing face, voice, and gait. In addition, attention mechanism is introduced. Compared with recognition method based on singlemodal, double-modal, and multimodal fusion, the proposed method of mental health assessment has better recognition effect and has certain effectiveness.
Comparison of Methods.
To further verify the effectiveness and superiority of proposed method, the evaluation effect is compared with that of the commonly used mental health assessment method. e recognition accuracy of different methods is shown in Figure 13(a), and the F1 value is shown in Figure 13(b). In the figures, F, V, and G are single-modal recognition methods based on face, voice, and gait, respectively. F + V, F + G, and V + G are dual-modal fusion recognition methods based on face + voice, face + gait, and voice + gait, respectively. F + V + G is face + voice + gait multimodal fusion recognition method. (F + V + G) attention is a multimodal fusion recognition method that introduces attention mechanism. Figure 13(a) shows that the recognition accuracy of multimodal fusion method is higher than that of single-modal and dual-modal fusion recognition methods. e proposed multimodal fusion method with attention mechanism has the highest recognition accuracy, Computational Intelligence and Neuroscience reaching 77.2%. As can be seen from Figure 13(b), the proposed multimodal fusion recognition method based on the attention mechanism has the highest F1 value, reaching 0.77. Compared with the recognition method based on face single-modal fusion, F1 value is increased by 9.10%. Compared with the recognition method based on dualmodal fusion, the average F1 value is improved by 11.53%. Compared with the multimode fusion recognition method without attention mechanism, the F1 value is improved by 2.60%. e experimental results show that the proposed method has certain effectiveness and superiority in mental health assessment and solves the problem of insufficient information based on single mode and double mode. Meanwhile, the attention mechanism is introduced to reasonably allocate the weight of face, voice, and gait modes and improve the model performance. Compared with the recognition method based on single-modal fusion and dualmodal fusion, and the multimodal fusion recognition method without attention mechanism, the recognition accuracy and F1 value of the proposed method are improved to varying degrees, and the recognition effect is better.
Conclusion
To sum up, the proposed mental health assessment method based on convolution neural network can realize effective identification and evaluation of somatization, depression, anxiety, and other mental health indicators, where modal characteristics of face, voice, and gait are fused. In addition, attention mechanism is introduced to allocate different modal weights. e overall accuracy can reach 77.20%, and F1 value can reach 0.77. Compared with the recognition methods based on face single-modal fusion, face + voice dual-modal fusion, and face + voice + gait multimode fusion, the recognition accuracy and F1 value of the proposed method are improved to varying degrees, and the recognition effect is better, which has certain practical application value. However, due to the limitation of conditions, there are Computational Intelligence and Neuroscience still some deficiencies to be improved, mainly focusing on the construction of data set. At present, there are few data sets about mental health in China, and the size of data sets has a great influence on mental health assessment model, so the number of data sets selected in this paper is far from the requirements. erefore, it is necessary to build a mental health database with large amount of data and high quality. e next step is collect more original data to enhance the model performance and improve the recognition accuracy of model.
Data Availability
e experimental data used to support the findings of this study are available from the author upon request.
Conflicts of Interest
e author declares no conflicts of interest regarding this work. | 5,138 | 2022-05-09T00:00:00.000 | [
"Computer Science"
] |
Growth Monitoring and Yield Estimation of Maize Plant Using Unmanned Aerial Vehicle (UAV) in a Hilly Region
More than 66% of the Nepalese population has been actively dependent on agriculture for their day-to-day living. Maize is the largest cereal crop in Nepal, both in terms of production and cultivated area in the hilly and mountainous regions of Nepal. The traditional ground-based method for growth monitoring and yield estimation of maize plant is time consuming, especially when measuring large areas, and may not provide a comprehensive view of the entire crop. Estimation of yield can be performed using remote sensing technology such as Unmanned Aerial Vehicles (UAVs), which is a rapid method for large area examination, providing detailed data on plant growth and yield estimation. This research paper aims to explore the capability of UAVs for plant growth monitoring and yield estimation in mountainous terrain. A multi-rotor UAV with a multi-spectral camera was used to obtain canopy spectral information of maize in five different stages of the maize plant life cycle. The images taken from the UAV were processed to obtain the result of the orthomosaic and the Digital Surface Model (DSM). The crop yield was estimated using different parameters such as Plant Height, Vegetation Indices, and biomass. A relationship was established in each sub-plot which was further used to calculate the yield of an individual plot. The estimated yield obtained from the model was validated against the ground-measured yield through statistical tests. A comparison of the Normalized Difference Vegetation Index (NDVI) and the Green–Red Vegetation Index (GRVI) indicators of a Sentinel image was performed. GRVI was found to be the most important parameter and NDVI was found to be the least important parameter for yield determination besides their spatial resolution in a hilly region.
Introduction
The population of the world has been rising day by day, thus increasing the demand for food, shelter, and other basic needs [1]. Land is the most common natural resource which fulfills all the basic needs providing a platform for food production, shelter, and other basic needs [2]. Land may be thus taken as a finite resource in the sense that its area cannot be increased. Thus, for the increasing population, the only way to maintain food resources is by increasing productivity [3]. Due to the advancement of technology, the use of fertilizers, and other means, productivity can be increased, thus maintaining a balance between population and food resources [4,5]. For viable agricultural production, the study of the latest trends and technology in an agricultural domain is necessary. Tracking the phases of a crop can be achieved by studying its phenology and biomass estimation, which ultimately helps in understanding environmental factors that affect the crop growth and yield it provides [6].
In agricultural streams such as forestry and crop production, biomass is normally defined as the dry mass of the above-ground part of a specific category of plants [7,8].
Biomass is important in various fields as it provides much information regarding plant growth, its yield, energy that can be liberated from it, and so on [9]. Therefore, examining biomass is helpful for many research and forecast activities [10]. Biomass can be examined with various methods, i.e., direct burning and weighing, and using empirical formulas for specific plants.
Remote sensing products such as Vegetation Indices are often used to estimate biomass and monitor plant growth [10]. Till now, various Vegetation Indices have been developed to monitor plant growth and estimate biomass. Some of them are Normalized Difference Vegetation Index, Soil Adjusted Vegetation Index, Green Vegetation Index, Green-Red Vegetation Index, Excess Green Vegetation Index, etc. [11]. Among them, NDVI and SAVI are considered to be the more common and accurate means to estimate biomass and monitor plant growth [11,12]. However, the calculation of NDVI and SAVI requires an NIR camera; the images seem to be expensive to produce [13]. On the other hand, ExG and GRVI provide a means of plant growth and monitoring using RGB cameras and images since they can be easily calculated from images captured from RGB cameras [14,15].
These days, most precision farming investigations are focused on the execution of a wide extent of sensors and instruments able to remotely identify crop and soil properties in quasi-real time [16,17]. The spatial resolution of major satellite sensors has been upgraded dramatically in the modern days [18,19]. However, they are not able to perform repeated measurements regarding the crop cycle. In order to alleviate such problems, the use of Unmanned Aerial Vehicle ((UAV) that allows very high spatial resolution (of the order of a few centimeters), as well as the ability to obtain repeated measurements from time to time, is the advantage over high-altitude remote sensing [20,21]. Within the last decade, the improvement of Unmanned Aerial Vehicle (UAV) platforms characterized by small size has advertised an unused solution for crop management and observing, capable of convenient provision of high-resolution images, particularly where small productive ranges have to be checked [22,23].
This research study aims to capture aerial imagery through UAV and further process those images from which Crop Surface Model (CSM), Plant Height (PH), Green-Red Vegetation Index (GRVI), Biomass, and, finally, Yield values can be obtained using a fieldbased technique, aerial survey and satellite-based technology [4,24,25]. The result obtained from various sources has been compared with the ground-based actual result to see the deviation from actual yield in the ground [26]. This research project has the potential to revolutionize the way that maize is grown and harvested. By providing farmers with accurate and timely information about plant growth, the project can help them to make better decisions about irrigation, fertilization, and other management practices. This can lead to increased yields and improved profitability for farmers [27,28].
Materials and Methods
First and foremost, visiting the proposed project site was one preliminary task. GCPs were established and the coordinates of the GCPs were determined with the help of DGPS. Images of the plot only (without crop) were taken with the help of an Unmanned Aerial Vehicle (RGB spectral bands). Then, UAV images were taken at numerous phenological stages of the maize plant life cycle to monitor the crop growth and to estimate yield [29,30]. From the acquired images, Orthophoto, DTM, and DSM were created. DSM is needed for average crop height. From the generated Orthophoto, Green-Red Vegetation Index was calculated. Leaf Area Index (LAI) was calculated from the field-based method in which the area of the leaf is computed with the help of a measuring tape. Around 25 sample points of areas of 1 m 2 (1 m × 1 m) were chosen in the field. The average Plant Height, Biomass, and LAI of the sample points were computed to generalize our result. From the computed values, GRVI, LAI, Biomass and Plant Height, Yield of the crop plot was estimated. Variations in LAI wrt Yield and Biomass, Plant Height wrt Yield and Biomass, GVI wrt Biomass and Yield, GRVI wrt Biomass and Yield and GVI and GRVI wrt Biomass and Yield were modelled with the help of respective graphs between them. NDVI was also calculated using a Sentinel-based product from Google Earth Engine [31]. A Sentinel satellite dataset is a collection of data collected by the European Space Agency's (ESA) Sentinel satellites. These satellites are part of the Copernicus program, a large Earth observation program which collects data about land, marine, and atmospheric environments. The data collected by the Sentinel satellites are used to monitor and study climate change, natural disasters, land use, and other environmental conditions. Sentinel datasets can be used for a variety of applications, including monitoring of agricultural land, mapping glaciers, assessing deforestation, detecting oil spills, and studying ocean currents. The NDVI product was also used to estimate the yield. Finally, all the parameters were used to determine the yield and validated with the actual yield from the ground. Figure 1 illustrates the research design which has been implemented for the completion of this study. Both primary and secondary data were used to complete this work. The data collection method is termed a mixed method because both primary and secondary datasets have been used here. Primary data were collected from a field-based survey and remote sensing imagery was the secondary source data. As a part of primary data, the imagery was obtained from the Unmanned Aerial Vehicle. Secondary data source incorporated the use of sentinel based satellite imageries. computed values, GRVI, LAI, Biomass and Plant Height, Yield of the crop plot was estimated. Variations in LAI wrt Yield and Biomass, Plant Height wrt Yield and Biomass, GVI wrt Biomass and Yield, GRVI wrt Biomass and Yield and GVI and GRVI wrt Biomass and Yield were modelled with the help of respective graphs between them. NDVI was also calculated using a Sentinel-based product from Google Earth Engine [31]. A Sentinel satellite dataset is a collection of data collected by the European Space Agency's (ESA) Sentinel satellites. These satellites are part of the Copernicus program, a large Earth observation program which collects data about land, marine, and atmospheric environments. The data collected by the Sentinel satellites are used to monitor and study climate change, natural disasters, land use, and other environmental conditions. Sentinel datasets can be used for a variety of applications, including monitoring of agricultural land, mapping glaciers, assessing deforestation, detecting oil spills, and studying ocean currents. The NDVI product was also used to estimate the yield. Finally, all the parameters were used to determine the yield and validated with the actual yield from the ground. Figure 1 illustrates the research design which has been implemented for the completion of this study. Both primary and secondary data were used to complete this work. The data collection method is termed a mixed method because both primary and secondary datasets have been used here. Primary data were collected from a field-based survey and remote sensing imagery was the secondary source data. As a part of primary data, the imagery was obtained from the Unmanned Aerial Vehicle. Secondary data source incorporated the use of sentinel based satellite imageries. •
Study Area
The study area of the project "Growth Monitoring and Yield Estimation of Maize Plant using UAV" is Dhulikhel, Kavrepalanchowk, as illustrated in Figure 2 below. Dhulikhel is one of the leading municipalities in Kavrepalanchok District of Nepal. The study area of the project "Growth Monitoring and Yield Estimation of Maize Plant using UAV" is Dhulikhel, Kavrepalanchowk, as illustrated in Figure 2 below. Dhulikhel is one of the leading municipalities in Kavrepalanchok District of Nepal. Dhulikhel is located at 27 • 37 20 North latitude and 85 • 33 34 East longitude [32,33]. It is situated at the Eastern edge of Kathmandu Valley, south of the Himalayas at 1550 m over sea level, and lies 30 km southeast of Kathmandu and 74 km southwest of Kodari. B.P Highway and Araniko Highway pass through Dhulikhel, which are the vital highways of Nepal [34,35]. The majority of population is engaged in agriculture; rice, maize and wheat are the major crops of Dhulikhel Municipality [36]. The production of maize seems to be increasing rapidly every year, whereas the production of wheat seems to be decreasing. Winter is characterized by much less rainfall than summer [37,38]. Production of the maize crop is very favorable in this climate, and since the production is growing every year, the study of different phonological stages of maize is needed as has been performed here [39]. Dhulikhel is located at 27°37′20″ North latitude and 85°33′34″ East longitude [32,33]. It is situated at the Eastern edge of Kathmandu Valley, south of the Himalayas at 1550 m over sea level, and lies 30 km southeast of Kathmandu and 74 km southwest of Kodari. B.P Highway and Araniko Highway pass through Dhulikhel, which are the vital highways of Nepal [34,35]. The majority of population is engaged in agriculture; rice, maize and wheat are the major crops of Dhulikhel Municipality [36]. The production of maize seems to be increasing rapidly every year, whereas the production of wheat seems to be decreasing. Winter is characterized by much less rainfall than summer [37,38]. Production of the maize crop is very favorable in this climate, and since the production is growing every year, the study of different phonological stages of maize is needed as has been performed here [39]. •
Research Design
The overall methodology of the experiment is presented in Figure 3. •
Research Design
The overall methodology of the experiment is presented in Figure 3.
Results
The whole project area was divided into five different sample areas based on the area of the individual plot ( Figure 4). Further, the sample area was divided into five different sub-plots based on the clustered sampling technique as 1a, 1b, 1c, 1d and 1e for Sample Area 1. Similarly, for Sample Area 2, these sub-plots were divided into 2a, 2b, 2c, 2d and 2e sample areas. Further, other sub-plots were classified accordingly.
Results
The whole project area was divided into five different sample areas based on the area of the individual plot ( Figure 4). Further, the sample area was divided into five different sub-plots based on the clustered sampling technique as 1a, 1b, 1c, 1d and 1e for Sample Area 1. Similarly, for Sample Area 2, these sub-plots were divided into 2a, 2b, 2c, 2d and 2e sample areas. Further, other sub-plots were classified accordingly.
DGPS Survey Result
DGPS survey was carried out in order to establish the Ground Control Points (GCPs) Thus established GCPs were further used for referencing in the images ( Figure 5). The base used was the fourth-order control point from the Land Management Training Center.
Obtained coordinates of the respective plots are tabulated below in Table 1.
DGPS Survey Result
DGPS survey was carried out in order to establish the Ground Control Points (GCPs). Thus established GCPs were further used for referencing in the images ( Figure 5). The base used was the fourth-order control point from the Land Management Training Center.
Obtained coordinates of the respective plots are tabulated below in Table 1.
Growth Monitoring through Leaf Area Index
Leaf Area Index is another approach to monitoring the growth of the plant. The leaf area was measured directly in the field in different growth stages of the plant. The change in Leaf Area Index shows the growth pattern of the plant. LAI was computed using the formula given below: (1)
Growth Monitoring through Leaf Area Index
Leaf Area Index is another approach to monitoring the growth of the plant. The leaf area was measured directly in the field in different growth stages of the plant. The change in Leaf Area Index shows the growth pattern of the plant. LAI was computed using the formula given below: (1) Figure 6 shows the Leaf Area Index of the maize plant in the project area at different phenological stages (26 days for first phase, 22 days for second phase, 25 days for third phase). According to the graph, LAI value ranges from 0 to 9.55. According to the data obtained on 8 August, the value of the growth of a plant is higher in Sub-plot 5, i.e., 5a, 5b, 5c, 5d and 5e. The LAI on 8 August was affected by rain and wind, causing the leaf of Sub-plot 2e to be zero, meaning that the plant was dead. LAI seems to be continuously Figure 6 shows the Leaf Area Index of the maize plant in the project area at different phenological stages (26 days for first phase, 22 days for second phase, 25 days for third phase). According to the graph, LAI value ranges from 0 to 9.55. According to the data Sensors 2023, 23, 5432 8 of 26 obtained on 8 August, the value of the growth of a plant is higher in Sub-plot 5, i.e., 5a, 5b, 5c, 5d and 5e. The LAI on 8 August was affected by rain and wind, causing the leaf of Sub-plot 2e to be zero, meaning that the plant was dead. LAI seems to be continuously increasing in different phenological stages, starting on 25 May 2021, 22 June 2021, 14 July 2021, and ending finally on 15 August 2021.
Growth Monitoring through Crop Surface Model
Crop Surface Model is one of the approaches to observing the development of a plant. The CSM generated at numerous phases helps in monitoring the growth in individual plots. The variation in the plant height at different stages shows the growth pattern of the plant. Generation of the Crop Surface Model from the obtained Digital Surface Model (DSM) was obtained at different phases of crop life cycle as indicated in the figure above. Figure 7 shows the Crop Surface Models of the project area at different phenological stages. The CSM on 25 May shows that the plant height is in the range of 0 m to 2.4 m. The growth of a plant is maximum in Plot T9 and Plot T10 and low in Plots T3, T4, T5, and T6.
The CSM on 23 July shows that the height of plants is in the range of 0 m to 4 m. The growth of a plant is maximum in Plots T9 and T10. The CSM maps show the plot-wise comparison of the plant height at different growth stages. The change in plant height in each plot is seen on the CSM maps generated on a different date. From the above maps, it is seen that the overall growth of a plant is maximum in Plot T9 and Plot T10. Figure 8 shows the plot-wise plant height of the plot at different growth stages generated from the Crop Surface Model. The variation in plant height in various plots at numerous developmental phases helps in monitoring the growth of a plant. The graph shows that the overall growth of a plant is maximum in Plot 4 and Plot 5. Since Plots 1, 2 and 3 were adversely affected, the Plant Height value in these Plots also seems to be highly affected. In Plot 2e, Plant Height seems to be 0, meaning that all the plants in that sample plot were dead during the image acquisition at the last time frame, i.e., 15 August 2021.
Growth Monitoring through Crop Surface Model
Crop Surface Model is one of the approaches to observing the development of a plant. The CSM generated at numerous phases helps in monitoring the growth in individual plots. The variation in the plant height at different stages shows the growth pattern of the plant. Generation of the Crop Surface Model from the obtained Digital Surface Model (DSM) was obtained at different phases of crop life cycle as indicated in the figure above. Figure 7 shows the Crop Surface Models of the project area at different phenological stages. The CSM on 25 May shows that the plant height is in the range of 0 m to 2.4 m. The growth of a plant is maximum in Plot T9 and Plot T10 and low in Plots T3, T4, T5, and T6. The CSM on 23 July shows that the height of plants is in the range of 0 m to 4 m. The growth of a plant is maximum in Plots T9 and T10. The CSM maps show the plot-wise comparison of the plant height at different growth stages. The change in plant height in each plot is seen on the CSM maps generated on a different date. From the above maps, it is seen that the overall growth of a plant is maximum in Plot T9 and Plot T10. Figure 8 shows the plot-wise plant height of the plot at different growth stages generated from the Crop Surface Model. The variation in plant height in various plots at numerous developmental phases helps in monitoring the growth of a plant. The graph shows that the overall growth of a plant is maximum in Plot 4 and Plot 5. Since Plots 1, 2 and 3 were adversely affected, the Plant Height value in these Plots also seems to be highly affected. In Plot 2e, Plant Height seems to be 0, meaning that all the plants in that sample plot were dead during the image acquisition at the last time frame, i.e., 15 August 2021.
Growth Monitoring through Green-Red Vegetation Index
Green-Red Vegetation Index is another approach to monitor plant growth. The images taken at a different time are processed to obtain an orthomosaic of the images from which the Green-Red Vegetation Index is generated using the ArcGIS 10.8 software.
GRVI =
Green Red Green Red .
(2) Figure 9 shows the Green-Red Vegetation Index at different growth stages. The change in GRVI value shows the growth pattern of the plants. The GRVI map generated on June 22 shows that the GRVI value at that stage ranges from −0.2 to 0.2 and there is a similar pattern of GRVI in each sub-plot, which shows that there is a similar growth pattern of plants in each sub-plot. The negative GRVI value represents that there is the less green pigment in the plant, i.e., the reflectance value of visible red light is greater than the that of visible green light. The GRVI map generated on May 25 shows that the GRVI value at that stage ranges from −0.2 to 0.4 and there are maximum GRVI values in Plots 1 and 3. Similarly, the GRVI map generated on July 14 shows that the GRVI value at that specific time period ranges from 0.2 to 0.4 at different sub-plots. Finally, after the image acquisition on 15 August, GRVI map was generated on which GRVI value slightly declined to that of 14 July. Even at the final growth monitoring stages, GRVI value at different subplots declined randomly; this is because the plant in that region was dead because of heavy rain and wind at that time. Figure 10 shows the plot-wise average GRVI value in each growth stage. The graph shows the change in GRVI value in each growth stage. The GRVI value is higher in Plot 4 and Plot 5, which shows that the value of the growth of a plant is higher in Plot 4 and Plot 5. In some plots, the GRVI value on August 15 decreased from the previous stage, which means that the plant started to become yellowish, i.e., the maturity stage of the plant started.
Growth Monitoring through Green-Red Vegetation Index
Green-Red Vegetation Index is another approach to monitor plant growth. The images taken at a different time are processed to obtain an orthomosaic of the images from which the Green-Red Vegetation Index is generated using the ArcGIS 10.8 software.
(2) Figure 9 shows the Green-Red Vegetation Index at different growth stages. The change in GRVI value shows the growth pattern of the plants. The GRVI map generated on June 22 shows that the GRVI value at that stage ranges from −0.2 to 0.2 and there is a similar pattern of GRVI in each sub-plot, which shows that there is a similar growth pattern of plants in each sub-plot. The negative GRVI value represents that there is the less green pigment in the plant, i.e., the reflectance value of visible red light is greater than the that of visible green light. The GRVI map generated on May 25 shows that the GRVI value at that stage ranges from −0.2 to 0.4 and there are maximum GRVI values in Plots 1 and 3. Similarly, the GRVI map generated on July 14 shows that the GRVI value at that specific time period ranges from 0.2 to 0.4 at different sub-plots. Finally, after the image acquisition on 15 August, GRVI map was generated on which GRVI value slightly declined to that of 14 July. Even at the final growth monitoring stages, GRVI value at different sub-plots declined randomly; this is because the plant in that region was dead because of heavy rain and wind at that time. Figure 10 shows the plot-wise average GRVI value in each growth stage. The graph shows the change in GRVI value in each growth stage. The GRVI value is higher in Plot 4 and Plot 5, which shows that the value of the growth of a plant is higher in Plot 4 and Plot 5. In some plots, the GRVI value on August 15 decreased from the previous stage, which means that the plant started to become yellowish, i.e., the maturity stage of the plant started. Prior to these, some of the outliers can even be seen in the graph, meaning that the final stage of the maize plant was monitored with the acquisition of image on 15 August, and several plants were found dead, so their Leaf Area Index was mismatched. Prior to these, some of the outliers can even be seen in the graph, meaning that the final stage of the maize plant was monitored with the acquisition of image on 15 August, and several plants were found dead, so their Leaf Area Index was mismatched.
Relation between GRVI and NDVI
The below graph shows the relation between GRVI and NDVI of the plot (Figure 12). The regression equation between the GRVI and NDVI is y = 1.3785x + 0.5373 with a coefficient of determination of 0.5. This shows the mild relationship between GRVI and NDVI. NDVI was generated using a Sentinel-based product in Google Earth Engine. Since the NDVI does not seem to have a strong relation with the GRVI, we use GRVI to estimate the yield of the crop and test whether or not the Sentinel-based product is capable of estimating the yield. and Leaf Area Index. With the increase in Plant Height, the Leaf Area Index also increases This relation shows that there is a strong relation between Leaf Area Index and Plant Height, therefore Leaf Area Index can be used to monitor the growth of the plant.
Prior to these, some of the outliers can even be seen in the graph, meaning that the final stage of the maize plant was monitored with the acquisition of image on 15 August and several plants were found dead, so their Leaf Area Index was mismatched.
Relation between GRVI and NDVI
The below graph shows the relation between GRVI and NDVI of the plot (Figure 12) The regression equation between the GRVI and NDVI is y = 1.3785x +0.5373 with a coefficient of determination of 0.5. This shows the mild relationship between GRVI and NDVI NDVI was generated using a Sentinel-based product in Google Earth Engine. Since the NDVI does not seem to have a strong relation with the GRVI, we use GRVI to estimate the yield of the crop and test whether or not the Sentinel-based product is capable of estimating the yield.
Model Generation, Estimation and Validation
As illustrated in the methodology figure, the overall methodology of the thesis work is clearly shown. Yield estimation and validation were part of the workflow diagram. Estimation of yield was performed as part of a model generation, where the model was generated using regression analysis, which is shown below [40]. Further, the methodology for the validation of the obtained yield is also illustrated in Figure 13 below.
Model Generation, Estimation and Validation
As illustrated in the methodology figure, the overall methodology of the thesis work is clearly shown. Yield estimation and validation were part of the workflow diagram. Estimation of yield was performed as part of a model generation, where the model was generated using regression analysis, which is shown below [40]. Further, the methodology for the validation of the obtained yield is also illustrated in Figure 13
Relation between Yield and Plant Height
Plant Height was generated from the Crop Surface Model, which was used to develop a relationship between Plant Height and Yield. The yield from all the samples measured in the field was used to generate the regression model presented in Table 2.
Relation between Yield and Plant Height
Plant Height was generated from the Crop Surface Model, which was used to develop a relationship between Plant Height and Yield. The yield from all the samples measured in the field was used to generate the regression model presented in Table 2.
Relation between Yield and Leaf Area Index
Leaf Area Index was measured from the field measurement where the leaf of every plant of the sample area was measured. T2, T3, T4, and T5 time periods were used to measure the Leaf Area Index and were later used to develop a relationship with Yield. The yield from all samples measured in the field was used to generate the regression model (Table 3).
Relation between Yield and Green-Red Vegetation Index
Green-Red Vegetation Index was calculated in the different time period within the sample plot to see its relation with Yield. T2, T3, T4, and T5 time periods were used to measure the Green-Red Vegetation Index and were later used to develop a relationship with Yield (Table 4).
Relation between Yield and Biomass
Biomass was calculated in the final time period within the sample plot to see its relation with Yield (Table 5).
Relation between Yield and NDVI
Normalized Difference Vegetation Index (NDVI) generated from Sentinel-based products was used to see the relation between NDVI and Yield (Table 6).
Relation between Yield and Satellite-Based GRVI
Green-Red Vegetation Index generated from the Sentinel-based product was used to see the relation between GRVI and Yield ( Table 7).
Estimation of Yield from Plant Height
The regression model developed between Yield and Plant Height for the plot was determined using the regression model, which was used to estimate the yield as tabulated below (Table 8). Yield was estimated from each sample plot using the regression equation [41]. Since 70% of the total data was used for constructing the regression equation, the remaining 30% of the data was used to validate the result [42,43]. The error in percentage for the first sample plot was found to be 21.81%. Similarly, for the second sample plot, an error was computed to be 66.27%, which is the maximum error within all the sample plots. The main reason behind this large error is that the second plot was heavily affected by wind and rain, causing the plant to die. Since the model was generated using the condition that any other environment circumstance hasn't affected the growth of maize plant, this led to a difference between the actual ground scenario and the model equations, resulting in high error. Moreover, for the third sample plot, the error was found to be 19.64%. The fourth and fifth sample plots contained a lesser amount of error compared to other sample plots as they were the least affected by environmental circumstances, and the error was found to be 7.32% and 3.33%, respectively.
Estimation of Yield from Leaf Area Index (LAI)
The regression model developed between the Yield and Leaf Area Index for the plot was determined using the regression model, which was used to estimate the yield as tabulated below (Table 9). Yield was estimated from each sample plot using the regression equation [41]. Since 70% of the total data was used for constructing the regression equation, the remaining 30% of the data was used to validate the result. The error in percentage for the first sample plot was found to be 14.15%. Similarly, for the second sample plot, the error was computed to be 58.49%, which is the maximum error within all the plots i.e., 1,2,3,4 and 5. The main reason behind this large error is that the second plot was heavily affected by wind and rain, causing the plant to die. Since the model was generated using the condition that any other environment circumstance has not affected the growth of maize plant, this led to a difference between actual ground scenario and the model equations, resulting in high error. Moreover, for the third sample plot, the error was found to be 15.60%. The fourth and fifth sample plots contained a lesser amount of error compared to other sample plots as they were the least affected by environmental circumstances, and the error was found to be 5.30% and 1.08%, respectively.
Estimation of Yield from Green-Red Vegetation Index (GRVI)
The regression model developed between Yield and Green Red Vegetation Index (GRVI) for the plot was determined using a regression model, which was used to estimate the yield as tabulated below (Table 10). Yield was estimated from each sample plot using the regression equation [41]. Since 70% of the total data was used for constructing the regression equation, the remaining 30% of the data was used to validate the result. The error in percentage for the first sample plot was found to be 11.84%. Similarly, for the second sample plot, the error was computed to be 54.86%, which is the maximum error within all the sample plots. The main reason behind this large error is that the second plot was heavily affected by wind and rain, causing the plant to die. Since the model was generated using the condition that any other environment circumstance has not affected it, this led to a difference between actual ground scenario and the model equations, resulting in high error. Moreover, for the third sample plot, the error was found to be 10.62%. The fourth and fifth sample plots contained a lesser amount of error compared to other sample plots as they were the least affected by environmental circumstances, and the error was found to be 4.26% and 1.07%, respectively.
Estimation of Yield from Biomass
The regression model developed between Yield and Biomass for plot was determined using a regression model, which was used to estimate the yield as tabulated below in Table 11. The yield of each sample plot was calculated using a regression equation. In total, 70% of the data was used to construct the equation, and the rest of the data was employed to validate the results. The percentage error for the first plot was determined to be 17.51%. On the other hand, the maximum error among all the plots was found to be 62.10% for the second plot, which was due to the environmental conditions such as wind and rain that killed the plants. The error for the third plot was 16.63%. The errors for the fourth and fifth plots were found to be 7.25% and 2.47%, respectively, since these plots were least affected by environmental conditions.
Estimation of Yield from Normalized Difference Vegetation Index (NDVI)
The regression model developed between Yield and Normalized Difference Vegetation Index (NDVI) for the plot was determined using the regression model, which was used to estimate the yield as tabulated below in Table 12. The yield of each sample plot was estimated using a regression equation. The accuracy of the results was validated using the remaining 30% of the data. The error for the first sample plot was 20.45%; for the second sample plot, it was 58.52% due to the adverse weather conditions; for the third sample plot, it was 19.49%, and the error was 7.94% and 8.68%, respectively, for the fourth and fifth sample plot as they were less affected by the outside factors.
Analysis of Error to Select the Parameters
Based on the observation of data directly from the field and comparing it with the yield generated from the regression model, an error for Sample Plot 4 and Sample Plot 5 was generated with various parameters to see how these parameters actually work. An error was generated by differentiating the yield generated from the field to the yield from the regression model (Table 13). The error was visualized with the help of a graph to see the pattern of error with several parameters (Figure 14). error was generated by differentiating the yield generated from the field to the yield from the regression model (Table 13). The error was visualized with the help of a graph to see the pattern of error with several parameters (Figure 14). The error in Sample Plot 4 with several parameters such as Biomass, Plant Height, Leaf Area Index, Green-Red Vegetation Index and Normalized Difference Vegetation Index was generated. The maximum error was seen on the Normalized Difference Vegetation Index parameter, which was 9.23%, and the minimum error was seen on the Green-Red Vegetation Index parameter, which was 4.26%. This actually demonstrates that GRVI is the most important parameter in yield calculation. NDVI was derived from a Sentinelbased product. NDVI has a lower resolution compared to the other data here. This might be the reason why NDVI has deviated more from the actual value in comparison to the other parameters.
For Sample Plot 5, the same parameters were used to check the error. Various parameters were used to predict the yield in Sample Plot 5, and the error in percentage is visualized in Table 14. The actual visualization of the error is shown in Figure 15. The error in Sample Plot 4 with several parameters such as Biomass, Plant Height, Leaf Area Index, Green-Red Vegetation Index and Normalized Difference Vegetation Index was generated. The maximum error was seen on the Normalized Difference Vegetation Index parameter, which was 9.23%, and the minimum error was seen on the Green-Red Vegetation Index parameter, which was 4.26%. This actually demonstrates that GRVI is the most important parameter in yield calculation. NDVI was derived from a Sentinelbased product. NDVI has a lower resolution compared to the other data here. This might be the reason why NDVI has deviated more from the actual value in comparison to the other parameters.
For Sample Plot 5, the same parameters were used to check the error. Various parameters were used to predict the yield in Sample Plot 5, and the error in percentage is visualized in Table 14. The actual visualization of the error is shown in Figure 15. The error was visualized with the help of a graph to see the pattern of error with several parameters.
Discussion
The error in Sample Plot 5 with several parameters such as Biomass, Plant Height, Leaf Area Index, Green-Red Vegetation Index and Normalized Difference Vegetation Index was generated in Figure 15. The maximum error was seen on the Normalized Difference Vegetation Index parameter, which was 8.67%, and the minimum error was seen on the Green-Red Vegetation Index parameter, which was 1.07%. This actually demonstrates that GRVI is the most important parameter in yield calculation. NDVI was derived from a Sentinel-based product. NDVI has a lower resolution compared to the other data here. This might be the reason why NDVI has deviated more from the actual value in comparison to the other parameters. Figure 15 shows The error was visualized with the help of a graph to see the pattern of error with several parameters.
Discussion
The error in Sample Plot 5 with several parameters such as Biomass, Plant Height, Leaf Area Index, Green-Red Vegetation Index and Normalized Difference Vegetation Index was generated in Figure 15. The maximum error was seen on the Normalized Difference Vegetation Index parameter, which was 8.67%, and the minimum error was seen on the Green-Red Vegetation Index parameter, which was 1.07%. This actually demonstrates that GRVI is the most important parameter in yield calculation. NDVI was derived from a Sentinel-based product. NDVI has a lower resolution compared to the other data here. This might be the reason why NDVI has deviated more from the actual value in comparison to the other parameters. Figure 15 shows From the regression equation developed above, we can find several reasons why the NDVI and GRVI (Sentinel-based) values for estimating yield became deviated from the actual yield on the ground. Some of the reasons are discussed here. The first and foremost reason is that the image that was acquired with remote sensing technology, a Sentinel product, did not exactly match the date of flight of the UAV images. This can bring some changes during the development phase of the maize plant. Another reason could be the cloud coverage of the image, where reflectance is very high for the cloudy pixel. Similarly, this might also be due to the presence of mixed pixels of the satellite imagery, meaning that the plot where the maize plant was grown completely did not fall completely on one pixel but rather on multiple pixels mixed with other types. Therefore, while computing NDVI and GRVI, multiple mixed pixel effects caused NDVI and GRVI values to become abnormal. Moreover, the Sentinel image has a resolution of 10 m, which has been compared with the GRVI resolution of 0.5 cm and with the 1 m (sample plot area) yield data. The gap between satellite-based products and ground-based products is generally 10-fold, which also may be the reason behind the deviation of yield from the satellite-based NDVI and the GRVI value.
Multiple Linear Regression Analysis
After the simple linear regression analysis, several parameters were checked to see the dependency of one on another. Sentinel-based NDVI was even plotted with Sentinelbased GRVI to see the difference. The image with same resolution (Sentinel NDVI and GRVI) did not vary much, so those two products, NDVI and GRVI (satellite-based), were not used for multiple linear regression analysis. The rest of the other parameters such as Biomass (X1), Green-Red Vegetation Index (X2), Plant Height (X3), and Leaf Area Index From the regression equation developed above, we can find several reasons why the NDVI and GRVI (Sentinel-based) values for estimating yield became deviated from the actual yield on the ground. Some of the reasons are discussed here. The first and foremost reason is that the image that was acquired with remote sensing technology, a Sentinel product, did not exactly match the date of flight of the UAV images. This can bring some changes during the development phase of the maize plant. Another reason could be the cloud coverage of the image, where reflectance is very high for the cloudy pixel. Similarly, this might also be due to the presence of mixed pixels of the satellite imagery, meaning that the plot where the maize plant was grown completely did not fall completely on one pixel but rather on multiple pixels mixed with other types. Therefore, while computing NDVI and GRVI, multiple mixed pixel effects caused NDVI and GRVI values to become abnormal. Moreover, the Sentinel image has a resolution of 10 m, which has been compared with the GRVI resolution of 0.5 cm and with the 1 m (sample plot area) yield data. The gap between satellite-based products and ground-based products is generally 10-fold, which also may be the reason behind the deviation of yield from the satellite-based NDVI and the GRVI value.
Multiple Linear Regression Analysis
After the simple linear regression analysis, several parameters were checked to see the dependency of one on another. Sentinel-based NDVI was even plotted with Sentinel-based GRVI to see the difference. The image with same resolution (Sentinel NDVI and GRVI) did not vary much, so those two products, NDVI and GRVI (satellite-based), were not used for multiple linear regression analysis. The rest of the other parameters such as Biomass (X1), Green-Red Vegetation Index (X2), Plant Height (X3), and Leaf Area Index (X4) were only used to see the relation with the Yield (Ŷ). Here, in the equation, Yield is the dependent variable, whereas Biomass, Green-Red Vegetation Index, Plant Height and Leaf Area Index are the independent variables whose relation is shown below: This equation is a multiple linear regression model that can be used to predict the Yield of a maize plant,Ŷ. The equation takes four independent variables: Biomass (X1), Green-Red Vegetation Index (X2), Plant Height (X3) and Leaf Area Index (X4) [44][45][46]. The equation assigns each of these variables a coefficient-1.16, 1.18, 0.94, and 0.98, respectively. This coefficient indicates the importance of each variable in predicting the output, with a higher coefficient indicating a greater importance [47,48]. The equation also includes a constant of 0.85, which is added to the sum of all the other terms. This constant is required to ensure that the equation is correctly centered on zero.
By multiplying each of the four variables by their respective coefficients and then summing the products, this equation is able to calculate an estimate of the maize plant yield. The larger the value of the independent variables, the more the output of the equation increases. For example, if the biomass is doubled, the output of the equation increases 1.16-fold, assuming that all other variables remain constant. Similarly, if the GRVI is doubled, the output increases 1.18-fold.
The equation can be written asŶ = 0.85 + 1.16X1 + 1.18X2 + 0.94X3 + 0.98X4. The constant term 0.85 is the intercept, which represents the predicted yield when all four predictor variables are zero. The coefficients of the predictor variables (1.16, 1.18, 0.94 and 0.98) indicate the amount of change in the predicted yield per unit increase of the respective predictor variable.
Biomass (X1) is the total mass of a plant, including the leaves, stems, flowers, fruits, and other parts [49][50][51]. A higher biomass indicates a larger plant size and thus can be used to predict the yield of a maize plant [52]. A coefficient of 1.16 implies that for every unit increase in biomass, the predicted yield of the maize plant increases by 1.16 units.
The Green Red Vegetation Index (X2) is an indicator of the amount of green leaf area of a crop compared to the total land area [53][54][55]. A high GRVI indicates a healthy crop with a large amount of green leaf area and a higher yield. A coefficient of 1.18 implies that for every unit increase in the GRVI, the predicted yield of the maize plant increases by 1.18 units.
Plant Height (X3) is an indicator of the size and overall growth of a plant [56,57]. A taller plant can indicate a healthier crop and thus a higher yield. A coefficient of 0.94 implies that for every unit increase in the Plant Height, the predicted yield of the maize plant increases by 0.94 units.
Leaf Area Index (X4) is the total area of a plant's leaves relative to the ground area. A higher leaf area indicates a larger plant with a higher yield. A coefficient of 0.98 implies that for every unit increase in the Leaf Area Index, the predicted yield of the maize plant increases by 0.98 units.
This equation can be used to make predictions about the yield of maize plants, given the information about the four independent variables is provided. By adjusting the values of the four variables, the equation can then be used to understand the expected yield of a maize plant given different combinations of Biomass, GRVI, Plant Height and Leaf Area Index.
Conclusions
Three distinct strategies were employed to calculate the yield of maize plants: a groundbased method, an Unmanned Aerial Vehicle, and remote sensing technology. Several factors that impact the growth of the plants were taken into account when performing this estimation. These included Plant Height, Green-Red Vegetation Index, Leaf Area Index, Crop Surface Model, Biomass and Normalized Difference Vegetation Index. The groundbased method was used to measure the Leaf Area Index and Wet Biomass of maize plants. Photogrammetry-based methods were applied to measure the Green-Red Vegetation Index, Plant Height, and Crop Surface Model. Lastly, to assess the yield, a remote sensing-based method was used, which involved the Normalized Difference Vegetation Index and Green-Red Vegetation Index. Based on the yield generated by the model, the most and least convenient parameters were selected and compared to the actual yield from the field.
The ultimate result of the project, yield, is the product of the regression of various parameters. Ground-based data and data abstracted from the resultant post-processing of the UAV images along with the secondary data for NDVI, GRVI, and Sentinel products from a Google Earth Engine were the data sources of the project. Plant Height, GRVI, LAI, CSM, Biomass and NDVI were the used parameters for the Yield estimation. Sample yield from the field was probed with the listed parameters for the estimation of the Yield of the whole plot. The regression model, Yield vs. GRVI, has the higher regression coefficient, while Yield vs. NDVI has the lowest, as GRVI is the primary data with higher resolution, while NDVI has a low resolution of 10 m as it is abstracted from the satellite.
Finally, the conducted research estimated the yield of the maize plant. In addition, this research even helped to determine the most important parameters for estimating maize yield. The study found that the Green-Red Vegetation Index (GRVI) was the most important parameter, whereas the least important parameter was satellite-based Normalized Difference Vegetation Index (NDVI). The study also found that satellite-based NDVI was less important than UAV-based GRVI due to its lower spatial resolution and cloud cover. The study concluded that GRVI is the most important parameter for estimating maize yield and that future research should focus on incorporating higher-resolution NDVI data, genomic information, management practices, and environmental data into yield estimation models.
Here is a summary of the key points: • GRVI is the most important parameter for estimating maize yield. • Satellite-based NDVI is less important than UAV-based GRVI due to its lower spatial resolution and cloud cover. • Future research should focus on incorporating higher-resolution NDVI data, genomic information, management practices, and environmental data into yield estimation models. Informed Consent Statement: Not applicable.
Data Availability Statement:
The data supporting the findings of this study are available from the first author upon reasonable request. | 11,491.2 | 2023-06-01T00:00:00.000 | [
"Environmental Science",
"Agricultural And Food Sciences",
"Mathematics"
] |
Hierarchical motor control in mammals and machines
Advances in artificial intelligence are stimulating interest in neuroscience. However, most attention is given to discrete tasks with simple action spaces, such as board games and classic video games. Less discussed in neuroscience are parallel advances in “synthetic motor control”. While motor neuroscience has recently focused on optimization of single, simple movements, AI has progressed to the generation of rich, diverse motor behaviors across multiple tasks, at humanoid scale. It is becoming clear that specific, well-motivated hierarchical design elements repeatedly arise when engineering these flexible control systems. We review these core principles of hierarchical control, relate them to hierarchy in the nervous system, and highlight research themes that we anticipate will be critical in solving challenges at this disciplinary intersection.
H ow neural circuits govern motor behavior has long been a central question for neuroscience research. In particular, it is a classical theme that the brain controls motor behavior through hierarchical anatomical structures. An early explicit proposal is owing to John Hughlings Jackson, who, by the 1870s, described the nervous system as a "sensorimotor machine", consisting of a hierarchy of three evolutionary levels 1 . Since then, hierarchy both of anatomy and generation of behavior have been revisited in the study of instinct 2 , motivation 3,4 , and motor pattern generation 5,6 . Across these contexts, the focus has often been neuroethological, detailing the kinds of behaviors produced by species-specific nervous systems in their ecological niches. These ideas developed through study of the nervous system have inspired other disciplines, including robotics, with clear influence, for example, on the subsumption architecture 7,8 .
In recent decades, the theme of hierarchy has partially receded in motor neuroscience research, and the field has emphasized a largely complementary perspective, emphasizing taskspecific optimality of movement 9 , with the contemporary version known as optimal feedback control (OFC) 10,11 . OFC is typically applied by postulating a cost function or formal definition of a task and asking what behavior is optimal with respect to that cost function. This perspective has been productive for motor neuroscience and facilitated the analysis of specific, well-defined motor behaviors. However, despite its great utility and its alignment with the experimental preference to study isolated behaviors in single tasks, the focus on specific movements runs contrary to the deeper interest in understanding the generation of diverse, ethological behaviors produced by nervous systems 12 .
OFC is a framework closely related to reinforcement learning (RL), which contemporary motor control for AI and robotics has widely adopted. We proceed by briefly reviewing computational approaches to motor control, focusing on the OFC framework, as well as reflecting upon recent developments in research involving control of complex, simulated physical bodies, including attempts to scale up OFC directly. However, as research into artificial control has developed, it has become clear that in addition to task objectives, system architecture design is also critical. OFC does not provide direct guidance on the design or interpretation of systems that must perform many behaviors or which reuse and compose overlapping skills to solve multiple tasks. We therefore formulate a set of core design principles of hierarchical systems in the context of motor control, which are synthesized from the AI research literature. In essence, recent work in AI has circled back to themes that were more central in earlier eras of neuroscience. This prompts us to take a fresh look at the neuroscience literature through a focused survey, which highlights how the core design principles help us make sense of hierarchical structure and function in the vertebrate nervous system. Both AI researchers engaging in the design of motor control systems and motor neuroscientists attempting to understand how specific nervous systems produce movement share many interests; we believe these fields will continue to benefit from interdisciplinary collaboration, so we close by highlighting some of these areas of overlap.
Computational approaches to motor control The challenge of motor control, both for animals and artificial systems, is to coordinate a body to produce patterns of adaptive movement behavior that satisfy objectives of the agent. When studying motor control with quantitative models, we consider a body in an environment, governed by a controller. The controller (or policy) receives observations from sensors, which measure features of the state of the system, and produces control signals that command the effectors. The controller runs in closed-loop with the body and environment, actuating the effectors based on online feedback from sensory observations to produce temporally extended behavior (Fig. 1a). For comparison, we depict a flat controller (Fig. 1b) as well as a minimal example of a hierarchical controller (Fig. 1c), in which high-level and low-level controllers receive different inputs and the motor commands are generated by the lowlevel controller with some input from the high-level controller.
Beyond the basic control system elements, specific control schemes may involve forward or inverse models 13 (Here we focus on dynamics models. A distinct class of model supports coordinate transformations via forward and inverse kinematic models), and in biology, animals may use "internal" versions of these models 14,15 . Forward (dynamics) models predict the future state of the animal's body and the environment given the current state and an action, either real or imagined. Internal forward models are used to predict the future consequences of actions. Comparing these predictions with sensory inputs enables filtering-based estimation of body and environment state. Forward models can also be used for action selection, as they allow an animal to "try out" actions using the model before acting with the real body. Inverse (dynamics) models form a special class of controller. They infer the action that takes the animal from the current state to a future outcome state. If this future outcome state is the "goal" of the animal, the inverse model generates the action that aims to achieve it.
OFC frames motor control as an optimization problem and was proposed as a normative theory of biological motor control 10 ; this consolidated principles relatively well understood in movement neuroscience 16 . At present, OFC is the dominant framework used by motor neuroscientists to explain volitional control 17,18 . Earlier frameworks had recognized the value of optimizing movement trajectories 9 , but OFC emphasizes the importance of leveraging sensory feedback to produce task-optimal corrective responses to unexpected perturbations. As such, the key prediction that differentiated OFC from related proposals was that movements produced by animals correct for perturbations only to the extent needed to optimize the task. The OFC framework was generalized to encompass essentially all approaches that use closed-loop, feedbackbased control, where the behavior generated is supposed to optimize a cost function (or goal) 11 . The broadened OFC framework consists of three principles: (1) Motor control is generated to optimize an objective function. (2) Deviations from an intended trajectory that arise should be corrected by leveraging sensory feedback in a task-optimal fashion. Together, these first two principles imply that online correction of movements should prioritize task-relevant dimensions (a "minimum intervention principle"). (3) Internal models help compensate for sensory delays and assist with state estimation.
From a contemporary perspective, the principles of OFC, including the utility of feedback and sensory delays, are widely accepted. The commitment in OFC that is perhaps most open to fundamental dispute is whether the controller really optimizes an objective (and what objective?). However, at its broadest, the OFC Fig. 1 a Interaction cycle between an embodied control system and a physical environment to generate behavior. b A flat controller with no architectural segregation of different inputs. c A basic, brain-inspired two-stage hierarchy: a lower-level motor controller directly generates motor commands to the effectors based on input from proprioceptive sensors and modulatory input from a higher-level controller, which is responsive to additional signals, including vision and task context signals.
framework is fairly inclusive about what constitutes an objective. Efficient movement need not be a direct objective, but will indirectly emerge out of coordinating movement to rapidly solve tasks. So, if an animal is optimizing movement for solving a sequence of tasks, the efficiency of the movement is indirectly incentivized in order to facilitate the concrete task goals. Despite this theoretical generality, until recently is has not been widely feasible to consider task objectives more complex than those related to production of specific movements on short horizons.
Motor control of synthetic systems
The optimization framework associated with OFC has been widely popularized in the context of "deep reinforcement learning" (Deep RL) (Deep RL refers to reinforcement learning that employs deep learning, or the use of deep neural networks.). The primary challenge of implementing optimal control approaches is generating the optimal control law (i.e., controller). For specific control problems described by known equations involving simple dynamics and cost functions, or problems formulated in lowdimensional state and action spaces, optimal controllers can be computed exactly. Specifically, one of the most fundamental and computationally straightforward ways to derive an optimal controller is through dynamic programming 19,20 . But for the control of more realistic, high-dimensional bodies, the design of the approximation scheme, learning algorithm, or numerical approach to produce the controller is important. Specific, contemporary approaches often reformulate or restrict the generic problem in order to make it computationally tractable. A widespread algorithmic technique is to look for locally optimal control laws instead of globally optimal control laws. Examples of locally optimal algorithms include model predictive control 21 or specialized planning methods 22,23 , which enable control of humanoid systems. However, planning approaches such as these are model-based, meaning they require access to the simulator within the planning computation; this is only available to an agent or animal if it possesses a high-quality forward model, possibly learned from previous experience. If there is no preexisting or learned model of the environment, the alternative is to directly learn the policy (or, alternatively, a representation of the values of actions) via model-free RL 24 .
Over the last few years, there has been an explosion of interest in producing Deep RL agents that are trained in simulated environments. Progress made towards playing Atari games from images 25 and navigating virtual environments 26 have inspired considerable follow-up research. In parallel, there has also been significant effort applied towards control of articulated bodies in simulated physical environments 27 , with broad interest facilitated by the release of research environments 28,29 , which build accessible interfaces for underlying physics simulators such as MuJoCo 30 . These physicsbased control (or continuous control) problems involve training a controller to produce an action-vector of continuous values, which actuate a physically simulated body, in order to optimize objectives in a task. Although primarily studied by Deep RL researchers for algorithm development, these challenges essentially amount to motor control. The approaches used in simulated environments also overlap with learning-based approaches for robotics research [31][32][33][34] . Of course, although significant development has occurred in recent years, many core ideas in Deep RL research were anticipated by earlier research 35 , including neural network control for graphically rich environments in the NeuroAnimator 36 , as well as design of impressive controllers for physically simulated humanoids [37][38][39] and animals 40 .
Robust control of physically simulated humanoids, especially without access to the simulator for planning, is a challenge that has made progress in recent years. End-to-end learning approaches with relatively simple policy architectures (e.g., feedforward policies) are capable of producing simple locomotion behaviors 41 and traversing obstacle courses 27 . In particular, Heess et al. 27 pushed OFC to a certain extreme: motor behavior was generated via a simple feedback controller trained entirely end-toend with deep RL to solve a single task, consisting of a distribution of more specific obstacle courses. The resulting policy was robust and responded well to random, procedural terrain variations as well as interactive perturbations by a human. In this work, the sensory observations consisted of feature-based heightmaps of the terrain, similar to approaches in animation 42 . Subsequent work has since demonstrated the ability to solve similar problems from egocentric proprioceptive information and sensory information from touch sensors and egocentric cameras for a more ethologically plausible sensory embodiment 43 . Although sensors and effectors of simulated agents are not accurate models of those found in animals, it is nevertheless clear that simulated embodied agents face similar perceptual and motor challenges as real-world animals (or robots).
However, although end-to-end Deep RL approaches to motor control have expanded the scope of OFC, there are a number of difficulties. For settings with narrow objectives, such as running forwards, environment variations during training can induce robust behaviors. But for this to work, careful task design using a balanced curriculum is often needed 27 . And whereas intrinsic ethological drives of biological organisms are quite varied (including feeding, fighting or fleeing, and fornicating), typical Deep RL agents exist in a universe that consists of only a single, comparatively narrow objective. Broader challenges include dealing with changing objectives, learning behaviors that are reusable, and rapidly adapting to solve novel tasks. So, although there is clear value in scaling up OFC, it is far from the whole story of how animals generate motor behavior, and these broader challenges bring us back to aspects of motor control that were central in earlier work in both AI and neuroscience. To more efficiently solve complex control problems, many recent innovations relating to hierarchical system architecture are being developed. In the subsequent section, we will present core principles of hierarchical motor control. These principles reflect our distillation of older ideas, points that have been made in recently published work, as well as more 'craft-level' insights shared among researchers currently working in the field. For a concrete illustration of a simple, contemporary architecture reflecting versions of many of these principles, see Box 1.
Core principles of hierarchical motor control
Researchers engaged in the study of hierarchical control believe that hierarchy can add value for issues ranging from effective exploration and planning to transfer and composition of skills. Synthesizing the literature, we have attempted to clarify and summarize core principles of hierarchical control that we believe facilitate design and interpretation of hierarchical systems. In particular, the principles we identified are well motivated when considering systems capable of generating a wide range of motor behaviors across multiple settings. The principles are elaborated below and a brief description and motivation for each principle is summarized in Table 1.
Information factorization. Information factorization refers to the property of hierarchical systems that involves providing partial or pre-processed information to certain parts of a system (c.f. information hiding 45,46 ). In our simple example ( Fig. 1), this principle is illustrated by different sensory signals being routed to the high-and low-level controllers, respectively. Although a flat policy could, in principle, integrate all available information and produce controls directly, a system with fewer inputs per module is likely to learn more efficiently. Furthermore, by segregating information immediately relevant to the low-level controller from information that only needs to modulate the low-level controller in a low-bandwidth fashion (e.g., via an inter-layer bottleneck), the low-level controller is likely to generalize better. By construction, the information routed to it is invariant to many possible contexts, and it only directly processes the subset of sensory information that the behavior it is responsible for generating depends upon. Concretely in the example in Fig. 1, the higherlevel controller might provide modulatory signals as simple as steering signals, whereas the low-level controller may have to produce high-dimensional locomotion motor patterns. This idea is connected to a view of reinforcement learning in which subsystems that have access to different information are able to share appropriately abstract behavior across contexts 47,48 . For example, while visually guided locomotion in the context of a particular task may involve focusing on specific elements in the visual scene that do not transfer entirely to a new task, the locomotor movement patterns may generalize. In this example, low-level behavior is more invariant owing to information factorization. However, it can also be the case that high-level behavior is invariant. Sufficiently abstract goals or intentions permit many distinct low-level movements to achieve them, so a high-level controller with limited access to body state may communicate an abstract goal that does not fully specify the Box 1 | Reusable motor skills for hierarchical control of bodies End-to-end RL with a "flat" controller initially explores the space of possible behaviors through uncoordinated, unstructured movements of each joint independently. For a complicated, humanoid body, intelligent behavior in this space is a needle in a haystack, making the search for task solutions a difficult problem. To promote a diversity of behavior as well as the exploration and discovery of new ones, the neural probabilistic motor primitives (NPMP) architecture has been introduced 44 , which expresses a set of robust, human-like motor behaviors as a basis for further task learning. The system is first trained using motion capture data of humans performing movements. The motion capture data are time series of configurations of the body and joints. The details of the construction of the system are not critical, but, to give some insight, for each motion capture snippet, a neural network is trained by RL to produce actions, a t , such that the resulting movement trajectory approximately tracks the kinematic position of the body in the original reference motion. Then, these movement controllers are combined or "distilled" into one large model that can track any of the movements given a description of the near future path of the body, x* t . A coding space, z t , in the system comes to represent each of these movements and allows interpolation among them. Downstream of the code is a motor policy, which, when cued with z t and proprioceptive information s t , is able to generate patterns of human-like movement autonomously. Thus, exploration of the space of human-like movements becomes possible by varying the input z t to the motor policy. To this low-level motor system, a high-level controller can be attached to solve complicated tasks in virtual environments. The highlevel controller has full visual input and is provided task information, o t . It learns by RL to produce actions of the same size as the coding space, which modulate the movements carried out by the low-level policy. The NPMP's modular, hierarchical design has made it possible to solve complicated problems otherwise of great difficulty for flat RL. See supplementary materials (videos and associated captions) for examples of motor reuse. Subsystems are invariant to hidden information and therefore are reusable across contexts. Partial autonomy Lower-level systems function somewhat autonomously, with modulation from higher-level systems.
System is more robust and lower-level does not require costly micromanagement.
Amortized control
Movements that have been successfully executed multiple times are compressed into a system that can rapidly reproduce them.
Re-execution of frequently repeated movements should be more computationally efficient than novel variations.
Modular objectives
Specific subsystems may be trained to optimize specific objectives, distinct from the global task objective.
Training of subsystems can leverage error signals that are denser or more well known than the global task objective.
Multi-joint coordination
Movement is produced in a manner that reflects common patterns across the body.
Exploration and action-selection can exploit commonly co-occurring multi-joint patterns. Temporal abstraction Common temporal motifs are abstracted.
Behavior specification or planning can occur at a coarser timescale. PERSPECTIVE NATURE COMMUNICATIONS | https://doi.org/10.1038/s41467-019-13239-6 required details of the movement, leaving it to the lower-levels to sort out the details. That some goals or tasks can be solved by a multiplicity of execution details ("motor equivalence") has long been recognized as important in movement science 49,50 and has also been identified as relevant for robot control 51 .
Partial autonomy. Partial autonomy refers to the property of certain types of hierarchical systems that the lower-levels of the hierarchy can semi-autonomously produce behavior even without input from higher-levels. This principle is related to the intuition underlying the subsumption architecture 7 : build low-level controllers that function autonomously; then add modulatory control layers such that the overall system can produce more behaviors. The insight reflected in this approach is that robustness can be achieved if lower-layer controllers are sufficiently autonomous (albeit for a more limited range of behavior), such that removal of the higher layers leaves the lower-layer generated behavior intact. This style of architecture is evocative of the brain 8 , insofar as for many animals, considerable functionality remains in animals with substantial portions of the central nervous system removed, as we discuss later. This partial autonomy is related to information factorization insofar as a lower-level system should have adequate information to be partially autonomous. For example, a low-level locomotion controller may simply produce straight-ahead (or randomlydirected) walking behavior in the absence of inputs from the higher-level controller, but this locomotion can still be stabilized by proprioceptive feedback. Partial autonomy also pertains to a class of robustness having to do with appropriate responsiveness to perturbations. Consider a setting in which an agent (or animal) is engaged in a behavior (e.g., walking) and, owing to something unanticipated in the environment, the agent slips or is perturbed. Although "default" behavior may be somewhat automatic, a role for higher-layers might be to detect that something unexpected has occurred via monitoring what is unfolding, and respond with the appropriate modulation of the overall behavior. So, whereas simple walking may be performed adequately by lower-levels of control, increasingly intelligent responsiveness may require rich sensory information as well as the ability to assess the environment for safe affordances (e.g., something to hold onto in response to slipping).
Amortized control. In order to accelerate computation of behaviors that require complex motor coordination, hierarchical systems can benefit from amortized control. Amortized control refers to a wide range of approaches that involve training a lower-level system to produce appropriate behaviors for a behavioral context or modulatory signal, without having to engage in a costly process. For example, although it is quite costly to plan or optimize movements entirely from scratch, once movements have been produced, it should be possible to train a "reactive" subsystem that can reproduce these movements repeatedly without redundant planning. This principle is related to partial autonomy, as it may involve the production of a semi-autonomous subsystem, but the emphasis of this principle is on the benefit with respect to computation attained through caching previously obtained solutions.
Motivated by this insight, it has been demonstrated that policies produced via trajectory optimization could be distilled into a neural network that could then be reused interactively 52,53 . Similar ideas have also been explored 44,[52][53][54] , reflecting a shared intuition that well-behaved trajectories obtained from various sources can be used to train a neural network that may generalize from the examples. From a system perspective, this is a kind of self-supervised learning where trajectories generated by one (presumably slow or costly) mechanism are used to train another part of the system to produce equivalent behavior in an amortized fashion.
Modular objectives.
Many examples of neural networks applied to control problems use "end-to-end" optimization 25 ; that is, there is a single task objective, and the entirety of the architecture maximizes this singular objective. However, the broad alternative is that control systems have some functional separation of roles by subsystem, and different modules benefit from being trained by distinct modular objectives. A specific, practical, and popular approach trains a controller to solve a task while also training a set of internal representations to predict future sensory data 26,55,56 . This approach to learning internal state representations can improve experience efficiency by leveraging dense selfsupervised objectives to train perceptual and memory modules, whereas task reward can still provide learning signals for the controller. This approach is "heterarchical" insofar as different objective functions, consisting of a predictive objective as well as a policy improvement objective, are imposed in parallel on different parts of the overall network architecture.
Another classic approach involves the overall system specifying subordinate objectives for modular subsystems, while maintaining the priority of a high-level objective. Paradigmatically for control problems, a high-level controller can communicate a goal to a low-level controller, which serves both as instruction to modulate low-level behavior and also as a reference for learning. Such an approach amounts to a divideand-conquer strategy 57 , and has been implemented via reinforcement learning 45 . For example, in locomotion control, a high-level controller may decide to move in a certain direction, provide a signal to the low-level controller as instruction, and this signal also serves as a dense teaching signal that the low-level controller learns from as it assesses how well it stays on the instructed course. In such schemes, the low-level controller is trained to satisfy its received instruction, whereas the high-level controller intelligently programs these objectives to solve a more global task. Most work on this idea has used fixed forms of the cost function for the low-level controller 58,59 , but other work has explored how to learn more abstract goal spaces 60 .
Multi-joint coordination. Although it may make sense to be able to modulate or directly control single muscles or joints in specific contexts, most control is perhaps better thought of as selective activation of established motor synergies. There are many variations on the motor synergy concept 61 ; here we mean functional couplings of different joints or muscles such that motor control operates at the level of multi-joint coordination patterns rather than through independent control of all joints. Producing actions at this slightly higher level of abstraction can facilitate exploration and learning of new skills as well as simplify planning. This is perhaps most readily apparent in a setting like reaching and grasping, where random movement of all degrees of freedom independently will be ineffective, but random movements in the subspace of hand configurations encountered during grasping will lead to more effective interactions.
Perhaps, the conceptually most straightforward way to implement multi-joint coordination is to perform control or planning in a prespecified, low-dimensional space. For well understood classes of movement, such as locomotion, versions of low-dimensional control have been around for a while, such as specifying the walking in terms of a simplified body model and computing leg movements to achieve the target movement of the center-of-mass 62 . This strategy has been advocated more generally 63 , and a relatively recent representative performs low-dimensional planning for locomotion in a hand-designed space that interacts with a low-level controller 64 . An alternative to hand-engineering the low-dimensional control space involves unsupervised learning (or self-supervised learning) of sensorimotor primitives in order to produce a learned low-level controller 11,65 .
Temporal abstraction. Temporal abstraction simplifies the specification of behavior that endures over extended time intervals via higher-level controllers operating at a coarser temporal resolution. For example, in the context of locomotion, a higher-level controller may instruct a low-level controller at a less-frequent timescale on where to navigate (or when to turn), but the actual movement is executed over an extended duration by a lower-level controller that operates at the full temporal precision required for motor behavior. Through this scheme, a trade-off is established, whereby the highlevel controller may cede control precision, but gain in time-horizon through the reduced temporal resolution-this enables the highlevel controller to more easily discover or plan behavior that endures on a longer natural timescale.
In the hierarchical reinforcement learning literature, a number of schemes have been proposed that focus on leveraging temporal abstraction 66 . In particular, the options framework, which involves high-level transfer of control to self-terminating subroutines, has been highly influential 67 . Deep RL also can incorporate temporal abstraction 68 . The conventional focus on temporal abstraction as opposed to multi-joint coordination in hierarchical RL makes sense when one appreciates that many canonical RL problems have comparatively low-dimensional, discrete action spaces. In settings where control is simple, the only way to abstract control complexity is in the time domain. For problems with high-dimensional continuous action spaces such as control of bodies or robotic manipulators, multi-joint coordination can be more critical than temporal abstraction 63 . But of course, longer-term motor planning and behavior selection do require temporal abstraction.
Temporal abstraction can also be implemented via commitment to a task, goal, or context. That is, agents may, for a period of time, select a behavioral mode or "goal" and all behavior executed could be directed in support of this goal (this overlaps with the use of goals for modular objectives, but is distinct in motivation). In such an implementation, the selected goal is a form of high-level action and allows for coarser control, both temporally and in terms of level of precision of the goal state. Whereas "state abstraction" with respect to goals is distinct from temporal abstraction, the two are correlated in many settings-for example, in navigation settings spatially distal goals are usually temporally distal as well 45 .
Neurobiological hierarchical motor control
As noted earlier, the renewed relevance of hierarchy in AI returns attention to a theme that was central not only in earlier AI research, but also in earlier neuroscience research. With this in mind, we turn now to our survey of hierarchy as relevant in neuroscience research on motor control, considering how the principles described in the previous section relate to known properties of brain function. The nervous system of higher vertebrates controls movement through a distributed set of structures that are both anatomically and functionally hierarchical (see Box 2 for overview). Of course, in very broad terms, that the nervous system is hierarchically structured is something that is widely accepted and touted at the level of introductory textbooks. But more specifically, as there are distinct ways for a system to be hierarchical, we believe the principles of hierarchical control emerging through the study of artificial systems help us make sense of even the detailed elements of the biological motor control system.
Our brief survey will primarily focus on the functional role of key parts of the nervous system in the context of motor control. Historically, this has been investigated through now classic studies involving the removal of portions of the brain, as well as neural recording and stimulation. This classic literature is bolstered by relatively more recent work that considers loss of function in the context of inactivation and removal specifically of motor areas. The review will proceed from lower-level motor structures up to "higher" brain regions, and we will emphasize the relevant principles introduced in the previous section where appropriate.
"Lower-level" movement centers. It is an incredible feature of the nervous system that substantial parts of the brain can be removed while preserving significant functionality. This broadly reflects the relevance of the hierarchical control principles of partial autonomy as well as information factorization-brain subsystems receive relevant partial information and can control some movement even without higher-level inputs. The spine, even in spinalized preparations, is responsive to somatic sensory feedback and can act semi-autonomously from the brain to coordinate multiple joints over time. Spinal circuits are capable of both generating their own spatiotemporal coordination patterns, such as "fictive" locomotion 70 via central pattern generators (CPGs) as well as modulating activity locally via sensory reafference 71,72 . There is also a rich literature on spinally controlled time-varying movement primitives involving coordination of multiple joints to control to an end-point or to trace a "virtual trajectory" [73][74][75] . While difficult to assess directly, it is believed that these primitive spinally generated movements and patterns are relevant for humans 76 , with the basic movements that support walking behavior having an innate component that arises early in development 76,77 .
At the level of the brainstem, much of our knowledge comes from experiments involving decerebration as well as stimulation. We know a great deal about the functional anatomy of decorticate and decerebrate cats 78 . Depending on precisely where decerebration is performed, animals retain the ability to walk spontaneously, or only under stimulation of nuclei such as the mesencephalic locomotor region (MLR). In intact animals, nuclei such as MLR receive inputs from relatively higher regions including the hypothalamus and basal ganglia that modulate locomotor behaviors. Locomotor nuclei do more than generate oscillatory patterns-some version of which is already handled by the spine. Instead, these nuclei orchestrate slightly more abstract multi-joint coordination of movement patterns and regulate locomotion. They also incorporate cerebellum-derived signals, somatic feedback, and inputs from other sensory systemts to help coordinate movement.
Subcortical "mid-level" movement regulation. Where decerebration removes the entire cerebrum, decortication refers to the removal of cortex without damage to thalamus or basal ganglia, so essentially all subcortical structures are intact, modulo atrophy owing to removal of significant sources of inputs. Cats and dogs with their entire cortex removed often generate superficially normal behavior after a recovery period 78 . In an early review into the behavior of decorticate cats, David McK. Rioch vividly observed: "During the first few days following the operation, when the animal walks into a corner, it continues to push forward, butting its head against the wall. Struggling, sprinting, and climbing reactions may occur, but escape from the corner is accidental. Later on the animal will turn aside from an obstruction after having bumped into it, or after having merely touched it with its whiskers or ears" 79 .
This description of the behavior of decorticate cats reveals a number of critical features from the perspective of hierarchical control: (1) cortex is not required for a significant amount of the behavior generated by the cat. This reflects partial autonomy as well as amortized control, insofar, as stereotyped movements are "habitual". In particular, we also know that decorticate animals with intact basal ganglia can initiate goal-directed locomotor behavior 80 . The basal ganglia then appropriately modulates the brainstem locomotor nuclei, which in turn modulate spinal CPGs. (2) Subcortical structures can select among different modes of coordinated behavior, possibly reflecting short-term temporal abstraction and multi-joint coordination. Specifically, it has been proposed that motor program selection is performed by the basal ganglia, normally informed by inputs from cortex and thalamus 6 . This is also consistent with recent work correlating neural activity in striatum with moment-to-moment sequencing of movement "syllables" 81 . (3) While sensory-guided insight is impaired upon removal of cortex, residual sensory information that has been processed through non-cortical pathways remains available, reflecting appropriate information factorization. (4) Certain forms of learning still occur, obviously mediated via noncortical circuitry 79,82 . It is believed that learning of motor coordination is mediated by cerebellum and learning related to action selection is mediated by basal ganglia 83,84 . This is consistent with the broader literature on the basal ganglia being involved in the learning and deployment of context-triggered habitual actions, with this circuitry thought to implement something like reinforcement learning 85,86 .
Further, complex patterns of behavior associated with motivational states are also substantially intact in decorticate animals. For example, decorticate male rodents are even capable of generating the complex motor repertoire required to engage in copulatory activity and sire pups 87 . A fully integrative perspective Box 2 | Review of the neuroanatomical hierarchy The diagram depicts an abstraction of the hierarchical anatomy of the mammalian nervous system. The scheme is, insofar as possible, a consensus view of previous hierarchical interpretations 3,4,6,69 , with the intent of serving as an uncontroversial foundation. A natural entry point is the motivation regulation nuclei. The central nervous system receives information about the body via signals from the gut, level of hydration, hormones, blood sugar levels, and other measures. Much of this information arrives via structures such as the hypothalamus, which then communicates information related to motivational state to other parts of the brain. These signals related to basic drives (hunger, arousal, etc.) directly or indirectly will guide behavior. Subcortical structures, such as the basal ganglia, are responsible for regulating behavioral context and modulate the activity of more foundational motor generators in the brainstem and spine, which also receive limited sensory information via subcortical sensory structures. In parallel, motivational ("drive") information and sensory information are processed in cortical areas which in turn modulate behavioral context and ultimately allow for the use of more processed information to inform motor coordination via motor cortical areas. A common motif across specific hierarchical models that have been proposed is the presence of multiple routes of information transmission and motor coordination. In terms of sensory input, dual sensory input pathways transmit information along a subcortical pathway as well as a cortical pathway 4 . Similarly, there are direct subcortical pathways from motivational centers (or what has been referred to as the limbic system) to brainstem nuclei that activate motor patterns, as well as indirect routes, either via the basal ganglia or through frontal cortices 3 . This multi-pathway motif structurally reflects some of hierarchical control principles, with multiple layers to the system being partially autonomous, each having access to partial and differently processed information. should aim to include drive assessment and selection of motivational-behavioral contexts as part of the hierarchical control system. In particular, the hypothalamus is involved in regulating motivational state, and stimulation of hypothalamic sites produces the motivation to engage in certain behaviors 88,89 . Contemporary research continues to corroborate the perspective that evoked behaviors mediated by discrete hypothalamic regions reflect specific goals or motivated states 90 , with certain hypothalamic nuclei more specifically implicated in aggressive responses 91 as well as sexual behaviors 92 . Our inclusion of drive regulation as part of hierarchical control connects with historical characterizations of hypothalamus as related to movement regulation 93 or hierarchical interpretations that place hypothalamus atop the motor control hierarchy 4 . These motivated states signal to other areas to initiate behaviors suited to the satisfaction of the motivated state. And consistent with partial autonomy and the structured information factorization in the nervous system, there seems to be a direct motivation-driven subcortical system that handles coarse behavioral selection, as well as a secondary pathway that is frontally mediated and refines motor objectives or goals on a longer horizon 3 .
Cortical "high-level" control of movement. Despite the fact that many decorticate mammals show superficially normal behavior, clear deficits become apparent upon closer inspection, and these deficits are more dramatic in primates. This was initially a source of confusion for David Ferrier and Friedrich Goltz in the late 19th century. Although Goltz and others could produce non-primate decorticates that showed the kinds of behavior described in the preceding sections, Ferrier found significant impairments amounting to partial paralysis when only motor cortex was removed in a monkey 94 . Convergent evidence comes from humans in clinical cases involving focal motor cortical damage owing to injury; strokes have a substantial affect, resulting in transient partial paralysis, followed by considerable recovery, though without recovery of fine motor skills 94 . Although there is still uncertainty about the role of motor cortex 95 , at least as early as Bernstein, it has been appreciated that increasingly sophisticated organisms need elaborated, higher-level motor structures to solve general motor challenges; these elaborations enable the generation of a broader repertoire of diverse motor responses and support the performance of extemporaneous, unrehearsed movements 5 . This flexible higher-level functionality or motor "wit" is what Bernstein termed "dexterity" and defined as: "finding a motor solution for any situation and in any condition" 96 . To facilitate this high-level function, Bernstein observed that higher-level structures are well integrated with telereceptors (i.e., "long-range" sensors that detect olfactory, visual, and auditory signals); on the basis of evolutionary and anatomical evidence, Bernstein argued that this factorized sensory stream informs high-level structures that coordinate or override stereotyped and automatic movements generated by lower-level structures 5,96 .
The settings in which higher-level structures are most relevant depend upon the specific behaviors for which the animal is adapted. For example, dogs and cats do not execute dexterous finger movements, whereas non-human primates, humans, and even rodents do 97 . And increasingly for animals that reach and exhibit dexterous finger control, direct cortical control of upperlimb extremities allows closer integration of visual and tactile information for hand-eye (and finger) coordination. To support sensory-guided fine motor control, which is required for dexterous manipulation, non-human primates and humans have more substantial direct projections from cortex to spine 80,98 . The anatomical variation continues even among primates, with fine motor control by humans even surpassing other primates 99 . More broadly, the general role for high-level structures in mediating sensory-rich control may be relevant in other niches; for example, legged traversal of precarious terrains, as performed by a mountain goat navigating small footholds, is also obviously dependent upon visual guidance for foot placement.
Recent studies involving targeted inactivation or removal of motor cortex provide evidence that supports this view that cortex refines movement, primarily in contexts involving precise sensory-guided control or dynamic motor improvisation. In rodents, the production of grasping behaviors has been localized to the rostral forelimb area (RFA), and long-duration intracortical microstimulation can generate reaching and grasping behaviors 100 (paralleling similar results in monkeys 101 ). Experimenters have demonstrated that transient, reversible, and specific deficits in pellet-grasping ability are produced in behaving rats when RFA is silenced via cooling 102 . In other experiments, rodents traversed a simple "obstacle course" with infrequent dynamic perturbations 94 . Although rodents with bilateral motor cortical lesions showed no significant deficits in navigating stable terrains, in the presence of dynamic perturbations, lesioned animals were unable to rapidly adapt their movements. The sensory-guided element of motor cortical control was perhaps most directly tested in experiments making use of a virtual environment that allows for the experimental dissociation of motor control and sensory feedback-researchers found that in response to experimental perturbations of the visual environment, the local cortical microcircuit in motor cortex was involved in producing corrective motor responses to situations where the actual sensory consequences did not match predictions 103 . Taken together, motor cortex appears required for fine-scale, dexterous motor control, especially involving sensory guidance, but motor cortex may not be required for stereotyped (autonomous and amortized) movements, consistent with previous interpretations 94,103 .
In yet other experiments involving rodents, complex, but nondexterous, stereotyped motor trajectories that an animal learned in order to solve a task were preserved when motor cortex was bilaterally removed 104 . However, learning was shown to be dependent on the presence of motor cortex, which is interpreted as evidence for initial production of the movement being mediated by cortex, followed by tutoring of subcortical regions 104 , seemingly implementing a form of amortized control. However, the science of where amortized motor representations are stored (c.f. "automaticity") remains unsettled as other findings suggest cortex may store certain learned patterns after being driven by exploration generated subcortically 105 .
The alternative to control being amortized, regardless of the neural locus, is that every movement is planned from scratch each time any movement is executed. It has been argued that planning or optimization occur via preparatory activity preceding movement, both for reaching behavior [106][107][108] and in the context of decision-making tasks [109][110][111] . Although it remains an open question how the nervous system balances pre-movement planning with amortized control in ethological settings, we expect planning to be most beneficial for control of idiosyncratic movements or in settings in which control must be precisely micro-managed by sensory feedback. Insofar, as experiments which study preparatory activity employ paradigms in which animals engage in highly stereotyped behavior, it is difficult to know how to relate preparatory processes in these settings to ethologically relevant motor planning.
Two of the principles of hierarchical control that have not featured as prominently in this short review, despite being important for cortical function, are learning by modular objectives and temporal abstraction. It is beyond the present scope to review how the nervous system learns to extract structured information from sensory signals or encodes memories -these processes undoubtedly are governed by diverse learning signals (i.e., modular objectives). We also will not cover the various frontal structures that are even "higher" than the motor cortices. These structures are involved in planning and reasoning processes, which may result in the specification of goals; temporal abstraction certainly features prominently 112,113 .
Shared challenges for biological and synthetic motor control As the preceding section articulates, many of the interest areas pursued in recent AI work on hierarchical motor control find corresponding relevance in neuroscience. This makes evident a current opportunity for synergistic exchange between the two fields. We also emphasize that hierarchical control in AI is far from solved-despite significant progress in artificial intelligence research over the past years, there remain meaningful challenges in dealing with rich sensation, a broader range of tasks, rapid adaptation or improvisation, as well as object interaction and tool use. However, we are optimistic that we can make progress on these outstanding challenges. Towards this end, we highlight research themes that already have active interest, but which we believe deserve further attention.
Towards full-scale body control. Theories of biological motor control must actually confront the problem of controlling a fullscale body in an environment for a range of tasks-we should aim to build models that both reflect the nervous system and function as controllers. For single-behaviors, motor control in simulation has already afforded a constructive setting in which to define biologically informed models, and various interesting research has been undertaken towards control of bodies, often with an emphasis on biomechanics and muscle-level control 114 . Previous efforts have generally considered control of certain movement behaviors, such swimming in lamprey 115 , control of locomotion in cats 116 or humans 117 , as well as swimming and walking in salamander 118 . Efforts by Delp and colleagues have pushed to model biomechanical control of musculotendon-driven models 119 , including tendon-driven simulations of upper 120 and lower limbs 121 ; these models can be used to analyze specific movements and prepare surgical interventions. Despite the aforementioned efforts, which begin to demonstrate the utility of physics-based simulation for studying neural control, building controllers that capture meaningful diversity of behavior is a tremendous opportunity that remains, at present, underexplored.
To produce controllers that capture the rich behavioral diversity of biological organisms, two broad approaches are possible-train the system to solve diverse tasks or produce datadriven generative models of observed behavior. With task modeling, we acknowledge that real animals can solve a wide range of tasks efficiently, and we produce diverse behavior through defining tasks and learning algorithms. Intriguing forays have been made within neuroscience at handling multiple cognitive tasks 122,123 , albeit with the role of motor control quite restricted. The complementary approach is to produce datadriven generative models of animal behavior; specifically, this involves control of a physically simulated body in an environment with an aim of matching empirically observed reference behavior. As highlighted previously in this review, there has been some research into hierarchical control schemes for which animal or human motion capture is leveraged to produce a low-level movement controller 40,[42][43][44][124][125][126] . A related idea that is more familiar within neuroscience involves building descriptive models of the behavior of an animal 127-129 , but fewer efforts have so far aimed to combine descriptive models of animal behavior with physically realistic control of movement.
The structure of inter-region communication. At present, we do not fully understand what coding schemes brain regions use to communicate, and we are similarly uncertain how to specify information flow in synthetic hierarchical motor control systems. The default scheme for communication between layers or modules of learning systems is for the output of one layer to serve as an input to another layer. However, there are still various open questions-for example, should communication follow prescribed semantics? Learning systems will not necessarily result in interpretable inter-layer communication, unless structure emerges through the learning process or is encouraged explicitly. A second question is how, mechanistically, the outputs of one system should modulate another-whether activations from one layer should serve as simple inputs or if they should nonlinearly modulate their target, such as via multiplicative gating (e.g., see the "Transformer" 130 or FiLM layer 131 ). Yet another question concerns the level of resolution of the signals sent between regions-what is the balance between communicating abstract goals that only partially specify behavior versus communicating rich instructions that precisely tell the lower-level system what to do? Too intense micromanagement makes the function of a lowlevel system redundant, yet in certain cases it may be useful for a high-level system to entirely override low-level behavior.
To ground these issues in neuroscience, we can consider a specific debate in the field-Friston 132 identifies a key difference between classes of proposed hierarchies as having to do with the semantics of signals sent from higher-level controllers to lowerlevel controllers, noting that "In active inference, descending signals are in themselves predictions of sensory consequences." As an alternative, Todorov et al. 63 advocated for the interface between the higher-level and lower-level controllers to be engineered and reflect insight into an appropriate set of variables well suited to the range of behavior. Although it is not yet clear which of these proposals, if either, corresponds to biology, the general point is clear-hierarchical systems must employ a language or code at the interface between layers or regions. Here, we do not propose to resolve this issue, but instead suggest that this area presents an opportunity for neuroscience and AI efforts to collaborate in proposing communication schemes and evaluating which are effective.
Ethological motor learning and imitation. Animals and humans efficiently learn motor behaviors throughout life via active exploration, imitation of conspecifics, and subsequent refinement of skills. Although birdsong is a narrow behavior relative to primate motor control, it serves to illustrate some of the multiple requirements-evolutionarily initialized motor variability ("babbling") in juvenile songbirds is shaped into skilled behavior by a process of vocal imitation learning followed by self-directed rehearsal [133][134][135] . More broadly and across species, intrinsically motivated active exploration is required to learn both about the environment as well as how self-generated behavior can affect the environment 136 . In humans, imitation-based learning begins with observing the movements of others, but can involve inference of the goals of the demonstrator as well as intelligent exploration to imitate their movements or goal-directed activity 137 . Further, it is thought that non-verbal pedagogical behavior is an evolutionary adaptation 138 , and related imitative behavior may have antecedents in the gestural communication already present in some other species 139 .
At present, the conventional forms of artificial "imitation learning" do not yet match the biological inspiration.
Contemporary approaches require that demonstrations are essentially performed on the body of the student (e.g., via teleoperation), granting first-person access to demonstrated behavior. Learning from this information is referred to as behavioral cloning 140 , and usually is implemented as a regression from demonstrated states to actions 141,142 . But recent advances take steps toward more natural imitation. For example, adversarial imitation 143 can scale to humanoids even without access to actions 124 , possibly from only allocentric, video demonstrations 144 . Another particularly exciting and naturalistic development is "one-shot imitation learning", where, after training, the system is presented with a novel demonstration and immediately attempts to reproduce that demonstrated behavior 145 ; this style of approach has also been employed for humanoids 44,146 . As an intermediate representation that supports one-shot observation and imitation of demonstrations, systems may possess an embedding space that simultaneously encodes the demonstrated behavior and reflects what the agent will do. Conceptually, this is similar to the representation identified for mirror neurons 147 .
Concluding remarks
In this review, we have attempted to reflect upon the principles of motor control in biological nervous systems as well as ideas for designing motor control architectures for synthetic systems. Both neuroscience and artificial intelligence research have clearly benefited from taking the perspective that behavior should be optimized to solve tasks. But overemphasis on isolated, straightforward motor control tasks obscures meaningful challenges. Recent work in AI involving efforts to scale motor control to richer and more diverse behaviors, has catalyzed a shift in focus towards hierarchical systems capable of handling a diversity of tasks. This trend points to themes that were central in earlier eras of both artificial intelligence and neurobiological motor control research. Moving forward, we propose that effort should be focused on building models that can generate the flexibility and breadth of motor behavior produced by animals. Once embraced, this perspective will accelerate efforts to reverse engineer the motor system. | 12,191.2 | 2019-12-01T00:00:00.000 | [
"Engineering",
"Computer Science",
"Biology"
] |
Laser recrystallization and inscription of compositional microstructures in crystalline SiGe-core fibres
Glass fibres with silicon cores have emerged as a versatile platform for all-optical processing, sensing and microscale optoelectronic devices. Using SiGe in the core extends the accessible wavelength range and potential optical functionality because the bandgap and optical properties can be tuned by changing the composition. However, silicon and germanium segregate unevenly during non-equilibrium solidification, presenting new fabrication challenges, and requiring detailed studies of the alloy crystallization dynamics in the fibre geometry. We report the fabrication of SiGe-core optical fibres, and the use of CO2 laser irradiation to heat the glass cladding and recrystallize the core, improving optical transmission. We observe the ramifications of the classic models of solidification at the microscale, and demonstrate suppression of constitutional undercooling at high solidification velocities. Tailoring the recrystallization conditions allows formation of long single crystals with uniform composition, as well as fabrication of compositional microstructures, such as gratings, within the fibre core.
Supplementary Figure 2
As-drawn fibre crystallinity. Representative electron backscattered diffraction pattern from an as-drawn 6 at% Ge fibre; the entire fibre cross section showed the same pattern (see Fig. 6 in the main text for a map of the orientation). Brightness and contrast were each increased by 40% over the original image. Emission profile at 514nm. Image of fibre melt zone using a 514 nm narrow band filter (a) and a greyscale value plot from the red line (b). A sharp decrease in greyscale value in seen at the solid-liquid interface due to a difference in emissivity of the two phases. Noise in the central region is due to emission from particles in the interface layer.
Supplementary Figure 9
Diffraction patterns integrated over a range of projection angles φ a) Pure Ge microwire, overlaid with the calculated powder diffraction rings for Ge. The Bragg peaks are sharp, with the radial width dominated by the instrumental resolution. b) Recrystallized SiGe (6 at% Ge) microwire, overlaid with the calculated powder diffraction rings for Si. In this case, the Bragg reflections in (b) are radially broadened, as expected for an inhomogeneous Si-Ge blend. The strong isotropic scattering at low q can be ascribed to the glass surrounding the semiconductor core. The Ge content of the melt is higher than the fibre average due to migration of Ge-rich material to the melt zone (as shown in Supplementary Video 3) and due to the preferential segregation of silicon into the solid phase, as indicated by the phase diagram. A concentration of approximately 9 at% Ge was estimated using the width over which Ge was gathered, giving a melting temperature of 1673K. By considering the ratio between the highest intensity value in the melt and the value at the interface (~1.3), assuming a constant emissivity in the melt and linear response in the detector, the temperature difference can be estimated using the Planck distribution: where is the emissivity, h the Planck constant, c the speed of light, the wavelength, T the temperature and k B the Boltzmann constant. Taking the intensity value in Supplementary Fig. 3 at the melting point B(514 nm, 1673 K) and solving for T at an intensity 1.3 times as great gives a maximum temperature of 1983 K.
Dividing by the distance from the interface to the maximum yields an upper bound of 1.4×10 4 K cm -1 for the temperature gradient .
The same procedure was performed using a 633 nm narrow band filter. A greyscale value ratio of ~1.24 is observed for the frame presented in Supplementary Fig. 4. Solving for T max gives a thermal gradient of 1.5×10 4 K cm -1 , in reasonable agreement given the noise levels in the images.
Supplementary Note 2 Critical Velocity
The breakdown of a planar solid-liquid growth interface during unidirectional The slope of the liquidus can then be determined by differentiating equation (2).
Solving equation (2) and (3) The problems in realising homogeneous growth of SiGe are typically ascribed to constitutional undercooling due to the large miscibility gap (see Fig. 1a, main text).
The severity of the constitutional undercooling depends on the composition of the melt, but also the growth velocity of the phase front, as a higher growth velocity will suppress solute diffusion into the liquid. The Tiller criterion for inhomogeneous growth during unidirectional solidification is still being used today to predict critical growth rates. However, Mullins and Sekerka 8 presented a model that also considers the effect of the difference in thermal conductivity between the phases, the temperature gradients in the phases, the latent heat released, the curvature effects on equilibrium concentration at the interface and capillarity (solid-liquid interface energy) and thus the lateral dimensions of the interface.
In their model, capillarity and high temperature gradients stabilize the phase front.
Thus, higher growth velocities can be used while still suppressing inhomogeneous growth in small dimensions and with large temperature gradients. Additionally, Yim and Dismukes 9 have pointed out that strong thermal gradients will enhance thermal diffusion and further stabilize the phase front.
Experiments were performed with fibre cores of ~130 µm and ~15 µm to see whether a difference in the critical velocity could be measured as a function of radius.
The compositional uniformity across the polished fibre cross-sections indicated whether the critical velocity had been exceeded. The Ge content in electron micrographs of polished cross-sections gives greyscale contrast in backscattered electron (BSE) imaging, and automated analysis of these images was performed using a MATLAB® script. Images were taken with identical microscope settings and were not processed prior to analysis. Polishing minimized topological contrast in the BSE signal, leaving only atomic number contrast. Edge detection was performed to determine the core/cladding interface, the radius, R, and the position of the core centre. Greyscale values were integrated in evenly spaced annuli of width R/100 and were normalized by the average greyscale value for the image to provide a quantitative metric of the fibre inhomogeneity, as seen in Supplementary Fig. 5a. A similar procedure with angular slices of size 2π/40 was performed to visualize the angular distribution, as seen in Supplementary Fig.5 b. The sum of least squares (SLS) for the radial and angular distributions gives a single-value indication of the homogeneity of a fiber, with a compositionally uniform fiber having a SLS of 0.
Plotting the resulting SLS for fibres recrystallized at different velocities, the critical value for the onset of inhomogeneous growth can be determined. Supplementary Fig. 5c presents the radial SLS, and Supplementary Fig. 5d shows the angular SLS for all tested growth velocities of the 6 at% Ge fibers with core diameters of 120 µm. There is a very distinct step between 200 and 1000 µm s -1 .
Manual investigation of SEM images of an untreated sample and a sample treated at 1000 µm s -1 reveals a very similar compositional distribution, as seen in Supplementary Fig. 6. This suggests that either the critical growth rate was reached and inhomogeneous growth occurs, or possibly, insufficient power was available for melting the cores at these high rates.
Experimental details
The text description of the experimental details is presented in the main manuscript and Supplementary Fig. 8 shows the geometry used.
X-ray diffraction analysis
The microwires were measured still encased in glass. X-ray diffraction patterns were collected i) as function of axial position along the microwire length for a chosen angle φ, and, ii) as a function of angle φ for rotations about the microwire long axis at selected axial positions.
Phase identification
The diffraction patterns acquired at selected axial positions with 1° steps in φ were summed to obtain rotationally integrated diffraction patterns, thus being similar to the classical so-called "rotating crystal method". The summed diffraction patterns were compared with the calculated powder diffraction rings of the well-known diamond cubic unit cells for Ge and Si. Examples of integrated diffraction patterns are shown in Supplementary Fig. 9 for a pure Ge microwire and for a recrystallized (100 µm s -1 scan rate) SiGe (6 at% Ge) microwire.
The diffraction patterns for the pure Ge microwire were, as expected, in agreement with the diamond cubic unit cell of Ge with a = 5.66 Å. The diffraction data for recrystallized SiGe exhibited radial broadening centred near the diamond cubic unit cell of Si having a = 5.43 Å.
Axial scans
With a beam spot size of 200 µm the spatial resolution was sufficient to observe single or polycrystalline regions along the length of the microwire. XRD measurements on the recrystallized SiGe microwire (100 µm s -1 scan rate) at different axial positions, obtained for the same sample orientation angle φ, are shown in Supplementary Fig. 10. Diffraction patterns obtained millimetres apart exhibit the same diffraction peaks, revealing longitudinal uniformity of the crystal structure and hence crystallographic coherence over several millimetres. Other samples did not exhibit this coherency, signifying that those samples were polycrystalline with crystalline domains smaller than the volume probed by the X-ray beam.
Rotation scans and 3D reciprocal space analysis
Having obtained diffraction data for a wide range of projection angles φ, threedimensional reconstructions of reciprocal space were calculated. The symmetries of different Bragg reflections were studied, as shown in Supplementary Fig. 11 for the {311} family of reflections, which has q = 3.84 Å -1 and a multiplicity of 24.
Supplementary Fig. 11a and 11b show the ideal reciprocal space structure and the acquired experimental data at approximately the same sample orientation. While the nucleating crystallite is likely to be randomly oriented, the XCT images also indicate that in some cases (cf. Fig. 3a in the main text), the fibre geometry guides and reorients the subsequent crystal growth. | 2,242.8 | 2016-10-24T00:00:00.000 | [
"Materials Science"
] |
Complete Cell Killing by Applying High Hydrostatic Pressure for Acellular Vascular Graft Preparation
Pressure treatment has been developed in tissue engineering application. Although the tissue scaffold prepared by a ultrahydrostatic pressure treatment has been reported, an excessive pressure has a potential to disrupt a structure of extracellular matrix through protein denaturation. It is important to understand the suitable low-pressure condition and mechanisms for cell killing. In this study, cellular morphology, mitochondria activity, and membrane permeability of mammalian cells with various pressure treatments were investigated with in vitro models. When the cells were treated with a pressure of 100 MPa for 10 min, cell morphology and adherence were the same as an untreated cells. Dehydrogenase activity in mitochondria was almost the same as untreated cells. On the other hand, when the cells were treated with the pressure of more than 200 MPa, the cells did not adhere, and the dehydrogenase activity was completely suppressed. However, green fluorescence was observed in the live/dead staining images, and the cells were completely stained as red after above 500 MPa. That is, membrane permeability was disturbed with the pressure treatment of above 500 MPa. These results indicated that the pressure of 200 MPa for 10 min was enough to induce cell killing through inactivation of mitochondria activity.
Introduction
In addition to the synthetic vascular grafts made of poly(ethylene terephthalate) (PET) fibers or expanded poly(tetrafluoroethylene) (ePTFE), bioderived artificial grafts such as acellular grafts are recently commercially available. Not only the homografts (derived from human tissues) but also the xenografts are tried in the clinical stages. Decellularized whole organs are also focused on the tissue-engineered artificial organs for providing the novel treatment of organ failure [1][2][3].
To remove cellular fragments, the tissues were treated with the detergent such as sodium dodecyl sulfate (SDS) and washed thoroughly. Although the SDS treatment is effective for removing the cells, the remaining chemicals may be toxic, and it was reported that the repopulation on the tissue was suppressed by the detergents [4]. As alternatives, some strategies such as enzymatic treatment [5][6][7], hypotonic solution [8], cryochemical treatment [9], and detergent treatment [1,9] have been reported [10].
Pressurization is the useful technology in various fields. Especially, fundamental investigation of the pressure treatment of the bacteria in food science field has been reported in several years [11,12]. Recently, we have developed new decellularization technique by using ultrahigh hydrostatic pressure about 1000 MPa for only 10 min followed by adequate washing process in order to provide suitable tissueengineered scaffolds that are preserving the native mechanical strength [13,14]. In these trials, the complete cell death is of prime importance. Decellularization by the ultrahigh hydrostatic pressure (UHP) treatment does not require any toxic chemical reagents, and the cellular component can be completely washed out without any damage in the extracellular matrix (ECM). In our previous work, decellularized blood vessel was transplanted into pig descending aorta, and rapid endothelialization has been reported [13]. In these grafts, complete elimination of cellular component in addition to the sterilization effect is the critical issues in terms of the reduction of immunogenicity.
In the orthopedic surgical fields, pressure treatment has been investigated as the inactivation method of cancer cells in the tendon and bone [15][16][17]. In spite of several reports on pressure treatment for the decellularization and cell inactivation, the detailed effects of the pressure treatment on the cell death are not documented. In this study, we have fundamentally investigated the killing activity of the UHP treatment for mammalian cell line in order to achieve the complete cell killing or destruction with as low pressure as possible with the least protein denaturation and the other needless effects. Fibroblast, endothelial, and smooth muscle cells that were cellular components of blood vessel tissues were selected, and the effects of high pressure on cellular adhesive property, dehydrogenase activity in mitochondria, and membrane permeability were evaluated. Adhesive property of cells was evaluated by the microscopic observation 3 and 24 hrs after seeding. Dehydrogenase activity and membrane permeability of cells were evaluated by water soluble tetrazolium salts (WST) assay and live/dead staining, respectively. The results may provide the fundamental evidence of the cell death after UHP treatment of the tissue for the decellularization.
Cell
Culture. The cells were grown to confluence. The cultures were placed in a humidified 95% air and 5% CO 2 atmosphere at 37 ∘ C. The culture medium was changed every two days and confluency was typically achieved in 6-8 days. After the confluency of cells, the cells were washed with phosphate saline buffer (−) under the room temperature and immersed in 0.05% of trypsin solution containing 0.01% EDTA. After 2-5 minutes, the cells were rounded up. Trypsin was neutralized, and 2 × 10 5 cells were seeded on 10 cm cultured dish and cultured until confluency.
Pressurizing of Cells.
A suspension of 10 5 cells/mL in the culture medium was packed in a plastic bag with the cell culture medium and put in a sample chamber of cold isostatic pressurization machine (Dr. Chef; Kobelco, Kobe, JAPAN) with transmission fluid. The pressure was increased up to 100, 200, 300, 500, and 980 MPa at the rate of 65.3 MPa/min and kept for 10 min. After decreasing the pressure to atmospheric pressure at the same rate, the cells were seeded into the 24well cell culture plate (Iwaki, Tokyo, JAPAN) and cultured for 3 and 24 hours, and the morphology was observed under the microscope (Nikon TE-200; Tokyo, Japan).
WST-8 Assay.
After the cultivation for a given period of time on 24-well plate, 10 L of WST-8 assay reagent (Dojindo, Kumamoto, Japan) was added to each well and incubated at 37 ∘ C for 1 hour. Then, the plate was gently sharked, and the absorbance at 450 nm was measured by using multiplate reader (Thermo Varioskan Flash; Thermo scientific, USA ).
Live/Dead Staining. UHP treated cells (4 × 10 4 cells)
were washed with PBS, and then the cells were suspended into live/dead solution which was prepared by following the provided manual (Live/Dead Cell Staining Kit II; PromoCell GmbH, Germany). The cells were incubated at 37 ∘ C for 1 hour. After the incubation, images were obtained using an Olympus FluoView confocal laser scanning microscope (Olympus, Tokyo, Japan).
Statistical Analysis.
Quantitative results were shown as mean ± standard error of the mean. Difference between each data was evaluated by using Student's t-test. Significant difference was defined when < 0.01.
Cell Adhesive Property after Pressure Treatment.
The effect of pressure treatment on the cell adhesive property was evaluated by microscopic observation (Figure 1). Cells were treated at various pressures and seeded onto tissue culture plates. Cells treated with 0 and 100 MPa adhered after 3 hrs culture ( Figure 2) and spread out after 24 hrs (Figure 3). Their morphologies were almost similar to the untreated cell. On the other hand, when the cells were treated with the pressure above 200 MPa, they became round shape and did not adhere on the culture dish even after 24 hours (Figure 3). The round shape morphology was not changed during the 24-hour cultivation.
Mitochondria Enzyme Activity of UHP Treated Cells.
To evaluate the enzymatic activity in the mitochondria, WST cell viability assay was carried out. WST-8 activity against the pressure treatment is shown in Figure 4. The dehydrogenase activity in cells treated with 100 MPa was almost the same as the untreated cells. At above 200 MPa, no enzymatic activity was observed, suggesting that mitochondria enzyme was inactivated at 200 MPa. Pressure lower than 100 MPa does not affect mitochondrial enzyme activity. In the first step, the cell suspension was packed into the plastic bag. In the second step, the cells were treated with pressure. Finally, the cell killing was evaluated by cell morphology, WST-8 assay, and live/dead staining. The picture was cited from http://www.kobelco.co.jp/machinery/products/ip/product/cip/cip 05.html.
Discussion
In our previous work, the blood vessel tissue was treated with the pressure of 1000 MPa for the decellularization [13]. The cellular component was completely eliminated by the decellularization process. This study investigated the effect of the pressure treatment on cell killing. After the 100 MPa pressure treatment of cells, the cells attached on the culture dish and spread out after the seeding for 24 hours. Moreover, the dehydrogenase activity in the mitochondria was almost the same as the untreated cells. Fluorescence images of CLSM in Figure 5 were almost the same as the untreated cells after 100 MPa pressure treatment, indicating no membrane permeability. These results suggested that the 100 MPa pressure treatment did not induce the cell killing. When the cells were treated with a 200 MPa, the cells were floating on the culture dish after the cultivation at 3 and 24 hours, and the dehydrogenate activity was completely suppressed. When treated at higher 500 MPa, the cells were stained as red in live/dead staining images, suggesting that the membrane permeability largely increased. The cell killing by the pressure BioMed Research International treatment was summarized in Figure 6. Under the lowpressure condition, the cells were alive. When the pressure was raised to around 200 MPa, the dehydrogenase activity was suppressed, and the cells were killed. When the cells were treated with higher than 500 MPa, dehydrogenase inactivation and membrane permeability destruction would synchronously occur. To induce the cell killing before washing out of cell fractions from the tissue, pressure treatment higher than 500 MPa would be beneficial for the decellularization.
The cells treated at 1000 MPa were completely removed from the tissue because the decellularization was accomplished by not only the cell killing but also the deformation of cell membrane and its barrier activity. Although large number of papers have discussed the effect of pressure treatment on bacteria, there were few reports that argue the effect on mammalian cells. In the case of mammalian cells, the sensitivity against the pressure seems to be higher than the bacteria due to the structural complexity of cells. Florian-Dominiquenaal (2005) reported that the pressure treatment of around 200 MPa induced the cell death of human chondrocytes and chondrosarcoma cells [17]. The inactivation of cellular outgrowth by pressure has been studied for the treatment of cancer therapy in orthopedic surgery [15,16]. Mitochondria activity is largely related to an important function for cell growth such as the polymerization of actin filaments and adenine triphosphate (ADP) conversion. Therefore, 200 MPa treatment would induce the cell killing through an inactivation of mitochondria activity. Ishii et al. (2004) reported that bacterial cytoskeleton FtsZ polymers were inactivated by the pressure treatment of 40 MPa, and colony formation of E. coli was inhibited [18]. Although sensitivity of the pressure treatment would depend on a cell type, suppression of cytoskeleton-related enzyme activity might be directly affected by the cell killing.
Many reports discussed the effect of the pressure on bacterial cell viability defined by colony formation assay, cell wall hydrolase activity, ATP assay, and membrane potential [11,12,[18][19][20][21][22][23]. Malone et al. (2002) reported that colony formation unit (CFU) was largely decreased by the pressure of around 200-300 MPa, and this tendency dependeds on bacteria strain [19]. The similar pressure dependency on the bacterial growth has been illustrated in many reports [12,19,20,23]. The effects of pressure treatment on the membrane permeability and electric potential have also been studied [12,19,20,[22][23][24]. Malone et al. (2002) reported that cell wall hydrolase activity increased with the pressure until 400 MPa. The CFU was suppressed under the pressure of around 200 MPa, and then the deformation of membrane permeability was elicited. It is also reported that the high pressure treatment increased the cell permeability [19]. Ulmer et al. (2000) reported that the membrane activity of the bacteria was exponentially reduced, and the treatment at 500 MPa for 10 min was enough to inactivate the membrane [12]. Membrane potential also continuously decreased with increase of the pressure until 400 MPa [20,22]. These data supported that the features of cellular membrane are largely related to the cell killing activity of the pressure treatment. However, the effect might not be a critical factor for cell killing in mammalian cells because 400-500 MPa was needed to induce a damage of the cell membrane but we found that 200 MPa is enough to kill cells. The pressure treatment decreases the metabolic and enzymatic activity [12,19,20,22,24]. The effect of esterase, ATPase, and cell wall hydrolase activities on cell growth was investigated. The pressure for inactivation of enzyme activity was largely dependent on enzyme, and the enzymatic activity was decreased during 200-400 MPa. Ishii et al. (2004) reported that the cell survival and morphology were largely correlated with the cytoskeleton polymers [18]. Therefore, the inactivation of the enzyme in mitochondria would mainly induce the cell killing.
The presented data supported that 200 MPa was enough for cell killing and 1000 MPa which we have been using for decellularization treatment was effective to remove the cells in addition to cell killing because of the enhanced membrane permeability at 500 MPa. These findings would lead us to an effective decellularization process. It is expected that detail evaluation of enzyme activity and structural analysis of the cellular component would provide significant information about the mechanisms of the cell death under the pressure treatment.
Conclusion
In this study, we suggested that the cell killing was completely induced by 200 MPa treatment through inactivation of enzyme activity in mitochondria. It is well known that the mitochondria are related to the polymerization of actin filaments and supply of the cellular energy. The pressure treatment of 200 MPa could induce the cell killing by the inactivation of mitochondria enzyme activity. On the other hand, cell membrane permeability was also changed by the pressure of more than 500 MPa. The sensitivity to the pressure would be largely related to the components of cells.
In conclusion, we successfully define the effect of pressure treatment on cell killing. | 3,210.2 | 2014-04-30T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Protease-activated receptor 2 induces ROS-mediated inflammation through Akt-mediated NF-κB and FoxO6 modulation during skin photoaging
Long-term exposure to ultraviolet irradiation to skin leads to deleterious intracellular effects, including reactive oxygen species (ROS) production and inflammatory responses, causing accelerated skin aging. Previous studies have demonstrated that increased expression and activation of protease-activated receptor 2 (PAR2) and Akt is observed in keratinocyte proliferation, suggesting their potential regulatory role in skin photoaging. However, the specific underlying molecular mechanism of PAR2 and the Akt/NF-κB/FoxO6-mediated signaling pathway is not clearly defined. In this study, we first used the UVB-irradiated photoaged skin of hairless mice and observed an increase in PAR2 and Gαq expression and PI3-kinase/Akt, NF-κB, and suppressed FoxO6. Consequently, increased levels of proinflammatory cytokines and decreased levels of antioxidant MnSOD was observed. Next, to investigate PAR2-specific roles in inflammation and oxidative stress, we used photoaged hairless mice topically applied with PAR2 antagonist GB83 and photoaged PAR2 knockout mice. PAR2 inhibition and deletion significantly suppressed inflammatory and oxidative stress levels, which were associated with decreased IL-6 and IL-1β levels and increased MnSOD levels, respectively. Furthermore, NF-κB phosphorylation and decreased FoxO6 was reduced by PAR2 inhibition and deletion in vivo. To confirm the in vivo results, we conducted PAR2 knockdown and overexpression in UVB-irradiated HaCaT cells. In PAR2 knockdown cells by si-PAR2 treatment, it suppressed Akt/NF-κB and increased FoxO6, whereas PAR2 overexpression reversed these effects and subsequently modulated proinflammatory target genes. Collectively, our data define that PAR2 induces oxidative stress and inflammation through Akt-mediated phosphorylation of NF-κB (Ser536) and FoxO6 (Ser184), which could be a critical upstream regulatory mechanism in ROS-mediated inflammatory response.
Introduction
The skin is the largest and most complex organ in the body, which is in direct contact with the external environment. Cumulative exposure to ultraviolet (UV) irradiation damages the skin, leading to photoaging [1,2]. Premature skin aging is characterized by epidermal thickening, hyperpigmentation, coarse wrinkles, angiogenesis, immune and inflammatory responses, and reactive oxygen species (ROS) production [1]. At the molecular level, it is accepted that NF-κB is one of the core transcription factors that becomes activated and plays a critical role in the induction of proinflammatory cytokines, such as IL-6, IL-1β, IL-1α, and cyclooxygenase-2 (COX-2) [3]. During this elevated inflammatory response, intracellular ROS levels increase and ubiquitous targeting can induce oxidative stress even in adjacent cells, leading to molecular oxidative damage [4]. However, the detailed signaling pathways, action mechanisms, and regulatory signaling molecules of this process are not fully defined.
Protease-activated receptors (PARs) are a subfamily of G proteincoupled receptors (GPCRs), and are seven-transmembrane domain receptors comprising PAR1, 2, 3, and 4. PAR2 is cleaved and activated by serine proteases, including coagulation factors VIIa, tissue factor (TF), trypsin, kallikrein, and others. These enzymes cleave the extracellular Nterminus, unmasking endogenous tethered peptide sequences for the receptor binding loop for receptor activation [5,6]. PAR2 is well expressed in the skin epidermis, and receptor activation becomes prominent in UV-irradiated skin and cultured keratinocytes [7]. PAR2 is known to exert regulatory functions in the epidermal barrier, keratinocyte differentiation, cutaneous tumorigenesis, inflammation, and pigmentation [8].Canonical PAR2 signaling includes pathways in which receptor activation stimulates G protein signaling by coupling to the G protein α subunits such as G αi , G αq , and G α12/13 [9]. In one of the canonical signaling pathways, activation of Gαq leads to it coupling to phospholipase C (PLC), thereby activating the PLC-mediated hydrolysis of phosphatidylinositol 4,5-biphosphate (PIP 2 ) to diacylglycerol (DAG) and inositol 1,4,5-triphosphate (IP 3 ). IP 3 then initiates Ca 2+ release into the cytosol [9]. Upon elevated cytosolic Ca 2+ concentration, proline-rich protein tyrosine kinase 2 becomes activated through phosphorylation at the Y402 residue [10,11]. This can further activate p85 through direct interaction, leading to the initiation of the PI3-kinase (PI3K)/Akt signaling pathway [9,12]. It is generally accepted that Akt activates NF-κB by phosphorylating the Ser 536 residue during skin photoaging [13]. However, the downstream signaling pathways of the PAR2-mediated signaling pathway in the induction of the inflammatory response during skin photoaging is not clearly defined.
Among the downstream mediators of PI3K/Akt, forkhead box O (FoxO) transcription factors are the main downstream mediators of Akt [14]. FoxOs are negatively regulated by Akt signaling and are known to exert inhibitory effects on cell proliferation in various cell types. In skin, PI3K signaling regulates keratinocyte proliferation by activating Akt and its subsequent target molecules, including FoxO. For example, in psoriasis, an immune-mediated inflammatory disease, activation of PI3K/Akt and loss of FoxOs have been observed [15]. In the search for regulatory transcription factors that have activity during oxidative stress, FoxO6 has been recently reported for its protective role by inducing antioxidant gene expression during intrinsic and extrinsic skin aging [16]. This study demonstrated that treating UVB-irradiated B16F10 cells with FoxO6 suppressed intracellular ROS and peroxynitrite (ONOO − ) levels, subsequently leading to decreased melanin content. Such suppressive effects on the melanin content were not observed in the FoxO6 knockdown experiment [16]. In addition to skin, FoxO6 transactivates the antioxidant genes MnSOD and catalase in human liver cancer cells [17]. As elevated oxidative stress is one of the major characteristics of skin photoaging, the modulation of intracellular antioxidant enzymes through FoxO6 could play an essential role in ameliorating accelerated aging.
In this study, we investigated whether PAR2-mediated Akt activation could phosphorylate NF-κB and FoxO6 and induce cytokines and suppress antioxidative enzymes MnSOD and catalase, which could subsequently promote the intracellular inflammatory response and ROS production, respectively. Our data showed that the PAR2-mediated Akt/ NF-κB/FoxO6 signaling pathway led to ROS-mediated inflammation during skin photoaging, suggesting that this signaling axis can be an efficient therapeutic target for the prevention of skin photoaging.
Mice
HRM-2 hairless mice (8 weeks old, male) were purchased from Hoshino Laboratory Animals (Saitama, Japan). The mice were housed under a 12 h/12 h light/dark cycle and given ad libitum access to standard laboratory diet and water. Mice were exposed to UVB radiation (UVP CL-1000) at 150 mJ/cm 2 every other day for 28 days to induce skin photoaging. After 28 days of the experiment, mice were euthanized with carbon-dioxide and the dorsal skin tissue was obtained and quickly frozen in liquid nitrogen for additional analysis. For histological analysis, the obtained skin was fixed in 10% formalin. In additional experiment, hairless mice were topically applied with the PAR2-specific antagonist GB83. GB83 was dissolved in a vehicle mixed with ethanol and propylene glycol at a ratio of 3:7 and then applied daily at 0.4 μM or 5 μM to the dorsal surface of the mice skin. The mice were exposed to UVB (UVP CL-1000) at 150 mJ/cm 2 every other day for 28 days. After 28 days, dorsal skin was obtained and frozen in a liquid nitrogen tank for additional analysis. These animal experiments were approved by the Pusan National University Institutional Animal Care and Use Committee Homozygous PAR2-knockout (KO) mice of the PAR2− /− ; strain and 8-week-old male B6.Cg-F2RL1 tm1Mslb /J mice were kindly provided by Dr. Hak-Sun Yu (Department of Parasitology and Tropical Medicine, School of Medicine, Pusan National University, South Korea). After a 1week habituation period, mice were exposed to UVB radiation (UVP CL-1000) at 90 mJ/cm 2 every other day for 3 weeks. Due to greater acute responses observed in the strain, mice were exposed to lower UVB dosage than that of HRM2 hairless mice. After 3 weeks of the experiment, mice were euthanized with carbon-dioxide and the dorsal skin was obtained and quickly frozen in liquid nitrogen for additional analysis. This experiment was reviewed and approved by the Pusan National University Institutional Animal Care and Use Committee (Approval Number PNU-2020-2615).
Cell transfection
Cell transfection was performed using Lipofectamine 3000 (Invitrogen, Carlsbad, CA, USA). Briefly, 6 × 10 5 cells per well were seeded in 6-well plates and incubated at 37 • C in humidified 5% CO 2 atmosphere. When the seeded cells reached approximately 70% confluence, they were incubated with PAR2 plasmid (2 μg) and Lipofectamine 3000 complex in normal growth media for 24 h at 37 • C in humidified 5% CO 2 atmosphere. Then, cells were washed with ice-cold 1X PBS and pellets were collected at 12,000 x g at 4 • C for 15 min for further analysis. The pellets were resuspended in total lysis buffer solution composed of NaCl (150 mM), Triton X-100 (1%), sodium deoxycholate (1%), SDS (0.1%), Tris-HCl pH 7.5 (50 mM), EDTA (2 mM, pH 8.0) supplemented with protease inhibitor and phosphatase inhibitors for extraction of total protein from the cells. The human PAR2 construct for cell transfection was kindly provided by Dr. Morley Hollenberg (University of Calgary, Calgary, CA).
siRNA-mediated gene silencing
Pre-designed PAR2 siRNA was purchased from Santa Cruz Biotechnology. siRNA was transfected using Lipofectamine 3000 (Invitrogen) following the manufacturer's protocol. Cells were seeded to be approximately 50-60% confluent at the time of transfection. The final concentration of siRNA was 10 nM. Cells were incubated at 37 • C in 5% CO 2 for 24 h prior to transfection.
Separations of cytosol and nuclear extraction in skin tissue
Frozen skin tissues (150-200 mg) were ground using liquid nitrogen in a mortar and pestle. Ground skin tissues were homogenized in 1 mL hypotonic lysis buffer. Buffer A was composed of KCl (10 mM
Histological analysis of skin tissue
Skins were fixed in 10% formalin, embedded in paraffin and 5 μm sections were stained with hematoxylin and eosin (H&E) and was examined using Motic AE31 Inverted Microscope (Motic, Kowloon Bay, Hong Kong).
Measurement of ROS production
ROS production was measured using 2 ′ ,7 ′ -dichlorofluorescein diacetate (DCFDA) protocol. Briefly, nonfluorescent DCFDA was oxidized to the highly fluorescent 2,7 ′ -dichlorofluorescein (DCF) in the presence of intracellular esterase and reactive species. DCFDA (25 μM) was added to the skin cytosol fraction to obtain a total volume of 250 μL. Fluorescence intensity was measured and quantified every 5 min for a total of 30 min using a fluorescence plate reader at an excitation wavelength of 485 nm and emission wavelength of 535 nm.
Immunohistochemistry
For immunostaining, skin sections were treated with 3% H 2 O 2 in distilled water to block residual peroxidase for 10 min at RT. After treatment, skin sections were incubated with Tris-buffered saline (TBS) containing 0.1% Triton-X-100 and normal goat serum at 37 • C for 1 h, and then incubated with PAR2 primary antibody (1:200 dilution) (Santa Cruz Biotechnology) in TBS-T at 4 • C overnight. Sections were then further incubated with secondary goat anti-mouse IgG-horseradish peroxidase-conjugated (HRP) antibody (1:500 dilution) (Santa Cruz Biotechnology) at RT for 1 h. Sections were then stained with diaminobenzidine (DAB) solution, mounted with Dako mounting medium (Dako, Glostrup, Denmark), and covered with cover slips. Stained images were acquired using a Motic AE31 Inverted Microscope (Motic, Kowloon Bay, Hong Kong).
The same amount of protein (8-10 μg) was loaded and separated via SDS-PAGE using 7-15% gels and then transferred to PVDF membranes at 25 V for 10 min using a semi-dry transfer method. Membranes were then immediately incubated with a blocking buffer consisting of 10 mM Tris (pH 7.5), 100 mM NaCl, and 0.1% Tween-20 containing 5% non-fat milk. Membranes were blocked for 1 h at RT and then incubated with specific primary antibodies (1:1000-1:2000 dilution) at 4 • C overnight on a shaker. This was followed by HRP-conjugated secondary incubation (1:10000 dilution) for 1 h at RT. Antibody labeling was detected using enhanced chemiluminescence according to the instructions provided by the manufacturer. Molecular weights were determined using broadrange protein markers.
Reverse transcription and real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR)
Frozen skin tissues (150-200 mg) were excised and ground in a mortar and pestle in liquid nitrogen. The ground skin tissues were used for RNA isolation using the RNeasy Mini Kit (Qiagen, Hilden, Germany).
Total amount of 2 μg of total RNA was used to synthesize cDNA. qRT-PCR analysis was performed to detect mRNA levels using the SYBR Green and CFX Connect System (Bio-Rad Laboratories Inc., Hercules, CA, USA). All primers were designed and purchased from Bioneer (Daejeon, South Korea). The primer sequences used are listed in Supplementary Tables 1 and 2 The primer concentration used for qRT-PCR analysis was at 10 pmol concentration.
Statistical analyses
Analysis of variance was used to determine the statistical significance of the differences among the groups. Fisher's protected least significant difference post-hoc test was used to test the significant differences between group means. P-values < 0.05 were considered statistically significant.
UVB-induced skin photoaging was associated with increased oxidative stress and inflammatory responses in hairless mice
In skin exposed to long-term, persistent UV irradiation, ROS Fig. 1. Increased inflammatory response and oxidative stress in UVB-irradiated dorsal skin of hairless mice. After HRM2 mice were repeatedly exposed to UVB radiation (150 mJ/cm 2 ) for 28 days, skin tissue was excised and homogenized to separate the nuclear and cytosolic fractions (N = 5 per group). (A) A photograph of the dorsal skin of control and UVB-irradiated mice was taken for phenotype analysis. (B) A photograph of skin tissue homogenates was taken to visualize discoloration. (C) H&E-stained histological image of skin section of control and UVB-irradiated mice was captured and (D) epidermis thickness was quantified using Motic Image Plus 2.0 software (N = 5 per group). (E) The protein expression levels of IL-6 and IL-1β were measured using western blotting and (F) quantified using ImageJ software. α-tubulin was the loading control of the cytosolic fractions. (G) mRNA levels of IL-6 and IL-1β were quantified using qPCR (N = 5 per group). (H) The protein levels of catalase and MnSOD were measured using western blotting and (I) quantified using ImageJ software. α-tubulin was the loading control of the cytosolic fractions. (J) ROS production was determined by measuring the DCF fluorescence level in the skin cytosolic fraction (N = 5 per group). All data are represented as the mean ± SEM, and significance was determined using an unpaired t-test; *P < 0.05 vs. control. production and the inflammatory response is promoted, which is mediated by the activation of diverse intracellular signaling molecules such as NF-κB [19]. We irradiated dorsal skin of hairless mice with UVB (150 mJ/cm 2 ) every other day for 28 days. We confirmed the induction of oxidative stress and the inflammatory response in our in vivo experimental skin photoaging model. We observed that the lightness of UVB-irradiated dorsal skin of hairless mice was significantly decreased in comparison to control mice skin (Fig. 1A). Consistently, UVB-irradiated dorsal skin homogenates showed discoloration and was darker than that of control mice skin (Fig. 1B), suggesting greater melanin content. To further verify induction of skin photoaging, we examined skin by H&E staining and observed a significant increase in the thickness of the epidermis of UVB-irradiated mice skin in comparison to control mice ( Fig. 1C and D). At the molecular level, we measured proinflammatory cytokines and observed a notable increase in both protein and mRNA expression levels of IL-6 and IL-1β (Fig. 1E, F and G). The antioxidative enzyme MnSOD was observed to be downregulated in UVB-irradiated dorsal skin compared to that in control mice skin, The protein expression level of FoxO6 was detected using western blotting. TFIIB was the loading control of the nuclear fraction. All data are represented as the mean ± SEM and significance was determined using an unpaired t-test; *P < 0.05 vs. control.
whereas no changed was observed in catalase level ( Fig. 1H and I). As a consequence of decreased levels of antioxidant enzyme, we observed increased ROS production in the skin cytosolic fraction (Fig. 1J). These data indicate that characteristics of UVB-induced skin photoaging were observed at both phenotypical and molecular levels in hairless mice.
UVB-induced skin photoaging was associated with increased PAR2/ Akt pathway and NF-κB/FoxO6 modulation in hairless mice
Previous studies have reported that PAR2 expression is upregulated in UVB-irradiated human skin, suggesting its role in skin inflammation [7]. Here, we confirmed that protein expression of PAR2 and G-protein Gαq subunit was notably upregulated where the PAR2 expression was particularly increased in the epidermis area of skin in UVB-irradiated mice in comparison to that in control mice ( Fig. 2A, B and C). In the TFIIB was the loading control of the nuclear fraction. (J) The mRNA expression level of FoxO6 was measured using qRT-PCR (N = 5 per group). All data are represented as the mean ± SEM and significance was determined using an one-factor analysis of variance (ANOVA); *P < 0.05. canonical signaling pathway, Gαq is an upstream G-protein subunit that mediates the release of endoplasmic reticulum (ER)-stored calcium into the cytosol, which is well observed in UVB-irradiated skin. In general, such cellular effects are mediated through activation of the PLC-IP 3 signaling pathway [20]. To confirm whether UVB-irradiated skin inflammation is mediated by PAR2 coupling to the Gαq subunit, we investigated the physical interaction between PAR2 and Gαq in the UVB-irradiated skin cytosolic fraction. The degree of physical association between PAR2 and Gαq was notably increased in UVB-irradiated mice skin compared to that in control mice skin (Fig. 2D and E). Furthermore, we considered the PLC-mediated PI3K/Akt activation pathway for cell proliferative effects [21] observed during skin photoaging. We observed an increase in PI3K and Akt phosphorylation (Ser473) in UVB-irradiated skin of hairless mice (Fig. 2F and G). As is well established, we confirmed increased p65 phosphorylation by Akt at Ser536 in the nucleus (Fig. 2H) and p65 mRNA level (Fig. 2I). Although not much is known about its role in inflammation during skin photoaging, we detected that mRNA level of FoxO6 was decreased, among other FoxO isoforms (Fig. 2J). And the protein level of FoxO6 was decreased in the nucleus (Fig. 2K). These data indicate that PAR2 upregulation occurs during skin photoaging, suggesting its potential association with the Akt/NF-κB-mediated inflammatory response and The protein expression levels of phosphorylated Akt, total-Akt (I) phosphorylated p65, p65, and (J) total FoxO6 were detected using western blotting (N = 5 per group). β-actin was the loading control. All data are represented as the mean ± SEM, and significance was determined using an onefactor ANOVA; *P < 0.05. Akt/FoxO6-mediated MnSOD suppression. This upregulation leads to increased ROS production, further exacerbating the inflammatory response during skin photoaging.
PAR2 inhibition suppressed oxidative stress and inflammatory response in hairless mice during skin photoaging
To determine the role of PAR2 in oxidation and inflammatory responses through Akt/NF-κB/FoxO6 modulation, we used the PAR2specific antagonist, GB83. GB83 was dorsally applied to the UVBirradiated skin of hairless mice (Fig. 3A). The PAR2 antagonist significantly suppressed both UVB-induced skin discoloration and epidermal thickness in comparison to control mice skin (Fig. 3B, C and D). We next measured the mRNA levels of IL-6 and IL-1β, which were decreased by GB83 at both low and high doses (Fig. 3E). We also measured the expression of MnSOD, which was upregulated by GB83 treatment at both low and high doses in comparison to in vehicle-treated UVB-irradiated mice skin (Fig. 3F). Subsequently, we measured ROS production with DCF fluorescence and found that ROS levels were decreased by GB83 treatment at both low and high doses in comparison to vehicle-treated UVB-irradiated mice skin (Fig. 3G). At the molecular level, we detected that p65 phosphorylation in the nucleus was suppressed by GB83 treatment, notably at low doses, in comparison to that in vehicle-treated UVB-irradiated mice skin (Fig. 3H). The protein and mRNA expression of FoxO6 was increased by GB83 treatment at both low and high doses in comparison to vehicle-treated UVB-irradiated mice skin. Otherwise, FoxO6 phosphorylation (inactivated form) in the nucleus was suppressed by GB83 treatment (Fig. 3I and J). These results confirm that PAR2 mediates oxidative stress and inflammation during skin photoaging. The protein expression levels for phosphorylated Akt, total-Akt, FoxO6, phosphorylated p65 and p65 were measured in the UVBirradiated HaCaT cells overexpressing PAR2 plasmid and were quantified using ImageJ software (N = 3). β-actin was the loading control of the whole lysis. The mRNA levels of (E) IL-6 and IL-1β were quantified under the same experimental condition. All data are represented as the mean ± SEM (N = 3 per group) and significance was determined using an one-factor ANOVA; *P < 0.05.
PAR2 KO mice exhibited decreased oxidative stress and inflammation during skin photoaging
To confirm the pivotal role of PAR2 in oxidative stress and inflammatory responses, we used PAR2-deficient (PAR2 KO) mice. The mice were subjected to UVB irradiation (90 mJ/cm 2 ) every other day for 3 weeks (Fig. 4A). We first compared the dorsal skin photographs of wildtype (WT) and PAR2 KO mice with or without UVB irradiation at the end of the experiment. The UVB-irradiated PAR2 KO mice showed suppressed skin barrier destruction and epidermal thickness in comparison to UVB-irradiated WT mice (Fig. 4B, C and D). To investigate changes in the inflammatory response, we measured the mRNA levels of IL-6 and IL-1β and demonstrated that these were suppressed in UVB-irradiated PAR2 KO mice in comparison to those in UVB-irradiated WT mice (Fig. 4E). We detected mRNA levels of MnSOD and catalase and found that MnSOD was increased in UVB-irradiated PAR2 KO mice compared to that in UVB-irradiated WT mice. Otherwise, catalase mRNA level was not significantly changed (Fig. 4F). Similarly, the level of ROS production was suppressed in UVB-irradiated PAR2 KO mice in comparison to in UVB-irradiated WT mice (Fig. 4G). The phosphorylation levels of Akt and p65 were also suppressed in UVB-irradiated PAR2 KO mice in comparison to that in UVB-irradiated WT mice (Fig. 4H and I). FoxO6 levels were increased in UVB-irradiated PAR2 KO mice compared to that in UVB-irradiated WT mice (Fig. 4J). These results confirm the PAR2specific regulatory role in oxidative stress and inflammation through Akt/NF-κB/FoxO6 signaling modulation during skin photoaging.
PAR2 induced inflammatory response through the Akt/NF-kB/ FoxO6 signaling pathway in HaCaT cells
To confirm the in vivo results, we performed an in vitro experiment using UVB-irradiated HaCaT cells. We deleted PAR2 using si-PAR2, which was efficiently achieved at a concentration of 10 nM (Fig. 5A). We treated cells with 10 nM si-PAR2 and observed that Akt and p65 phosphorylation was decreased and FoxO6 was increased (Fig. 5B). To confirm this result, the PAR2 plasmid was transfected in HaCaT cells (Fig. 5C), which led to increase in Akt and p65 phosphorylation and decrease in FoxO6 expression in these cells (Fig. 5D). Next, we examined mRNA levels of IL-6 and IL-1β and demonstrated that they had significantly increased in UVB-treated and PAR2 plasmid-transfected cells ROS levels in HaCaT cells were measured using DCF fluorescence levels (F) mRNA levels of antioxidant catalase and MnSOD genes were quantified. All data are represented as the mean ± SEM (N = 3 per group) and significance was determined using an unpaired t-test and one-factor ANOVA; *P < 0.05. (G) Graphical description of PAR2 inducing ROS-mediated inflammation through Akt-mediated NF-κB and FoxO6 modulation during skin photoaging. (Fig. 5E). These results further confirm the in vivo results of PAR2mediated inflammatory response through modulation of the Akt/NF-κB/FoxO6 signaling pathway.
FoxO6 suppressed PAR2-Akt-mediated ROS production in UVBirradiated HaCaT cells
During photoaging, the biological role of FoxO6 was found to play a role in melanogenesis in the B16F10 murine melanoma cell line [16]. However, the role of FoxO6 in inflammation in keratinocytes has not been investigated. To investigate the signaling pathway of Akt-mediated FoxO6 regulation, we used HaCaT cells and treated cells with the PAR2 agonist, SLIGRL-NH 2 . Furthermore, we demonstrated that phosphorylation of Akt and FoxO6 was increased in a timely manner (Fig. 6A), confirming that PAR2 regulates the Akt-FoxO6 axis. Pretreatment with LY294002, a PI3K inhibitor, followed by a post-treatment with a PAR2 agonist suppressed FoxO6 phosphorylation (Fig. 6B). To further investigate the regulatory role of PAR2-mediated FoxO6 in oxidative stress in keratinocytes, we used the constitutively active form of FoxO6 adenovirus for treatment (Fig. 6C). FoxO6 suppressed PAR2 agonist-indued ROS production (Fig. 6D). Furthermore, FoxO6 by its own expression suppressed intracellular ROS levels (Fig. 6E). When we measured the mRNA levels of catalase and MnSOD and observed that Ad-FoxO6 treatment upregulated the mRNA levels of MnSOD gene (Fig. 6F). these results suggest a beneficial role of FoxO6 in suppressing ROS production in keratinocytes. Furthermore, these data suggest a novel role of FoxO6 in suppressing ROS levels, thus decreasing the inflammatory response in keratinocytes.
Discussion
Acute and chronic exposure to UVB irradiation induces characteristic molecular changes that include ROS formation, DNA and protein damage, and inflammation. These changes cumulatively lead to accelerated skin aging and development of skin cancer [22,23]. Focusing on the inflammatory response, initial biological changes to the skin as a result of UV irradiation are skin redness or erythema due to increased blood vessel dilation and increased vascularization [24,25]. In inflamed skin, angiogenesis and vascular remodeling are characterized by high vasculature permeability, elevated blood flow, inflammatory cell infiltration, and activated vascular endothelial cells expressing cytokines, further exacerbating inflammatory conditions. As such, diverse signaling pathways and action mechanisms have been reported. Here, we show that PAR2 is a critical upstream mediator in both skin inflammation and intracellular oxidative stress during photoaging (Fig. 4E, F and G), based on a phenotype analysis of PAR2-deficient mice. Moreover, our results delineate that Akt-mediated NF-κB and FoxO6 modification is a downstream pathway for the PAR2-induced upregulation of proinflammatory cytokines and downregulation of antioxidative gene transcription in epidermal keratinocytes (Figs. 5 and 6).
PAR2 belongs to a superfamily of GPCRs and can be activated by endogenous and exogenous serine protease enzymes that cleave the extracellular N-terminal domain of the receptor, leaving tethered ligand peptide that acts as an activator of the receptor [26]. Using serine proteases or PAR-specific synthetic peptides, the effects of PAR activation in diverse disease progression were demonstrated in both in vitro and in vivo experimental models. Previous findings demonstrated the biological roles of PAR2 in the function of innate and immune responses and development of inflammatory and allergic responses [27]. Focusing on the inflammatory response during skin photoaging, it has been determined that the immune system, inflammation, and coagulation are simultaneously activated to defend against potentially damaging stimuli in the skin [28,29].
In the canonical signaling pathway, PAR2 couples with PLC to hydrolyze PIP 2 into DAG and IP 3 , which binds with IP 3 R on the ER membrane and subsequently releases ER calcium into the cytosol, whereas DAG activates protein kinase C (PKC) [30,31]. In keratinocytes, the PLC signaling pathway been demonstrated to play a critical regulatory role in skin inflammation using a PLC-deficient mouse model [32,33]. To elucidate the potential downstream signaling pathway of PAR2-mediated PLC activation during photoaging, the role of PI3K-mediated Akt and p65 activation was examined. Akt and p65 activation was observed in the proinflammatory status of mast cells of cutaneous neurofibroma [34]. PI3K/Akt, a well-known downstream signaling kinase of PLC, has been reported to induce cell proliferation and survival. An increase in Akt and NF-κB activation was also reported in both chronologically and extrinsically aged skin with an inflammatory response, whereas in vivo inhibition of Akt activity led to suppression of NF-κB activation [13,35]. Although the role of Akt and NF-κB signaling has been reported, its involvement in the PAR2 signaling cascade during skin photoaging has not been defined. Our results in photoaged skin revealed that interaction of PAR2 with Gαq correlated with an increase in PI3K/Akt activation and NF-κB phosphorylation levels ( Fig. 2F and H), whereas the reversal effects were observed in PAR2 KO mice ( Fig. 4H and I). These reversal effects were confirmed in keratinocytes treated with PAR2 siRNA and irradiated with UVB (Fig. 5B). During UVB irradiation, active metabolism of arachidonic acid to prostaglandin appears to be mediated by the upregulation of COX-2, a rate-limiting enzyme that mediates its metabolic conversion [36][37][38]. To further emphasize the role of COX-2 during accelerated skin aging, it has been demonstrated to play a critical role in the aging process [39]. During inflammation-associated aging progression, the expressions of cytokines such as IL-6 and IL-1β and COX-2 mRNA and protein are upregulated by redox-sensitive transcription factor NF-κB; these molecules act as a free radical or ROS source, leading to increased oxidative stress during aging [40]. Based on our data, the PAR2-Akt axis mediates NF-κB activation, which induces upregulation of cytokines as well as COX-2 (data not shown). These findings emphasize the regulatory role of PAR2-mediated Akt activation in the ROS production and inflammatory response during skin photoaging.
Inhibition of PAR2 activation using GB83 decreased the inflammatory response and oxidative stress and decreased NF-κB/FoxO6 phosphorylation during skin photoaging (Fig. 3). Furthermore, PAR2 deficiency in the skin is known for its anti-inflammatory effects. Using photoaged PAR2 KO mice, we demonstrated that such NF-κB phosphorylation and FoxO6 suppression were reversed ( Fig. 4I and J). In keratinocytes, PAR2 siRNA treatment with UVB irradiation confirmed these in vivo results, whereas PAR2 plasmid overexpression in HaCaT cells irradiated with UVB reversed these effects (Fig. 5). Treatment of adenoviral FoxO6 suppressed PAR2-mediated ROS production (Fig. 6D), emphasizing its protective role against ROS and inflammation in keratinocytes. These data further support the previous report that showed suppression of the inflammatory response and itching in atopic dermatitis by treatment with pepducin, a PAR2 signaling inhibitor [41]. Another study demonstrated the delayed onset of inflammation with defects in P-selectin-mediated leukocyte rolling in PAR2-deficient mice in comparison to WT mice [42]. PI3K/Akt-mediated FoxO6 has been reported to have a redox regulatory role by increasing antioxidative gene expression levels and inhibition of proinflammatory mediators [17,[43][44][45][46]. Akt deficiency led to resistance to H 2 O 2 , which caused premature senescence in mouse embryonic fibroblasts (MEFs); this effect was abrogated by overexpression of loss-of-function FoxO in MEFs [46]. Transcriptionally inactive p-FoxO6 is unable to induce its target antioxidative functional enzymes, MnSOD and catalase, thus failing to protect from ROS production [47]. In photoaged skin, Akt activation suppresses transcriptionally active FoxO6 levels, which subsequently leads to an enhancement of oxidative stress in the skin. In turn, this process leads to microphthalmia-associated transcription factor (MITF)-mediated skin melanogenesis in UVB-irradiated murine melanoma cells [16]. To further elucidate the potential relationship between FoxO6 and proinflammatory mediators, it was previously shown that FoxO6 interacts with NF-κB in endotoxin-induced inflammation through Akt phosphorylation in the liver [17]. For the first time, we examined and demonstrated the upstream regulatory role of PAR2 for the antioxidative transcription factor FoxO6 in the oxidative response during skin photoaging in vivo (Fig. 3I, J and 4J). In the keratinocyte cell line, FoxO6 overexpression using adenoviral FoxO6 upregulated MnSOD and led to suppression of ROS ( Fig. 6E and F). The current study is limited by the detailed mode of FoxO6 activity, including posttranslational modifications, subcellular localization, interaction with coregulators, and stability [47]. Therefore, further studies are necessary to examine PI3K/Akt-mediated FoxO6 modification during skin photoaging. It is of interest to investigate whether FoxO6 could directly interact with proinflammatory mediators to suppress its inflammatory effects as a defense mechanism during skin photoaging.
In summary, for the first time, we showed that PAR2-Gαq mediated the elevation of ROS and inflammation in PI3K/Akt-mediated phosphorylation of NF-κB (S536) in a FoxO6 (S184)-dependent manner, both in vivo and in vitro. Our data demonstrated that PAR2-Gαq couples and induces oxidative stress and the inflammatory response through Akt/NF-κB/FoxO6 phosphorylation, leading to a subsequent increase in proinflammatory cytokine production and a decrease in antioxidative MnSOD enzyme during skin photoaging (Fig. 6G). The significance of the current finding is that the PAR2-Akt signaling axis is a critical upstream regulator of NF-κB and FoxO6 phosphorylation in the ROS-mediated inflammatory response during skin photoaging. The PAR2-Akt signaling axis could therefore be a potential therapeutic target for managing inflammation in skin photoaging.
Declaration of competing interest
There are no conflicts of interest to declare. | 7,379.4 | 2021-05-26T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Guaranteed Stability of Sparse Recovery in Distributed Compressive Sensing MIMO Radar
Low SNR condition has been a big challenge in the face of distributed compressive sensing MIMO radar (DCS-MIMO radar) and noise in measurements would decrease performance of radar system. In this paper, we first devise the scheme of DCS-MIMO radar including the joint sparse basis and the joint measurement matrix. Joint orthogonal matching pursuit (JOMP) algorithm is proposed to recover sparse targets scene. We then derive a recovery stability guarantee by employing the average coherence of the sensing matrix, further reducing the least amount of measurements which are necessary for stable recovery of the sparse scene in the presence of noise. Numerical results show that this scheme of DCS-MIMO radar could estimate targets’ parameters accurately and demonstrate that the proposed stability guarantee could further reduce the amount of data to be transferred and processed.We also show the phase transitions diagram of the DCS-MIMO radar system in simulations, pointing out the problem to be further solved in our future work.
Introduction
Nowadays, detection of targets which are stealth or in strong interference has become an important requirement for radar system.Distributed placed antennas enable the system to view targets from multiple angles, providing spatial diversity and reducing the target radar cross sections (RCS) scintillations.Therefore, distributed MIMO radar can be employed to detect stealth targets.The difference of signals transmitted by each transmitter provides several information channels for distributed MIMO radar, enabling the MIMO radar system to achieve superior spatial resolution as compared to a traditional radar system [1][2][3][4][5].But the amount of data from these information channels is always too huge to be processed, increasing the difficulty in hardware designing.Compressive sensing is used in MIMO radar to estimate the DOA in [6,7] and it is shown that CS-MIMO radar system could estimate targets' parameters accurately, using much fewer data than that needed in conventional MIMO radar.
Viewing targets from different angles with separated antennas, we can detect the stealth targets and estimate targets' parameters as position and velocity [8].The theory of distributed compressive sensing (DCS) was proposed in [9].In a standard DCS scenario, signals measured by sensors are each individually sparse on some basis or all the signals share the locations of nonzero coefficients in the sparse vectors.Under the right conditions, a decoder at the collection point can jointly reconstruct all of the signals precisely.Such property of DCS happens to fit the distributed MIMO radar and in this paper we provide a practical scheme for DCS-MIMO radar system.The sparse targets scene of DCS-MIMO radar is shown in Figure 1.
In the sparse targets scene of DCS-MIMO radar, as Figure 1 shows that the antennas are distributed and placed and they detect the region of interest from different angles.The pentagrams in Figure 1 represent the targets.Using distributed compressive sensing, distributed MIMO radar could precisely reconstruct the sparse scene with considerably lower amount of measurements than required by the Nyquist theorem.Many recovery algorithms have been proposed for compressive sensing in recent years.Algorithms inspired by MP include OMP [10], tree matching pursuit [11], stagewise OMP [12], CoSaMP [13], and IHT [14].Different from algorithms of the match pursuit class, there are also FOCUSS [15] and sparse Bayesian learning (SBL) [16].In DCS scenario, sparse recovery algorithms need to make the most of the common component and innovations of the received signals which are treated as a signal ensemble [9].Many joint sparse recovery algorithms were employed to recover the sparse vector jointly, such as OSGA and SOMP [17].In this paper, we propose a joint orthogonal matching pursuit (JOMP) algorithm to exploit the special structure of the joint sparse vector.It is demonstrated that using DCS with JOMP algorithm is more effective and more accurate than processing signals in each receiver with CS method separately.
Nonetheless, in order to ensure that the DCS-MIMO radar system could be realized in the application of engineering, many problems need to be discussed in depth.One of these problems is finding the least amount of measurements to ensure the stable recovery in low signal-to-noise ratio (SNR) condition.Article [18] establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system.However, the theorem in [18] is too strict for radar application; hence, we can further reduce the amount of data.In this paper, we propose a stability guarantee for stable recovery in DCS-MIMO radar, using the average coherence of sensing matrix.We then find the least amount of measurements for our DCS-MIMO radar system using JOMP algorithm in low SNR condition.
This paper is organized as follows.In Section 2, we introduce DCS-MIMO radar system and give joint sparse model of the received signal ensemble and we process sparse reconstruction with JOMP algorithm.In Section 3, we propose the method to find the least amount of measurements to ensure stable recovery and we give the stability guarantee of DCS-MIMO radar.Numerical results are shown in Section 4 to demonstrate the effectiveness of the scheme of DCS-MIMO radar and the stability guarantee.Phase transitions diagram is shown to raise a new problem to be solved in future.Finally, Section 5 concludes this paper.
DCS-MIMO Radar
2.1.Signal Model.In this section, we describe DCS-MIMO radar system and give joint sparse representation of the received signal ensemble.We consider a system with transmitters and receivers and there are targets in the region of interest.We assume that all targets are moving in a two-dimensional plane.However, without loss of generality, we can also consider the three-dimensional occasion.We further assume that each of the targets contains multiple individual isotropic scatterers.We can express the collection of these scatterers as one point scatterer which represents the RCS center of gravity of these multiple scatterers [8].The RCS center of the th target is located at Suppose the waveform transmitted by the th transmitter is ().These signals travel in space and reflect off the surfaces of the targets and then are captured by the receivers.Further, we assume that the cross correlations between these waveforms are close to zero for different delays [8].Let denote the attenuation corresponding to the th target between the th transmitter and the th receiver.The signal arriving at the th receiver can be expressed as where is the delay and is the Doppler shift corresponding to the th target and () denotes the additive noise in the th receiver: where u and u denote the unit vector from the th transmitter to the th target and the unit vector from the th target to the th receiver, respectively.⟨⋅, ⋅⟩ is the inner product operator.Hence, ⟨k , u ⟩ is the velocity component from the th target and the th receiver, and ⟨k , u ⟩ is the velocity component from the th transmitter to the th target. is the carrier frequency and is the speed of propagation of the wave in the medium.
Joint Sparse Representation.
We define the target state vector = [ , , V , V ]; hence, the important properties of the target (position and velocity) are specified by .The whole target's state space is divided into possible values { , ∀ = 1, 2, . . ., }.Hence, each of the targets is associated with a state vector belonging to this grid.If the presence of a target at contributes to received signal in the th receiver, then we define The sampled outputs of () are given as (), = 1, 2, . . ., .Then, we arrange () into a × matrix : where is the sparse basis corresponding to the th receiver.Then, the joint sparse basis of all receivers can be expressed as If the is the state of the th target, we define = , where denote the attenuation value of the th target observed by the th receiver.Otherwise, the = 0. We arrange into a ×1 vector = [ 1 , 2 , . . ., ] , where [⋅] denotes the transpose of [⋅].We get joint sparse representation of the signal ensemble as In the expression of the measurement vector mentioned above, Ψ is known and only depends on the actual targets present in the illuminated area.The nonzero entries of represent the target attenuation values and the corresponding indices represent the positions and velocities.Further, the indices of nonzero entries of each are always the same; in other words, all signals in each receiver share the location of nonzero entries.So, we call this joint sparse modeling.
Joint Sparse Recovery.
In the previous section, we get the joint sparse representation of the signal ensemble received by receivers.The theory of compressive sensing said that we can reconstruct the vector from far fewer samples than that contained in the vector r.If the measurement matrix is represented by Φ, then the coherence between Φ and Ψ measures the largest correlation between them.Φ must be such that it has as little coherence with Ψ as possible.Since random matrices satisfy low coherence properties, we generate the entries of the ( ) × () dimensional measurement matrix Φ from independent Gaussian distribution as the measurement matrix of the th receiver, where ≪ .Considering the special structure of the joint sparse basis Ψ, we design the joint measurement matrix Φ as Employing this structure of joint measurement matrix, we can measure the signal ensemble simultaneously and design submeasurement matrix according to the different situation of each receiver such as the different SNR.On the other hand, it is much more convenient to optimize the joint measurement matrix Φ without ignoring the independence of the receivers.
So, the new measurement vector in the presence of noise is where Γ = ΦΨ is defined as sensing matrix and e is the measured noise.
In order to find the properties of targets, we need to recover the joint sparse vector from the measurement y.It is an optimization problem with a noisy setting as follows: The recovery of joint sparse vector from measurement y = Γ + e is one of the key points of this paper.Considering the special structure of the joint sparse vector , each has the same location of nonzero entries, which could be treated as joint sparsity.To define joint sparsity, we view as a combination of groups-assumed throughout the paper to be a length -with [] denoting the th group, that is, where = .Entries which are corresponding to the same target state vector are arranged into the group [].Similarly, we can represent Γ as a combination of subgroups Γ[]: We propose an extension of matching pursuit algorithm called joint orthogonal matching pursuit (JOMP) that exploits the knowledge of joint sparsity.Based on orthogonal matching pursuit (OMP) algorithm, we divided the columns of the sensing matrix Γ = ΦΨ into groups as shown in (11).Columns which are in the same group are corresponding to the same information cell (target state vector).In JOMP, we first initialize reconstructed vector (0) = 0 and the residual r (0) = y.In each subsequent iteration , we project the residual vector r ( −1) onto all the subgroups of Γ and pick the Input: y-sampled measurement vector Φ-( CS ) × ( ) measurement matrix Initialize All: reconstruct vector (0) = 0; the residual r (0) = y; the sparsity ; sensing matrix Γ = ΦΨ; the index set Λ 0 = ⌀; Loop: set = 1 and repeat until > (1) Arrange the columns of Γ corresponding to the th cell into the matrix Γ () , get the product of each Γ () ( = 1, 2, . . ., ) and the residual (0) , find the max product and get the corresponding index = arg max =1,2,..., |⟨r −1 , Γ ⟩| (2) Update the index set Λ = Λ −1 ∪ { } and update the set of reconstruct atoms group Γ ( ) that has the highest correlation with the residual.We update the estimated reconstructed vector: We finally update the residual as When the residual was finally updated by calculating the product of these groups and the received signal, we find the groups which are most correlated with the received signal y.Then, we get the reconstructed sparse vector θ.The procedure of JOMP algorithm is shown as in Algorithm 1.
Stability Guarantee for DCS-MIMO Radar
Large amounts of data will bring great difficulties for the design of hardware system.So, it is important to find a lower limit of data amount to guarantee the stability of sparse recovery, providing benefits for implementation of the DCS-MIMO radar system.Hence, the stability guarantee for DCS-MIMO radar is the focus of this paper.
In most practical situations, it is not sensible to assume that the available data y obey precise equality y = Γ, where Γ ∈ R × .A more plausible scenario assumes sparse approximate representation: that there is an ideal noiseless signal x 0 = Γ 0 , but that we can observe only a noisy version y = x 0 + e, where ‖e‖ 2 ≤ .
The concept of coherence of the sensing matrix Γ is usually used as a criterion of the property of the sensing matrix.Assuming that the columns of Γ are normalized to unit 2 -norm, it is defined in terms of the Gram matrix G = Γ Γ.With (, ) denoting entries of this matrix, the coherence is The theorem proposed in [18] said that if a noiseless sparse signal x 0 = Γ 0 satisfies the inequality = ‖ 0 ‖ 0 < (1/ + 1)/4, the deviation of the ( 1, ) representation from The parameter in Donoho's theorem indicates the worst case of the coherence between some columns of the sensing matrix Γ.Since the sensing matrix Γ of DCS-MIMO radar is overcomplete and it could be regarded as a redundant dictionary, some columns would not be chosen during the process of sparse recovery; that is, even if is too large to guarantee the incoherentness of the sensing matrix Γ, DCS-MIMO radar may still estimate targets' positions correctly by choosing the incoherent columns of Γ when the targets are in proper positions.
We simulated the distribution of the normalized cross correlation between the columns of the sensing matrix as Figure 2 shows.According to the definition, the coherence of the simulated sensing matrix is 0.8837.The value of is so large that the simulated sensing matrix seems not qualified according to the theorem in [18].However, in practice, the sensing matrix is incoherent enough for DCS-MIMO radar since most of the cross correlation values are small.It is also demonstrated in a particular experiment as simulation 4.1 that DCS-MIMO radar is able to estimate the targets' parameters accurately based on this sensing matrix.So, the parameter is too strict to assess the coherence of the sensing matrix of DCS-MIMO radar.A new evaluation criterion which could indicate the overall coherence of the redundant dictionary of DCS-MIMO radar is needed.Therefore, we use to denote the average value of the coherence of each pair of columns in the sensing matrix Γ: where is the mean of the nonzero absolute values of these off-diagonal elements in the Gram matrix G. Compared to , indicates the overall coherence of the redundant dictionary rather than the extreme conditions.By calculating the results in Figure 2, we get the average coherence of the simulated sensing matrix which is 0.1792.That is, even if the is large, small could still guarantee DCS-MIMO radar's estimation accuracy.Hence, using to indicate the performance of DCS-MIMO radar is more practical than using .In this paper, we use the average coherence to assess the coherence of the sensing matrix Γ.Then, the stability guarantee for DCS-MIMO radar system with is proposed.
Theorem 1.Let the overcomplete sensing matrix Γ have average coherence (Γ).If the sparse representation of the noiseless signal 0 = Γ 0 satisfies then the deviation of the ( 1, ) representation from 0 , assuming ≥ , can be bounded by Proof.The stability bound can be posed as the solution to an optimization problem of the form max Put in words, we consider all representation vectors 0 of bounded support, all possible realizations of bounded noise, and we ask for the largest error between the ideal sparse decomposition and its reconstruction from noisy data.Defining V = − 0 and similarly = θ − 0 , we can rewrite the above problem as where is the support of nonzero entries in 0 with complement .Therefore, we get a further increase in value by replacing the feasible set in (20) with Writing this out yields a new optimization problem with still larger value: Then, we eliminate the noise vector , using Expanding the feasible set of (21) using this observation gives max where we denote Δ = + .
The constraint ‖Γ‖ 2 ≤ Δ is not posed in terms of the absolute values in the vector , complicating the analysis; we now relax this constraint using incoherence of Γ.The Gram matrix of Γ is G = Γ Γ, and the coherence used in this paper is the average off-diagonal amplitude .Let 1 be the -by- matrix of all ones.Let G min denote the matrix whose offdiagonal elements are the smallest one of the off-diagonal entries of the Gram matrix G. Similarly, let G denote the matrix whose off-diagonal elements are .The constraint can be relaxed: Using this, (25) is bounded above by the value max This problem is invariant under permutations of the entries in which preserve membership in and .It is also invariant under relabeling of coordinates.So, assume that all nonzero entries in 0 are concentrated in the initial slots of the vector; that is, = 1, 2, . . ., .
We can use (16) to find the least amount of measurements for stable recovery and confirm the guaranteed stability of DCS-MIMO radar.Compared to theorem proposed by Donoho et al. in [18], stability guarantee proposed in this paper further reduces the amount of data necessarily to be transferred and processed in DCS-MIMO radar for stable recovery of the sparse targets scene.This stability guarantee gives theoretical foundation for hardware implementation of DCS-MIMO radar and it also demonstrates the feasibility of targets' parameters estimation in low SNR condition by DCS-MIMO radar.
International Journal of Antennas and Propagation
Numerical Results
4.1.Targets' Parameters Estimation.In order to prove the effectiveness of the DCS-MIMO radar, we simulated a small scene with 2 transmitters and 2 receivers.We use the common Cartesian coordinate system.The transmitters are located at t 1 = [100, 0] m and t 2 = [200, 0] m, respectively.The receivers are located at r 1 = [0, 100] m and r 2 = [0, 200] m, respectively.The sample rate is = 100 MHz.We choose the number of samples in each receiver to be = 80; therefore, y has 160 entries.We divided the position space of the target into 11 × 13 grid points and the target's velocity space into 5 × 5 grid points.Therefore, the total number of possible target states = 3575.We consider the presence of 2 targets.Hence, the sparse vector formed by 7150 entries has only × = 4 nonzero entries corresponding to the targets.The positions and the velocities of the targets are given as The attenuations corresponding to the two targets are We assume the SNR for the receivers are SNR1 = 2 dB and SNR2 = 3 dB.
From Figure 3, we can see that, using JOMP algorithm, the DCS-MIMO radar system is able to estimate the positions and velocities of the targets accurately in the presence of noise.Since it is not possible to plot the position and velocity on the same plot, we plotted the estimates of position and velocity separately.In order to compare with the conventional CS method that uses CS to reconstruct the sparse vector in each receiver separately, we use 1000 independent Monte Carlo runs to generate these results.Here, we define that a successful estimation is that every receiver estimates the positions of targets accurately.Then, in Figure 4, we can see that reconstruction using joint sparse modeling has a higher reconstruction probability than the conventional CS approach.Therefore, we demonstrate the advantages of the reconstruction using joint sparse modeling and JOMP algorithm.the relationship between and the number of measurements .For comparison, we also show the relationship between and the number of measurements according to Donoho's theorem in [18].Considering the joint sparsity of the received signal ensemble, we get the stability guarantee of the DCS-MIMO radar system based on the theorem proposed in Section 3. In Figure 5, when the value of the average coherence or the coherence is lower than the stability guarantee, stable recovery of sparse targets scene could be achieved.By comparison of Figures 5(a) and 5(b), we can find that the necessary number of measurements reduced from 120 to 80 in our simulation settings by the stability guarantee in this paper.
Guaranteed Stability of
Figure 6 shows the probability of reconstruction with different SNR.We can see that when the measurements reach 80 which satisfy the stability guarantee proposed in this paper, the reconstruction probability is close to that which is corresponding to the 120 measurements.Therefore, it is demonstrated that our stability guarantee further reduces the necessary measurements for stable recovery of DCS-MIMO radar in low SNR condition, compared to Donoho's stability guarantee.
Phase Transitions Diagram.
In order to further study the impact of the joint sparsity and the amount of measurements on the performance of sparse reconstruction, we showed the phenomenon of phase transitions of joint sparse recovery in our DCS-MIMO radar system based on a large number of experiments.The phase transitions diagram would visually show the relationship between the joint sparsity, the amount of measurements, and the probability of sparse reconstruction.
The in Figure 7 denotes the ratio of the length of signal to the number of measurements .Success rates of 90%, 50%, and 10% are indicated by the lower set of blue, green, and red curves, respectively.When the [, ] drops in the area under these curves, the DCS-MIMO radar system could reconstruct the sparse scene by the probability of the curve or higher probability.
We can find that when DCS-MIMO radar system is in the condition of low SNR, the performance falls rapidly while the joint sparsity increases.Hence, the new problem appears and we will somehow need to solve the joint sparse recovery with higher joint sparsity.This is the main content of our latter part of study.
Conclusion
In this paper, we have devised the scheme of the DCS-MIMO radar system and proposed the joint sparse modeling to get joint sparse representation of the received signal ensemble.This scheme provided us with the method for processing signal from different channels simultaneously and getting better performance in low SNR condition.We also proposed a modificatory stability guarantee for sparse recovery in the DCS-MIMO radar, employing the average coherence of the sensing matrix.This stability guarantee is demonstrated to be effective for the DCS-MIMO radar and could further reduce the necessary amount of measurements for stable recovery.On the other hand, since the stability guarantee has given us the least amount of measurements which is needed for stable recovery in low SNR condition, we will have theoretical foundation when designing the hardware of the DCS-MIMO radar.We have provided analytical results to show the feasibility of the DCS-MIMO radar system and the reliability of the proposed stability guarantee.At last, we raised a new problem with higher joint sparsity by analyzing the phase transitions diagram of DCS-MIMO radar, which points out the next research priorities in our future work.
4 )
Update the residual r = y − Θ ŝ , = + 1 End loop Output: the index set Λ and the nonzero value ŝ Algorithm 1: The procedure of JOMP algorithm.
Figure 2 :
Figure 2: The distribution of normalized cross correlation between the columns of Γ.
Figure 4 :
Figure4: Reconstruction probability for the system using joint sparse modeling and the system using CS in each receiver separately as a function of SNR.
Figure 5 :
Figure 5: Coherence and average coherence of the sensing matrix of different number of measurements with the different stability guarantee.
Figure 6 :
Figure 6: Reconstruction probability for single target under different SNR condition.
With the noise of 2 dB
Note that if is the minimizer of ‖ 0 + V‖ 1 under these constraints, then relaxing the constraints to all satisfying ‖ 0 + ‖ 1 expands the feasible set.However, this is true only if ≥ since otherwise V = 0 is not a feasible solution.
Joint Sparse Recovery.According to the theorem in Section 3, we simulated the Gram matrix of the sensing matrix of the DCS-MIMO radar system.After calculating the average off-diagonal amplitude , we show | 5,705 | 2015-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
How to Reasonably Wait for the End of the World: Aquinas and Heidegger on the Letters to the Thessalonians
For Christians, coping with a crisis requires a proper expectation of the end of the world. This article will discuss the experience of Thessalonians’ persecuted community, who receive solace and orientation from Saint Paul’s eschatological teaching. I will focus on Aquinas’ and Heidegger’s reading of Saint Paul’s letters to the Thessalonians. Their interpretation reveals two opposite ways of waiting for the end of the world, which pertain to two different modes of human rationality: (1) the calculative reasoning of those who claim to know when the end will happen, and (2) the lucidity and sobriety of the true believers, who accept that nobody can know the day of the second coming of Christ and thus we have to continually prepare and be ready for it. Which way is the most suited to handle a crisis like the current pandemic? The calculative reason is necessary for fields like medicine, which are crucial in defeating the pandemic. However, when dealing with a crisis that brings us unexpected and unknown circumstances, we also need the virtuous, sober, and awakened attitude promoted by Saint Paul in his letters and highlighted by Aquinas and Heidegger.
from Saint Paul: the "demonstration of the Spirit and power" (1 Corinthians, 2:4), namely the manifestation of Christian faith in Paul's relationship with the Thessalonians. I will highlight this experiential demonstration against the backdrop of Heidegger's phenomenology of the Christian life. For this reason, I will proceed not in chronological but in methodological order, starting with Heidegger then moving to Aquinas.
Heidegger's reading of the Letters to the Thessalonians
In The Phenomenology of Religious Life, Heidegger analyzes the early Christian life depicted by Saint Paul and Saint Augustine. For him, the Christian life is the paradigm of human factical life and substantially informs his analytic of human existence in Being and Time. The historical is the main feature of this paradigm that will carry on in Heidegger's later work. Human life belongs to a history of salvation in which Jesus' Incarnation and Resurrection are the basis for His second coming and the world's end. These historical dynamics influence even the later phase of his ontology when he is mostly concerned with Being and the history of Being. This article will not engage with Heidegger's later eschatology of Being,³ but will focus on his early commentary on Thessalonians. The matter at hand here is Heidegger's insight into what an authentic Christian life looks like in expectation of the parousia. His critique of the inauthentic eschatological attitude reveals some issues that we see in the current pandemic.
What does a crisis⁴ have to do with the end of the world? In Heidegger's reading of the Thessalonians, human tribulations are a foretaste of the apocalypse. The enduring of affliction and the preparedness for the end of the world are intertwined: The awaiting of the παρουσία of the Lord is decisive. The Thessalonians are hope for him not in a human sense, but rather in the sense of the experience of the παρουσία. The experience is an absolute distress (θλιπις) which belongs to the life of the Christian himself. The acceptance (δεχεσθαι) is an entering-oneself-into anguish. This distress is a fundamental characteristic, it is an absolute concern in the horizon of the παρουσία, of the second coming at the end of time.⁵ To clarify this intertwining, Heidegger presents the Christian proclamation, the kerygma, as a phenomenon described according to the phenomenological method: "The content proclaimed, and its material and conceptual character, is then to be analyzed from out of the basic phenomenon of proclamation."⁶ Heidegger distinguishes three directions of sense in a phenomenon: (1) the content-sense (Gehaltssinn), what is experienced in it; (2) the relational sense (Bezugssinn) that regards the person who experiences it; and (3) the enactment-sense (Vollzugssinn), namely how the relational meaning is enacted.⁷ Thus for Heidegger, the kerygma's content can only make sense in its enactment in the everyday lives of Christians. In his eyes, Paul condemns the laziness of some Thessalonians because it breaks these directions of sense. These Thessalonians withdraw from enactment and detach themselves from what is calling them to transform their lives. They lose themselves in speculations about the end's date and behave as if they would be observers, not players in the history of salvation.⁸ Enactment has the value of proof understood in a phenomenological-religious sense. In such a sense, the proof is not a syllogism that conduces to theoretical ideas, like the idea of God's existence or the justification of evil. The proof that Heidegger aims at should, on the contrary, regain the existential basis of Christianity constituted by the Gospel's message as the early communities live it. Such proof is not a Beweis, a proof in the theoretical sense, which offers ideas captured in insight, but rather an Erweis, a showing that takes place in the enactment of faith. Heidegger uses the term proof (Beweis) in quotation signs to indicate that he does not understand proof in the theoretical mode: "the 'proof' [Beweis] and the showing [Erweis] of what is proclaimed lie not in having-had insight; rather, the proclamation is 'showing' (apodeixis) of the 'spirit,' 'force.' /…/ Communication of existence; and the apostle is tool of this showing."⁹ Heidegger refers here to Paul's first letter to Corinthians, where Paul distinguishes between words of wisdom and demonstration of the Spirit and power: When I came to you, brothers, proclaiming the mystery of God, I did not come with sublimity of words or of wisdom. For I resolved to know nothing while I was with you except Jesus Christ, and him crucified. I came to you in weakness and fear and much trembling, and my message and my proclamation were not with persuasive [words of] wisdom, but with a demonstration of spirit and power, so that your faith might rest not on human wisdom but on the power of God (I Corinthians, 2:1-5).
The contents of the Christian kerygma are thus not separated from the experience of faith. It is not because the experience creates the contents but because the contents do not make sense outside this experience. The letters of Saint Paul are significant in this regard because their doctrinal content does not stand apart from Paul's concrete experiential situation with the Thessalonians. The epistolary style is indeed the expression of their writer and his situation.¹⁰ Such a situation is neither a static complex of conditions (for example, Paul's age, the time, space, etc.), nor a flow of events, but reflects the common experience of Paul and the Thessalonians: "How does Paul, in the situation of a letter-writer, stand to the Thessalonians? How are they experienced by him? How is his communal world given to him in the situation of writing the letter?"¹¹ Although Paul's environment is foreign to us today, we can still empathize with his situation, which Heidegger analyzes in existential terms, detached from its material character. This kind of approach is what he calls formal indicationnot formal in an abstract sense, but in the sense of pointing in the direction of something that each personal life must enact. In the case of formally indicative concepts, Heidegger shows, "the meaning-content of these concepts does not directly intend or express what they refer to, but only gives an indication, a pointer to the fact that anyone who seeks to understand is called upon by this conceptual context to undertake a transformation of themselves into their Dasein."¹² The formal indication opens access to the Christian message from the standpoint of religious experience.¹³ Paul's situation lies beyond the distinction static-dynamic and is a having-become (in Greek genestai) that transforms Paul and the Thessalonians. Paul's arrival transforms the life of the Thessalonians. At the same time, Paul's faith is transformed and uplifted by their having-become: "[…] for Paul the Thessalonians are there because he and they are linked to each other in their having-become."¹⁴ Heidegger stresses that this having-become is not accidental but is incessantly co-experienced, such that their Being (Sein) now is their having-become (Gewordensein). The having-become also triggers the self-awareness of Paul and the Thessalonians. Heidegger remarks that the frequent use of the word genestai together with words such as "you remember" and "you know" indicates that the knowledge of the Thessalonians arises from the situational context of their Christian life experience.
Paul and the Thessalonians' having-become is an acceptance of the kerygma both in distress and joy: the anguish over a cataclysm that will end the world and the joy of renewal and resurrection. Their acceptance triggers an absolute turning-around, turning toward God in two directions: serving God and waiting for the end of the world. The distress and the joy pertain to the very contents of the proclamation and make up the horizon within which one can understand the end of the world. The obstacles and suffering that Paul and the Thessalonians have endured are part of this distress. Paul speaks indeed out 14 Ibid., 65. of weakness and distress: "You yourselves know, brothers and sisters, that our coming to you was not in vain, but though we had already suffered and been shamefully mistreated at Philippi, as you know, we had courage in our God to declare to you the gospel of God in spite of great opposition." (I Thessalonians, 2: 1) The awaiting of the second coming of Christ is thus entering into anguish. One cannot grasp the concept "end of the world" without entering into life's anguish and eliminating all false securities and illusions. In our COVID time, we must avoid a passive acceptance of merely reassuring explanations, like "The virus is a punishment for our sins," or "God will turn this for good." The British theologian N. T. Wright thinks that our first reaction to the COVID crisis must be deeply personal and engage our relationship with Jesus. Instead of rushing into edifying explanations, the Christian must, first of all, acknowledge the frailty of human beings and the hardship of earthly life. Praying and lamenting about the present situation are human reactions that stem from the faith in the God who suffered and sacrificed Himself on the Cross: "It means that, when the world is going through great convulsions, the followers of Jesus are called to be people of prayer at the place where the world is in pain."¹⁵ Like Heidegger, Wright places the experience of distress at the heart of Christian life. He observes that one-third of Psalms lament the sorrow of earthly life, and we should thus not be ashamed to lament about COVID.
Lamenting does not entail giving up on the action. On the contrary, believers are called, amid distress, to work relentlessly, to carry on with their job. Although He lamented human death and sin, Jesus still intervened to alleviate suffering and bring redemption. In the same way, Wright thinks, Christians should remain present on the front battle against COVID and not retreat in the background. Throughout centuries of plagues, preachers and pastors remained at their posts and helped, rather than retreat in a passive consolation about the coming kingdom of God.¹⁶ For this reason, the calculation of the "When" is alienating, excluding a life transformation. The authentic waiting for the end of the world is not a mere awareness of a future event but assimilation of this proclamation into the present life. The uncertainty about the "When" is constitutive for this waiting, as it requires to be awake and sober. Christians must be aware that the day of the Lord will come like "a thief in the night" and must vigilantly prepare for that moment. Those who believe they know the moment indulge in the peace and security of their knowledge. They are absorbed by what life brings them, remaining stuck in the worldly.
There is no security for the Christian life; the constant insecurity is also characteristic for what is fundamentally significant in factical life. The uncertainty is not coincidental; rather, it is necessary. In order to see it clearly, one must reflect on one's own life and its enactment. Those "who speak of peace and security" (5:3) spend themselves on what life brings them, occupy themselves with whatever tasks of life. They are caught up in what life offers; they are in the dark, with respect to knowledge of themselves. The believers, on the contrary, are sons of the light and the day.¹⁷ For Heidegger, as for Aquinas, those engaged in speculations about the moment of the second coming wait inauthentically for the end of the world. They see it as a "What," not as a "How," and do not engage in existential enactment to transform their lives. "Paul's answer to the question of the When of the παρουσία is thus an urging to awaken and to be sober. Here lies a point against enthusiasm, against the incessant brooding of those who dwell upon and speculate about the "when" of the παρουσία. They worry only about the "When," the "What," the objective determination, in which they have no authentic personal interest. They remain stuck in the worldly."¹⁸ Moreover, the "When" of the parousia is related to the "How" insofar as only the authentic believers will be able to recognize the Antichrist who pretends to be divine. The second letter highlights that the Antichrist will test the believers. Those who will be deceived are those who did not accept the enactment of the kerygma, which requires one to enter the anguish of life. They get lost "in their highest bustling activity with the 'sensation' of the Parousia, and fall from their original concern for the divine. /…/ The appearance of the Antichrist in godly robes facilitates the falling-tendency of life; in order not to fall prey to it, one must stand ever ready for it."¹⁹ The deception is thus the result of this lack of enactment that undermines the reception of the proclamation.
The temporality encapsulated in the proclamation is not a linear projection of a future event but rather becoming and transformation toward God. For the Thessalonians, the enactment is a rehearsal of the end of the world and a glimpse into eternity. As Heidegger puts it: "The obstinate waiting does not wait for the significances of a future content, but for God. The meaning of temporality determines itself out of the fundamental relationship with Godhowever in such a way that only those who live temporality in the manner of enactment understand eternity."²⁰ Thus the Christian experience of time is qualitative, not quantitative. Its ground is the kairos,²¹ the appointed time, which is not a mere display of chronological coordinates, but the irruption of God into earthly life. How do the various extreme attitudes in the current pandemic fare with this vision of time?²² Those who dismissed the gravity of the crisis rely, perhaps, on the afterlife and consider historical time irrelevant. They certainly believe in the end of the world, but they cannot fully engage themselves in the actual moment; they let it slip away. At the same time, those who have an attitude of exaggerated panic have already decided that this is the end of the world. Their panic seems to lack hope and sobriety.
Heidegger briefly mentions the difference between the first and second letters to the Thessalonians. In the first letter, Paul suggests that the second coming will happen during his lifetime, although he stresses that the day will arrive like "a thief in the night." In the second letter, that imminence weakens, and Paul makes room for even more uncertainty regarding the time. Nevertheless, for Heidegger, this discrepancy is irrelevant. The Christian must prepare regardless of whether the end of the world arrives during his lifetime or after death. In this sense, the increase in uncertainty impacts the life comportment of the Thessalonians. Heidegger notes that the response of the Thessalonians to the proclamation intensifies after Paul's first letter. This intensification starts from controversies over the day of the Lord, which some think is at hand. Those who work tirelessly for salvation are concerned about fulfilling their work and resisting until the final day. Paul reminds them that God calls them to holiness. He opposes them to those who, thinking that the final day has already arrived, ceased working, and are sitting idle. These comportment issues confirm the tight connection between the "When" and the "How" of the parousia in Heidegger's eyes. They do not contradict the first letter but intensify, through increased uncertainty, the tension between anguish and joy that grounds Christian work and sobriety.
In the last part of his life, in his famous interview with the German magazine Der Spiegel, Heidegger reaffirms the Christian idea of readiness for God. This time, he assigns poetry and thinking the task to prepare for such readiness and to stimulate an awakening: "Only a god can save us. The only possibility available to us is that by thinking and poetizing, we prepare a readiness for the appearance of a god."²³ His German fellow Joseph Ratzinger (Pope Benedict XVI) appreciates this call for readiness as a genuine insight into eschatology's depths. The readiness for God is transforming, especially if one is not waiting in front of a void, but "goes forth to meet the One whom it encounters in his signs such that, precisely amid the ruin of its own possibilities, it becomes certain of his closeness."²⁴ To sum up Heidegger's take on Paul's eschatology, the proof of the proclamation rests on the enactment of anguish and joy, representing the two sides of the end of the world: the cataclysm, on the one hand, and the resurrection and renewal on the other hand. The uncertainty of the apocalyptic moment is paramount for an authentic enactment. Those who delude themselves to know when the moment will arrive fall either into despair, false security, or idleness. We have also seen these extremes playing out during the current COVID pandemic when underestimating the danger mixed with political hyperbole.
2 Aquinas' reading of the Letters to the Thessalonians Aquinas shares, I think, Heidegger's moderate eschatological attitude in the reading of the Thessalonians. For Aquinas, too, the end's uncertainty and the authenticity of faith go hand in hand. Not knowing the "When" is part of God's call to humans to renew their life and grow in their faith. Aquinas combats eschatological calculation on several occasions: in his commentaries to Thessalonians, in Summa Theologiae III, supplemental question 88, article 3, and in De potentia, question 5, articles 5 and 6. His insistence on the uncertainty of the second coming is motivated by the same Christian life model that also motivates Saint Paul and Heidegger.
Aquinas holds that the end of the world is a religious datum experienced in faith, not a matter that can be demonstrated by reason: "We should say that we, following the example of the saints, hold that heavenly motion will at some time cease, although one holds this by faith rather than one can demonstrate it by reason."²⁵ The impossibility of fully demonstrating eschatology aligns with Aquinas's complex conception of the soul's immortality, which blends Platonic and Aristotelian anthropology with Christian revelation. As Leo Scheffczyk shows, for Aquinas, the fate of human beings after death is the object of an indication (sign), not proof, a distinction also made by Heidegger.²⁶ When addressing the immortality of the soul, Aquinas refers to Plato's arguments but also speaks about the natural desire of the soul for God. This desire is not proof but a sign of human immortality.²⁷ Whereas the Greek arguments demonstrate the incorruptibility of the soul, humans' very future after death remains, for Aquinas, a matter of indication because it rests on their relationship with God.
Aquinas' approach in his reading of Thessalonians is thus experiential, not theoretical. He distinguishes between theoretical arguments and experiential demonstration, referring to the same Biblical passage quoted by Heidegger, namely 1 Corinthians 2:4. In that passage, as we have seen earlier, Saint Paul differentiates his preaching from the words of man's wisdom and defines it as a "demonstration of the Spirit and of power." Aquinas uses this distinction to explain a passage from 1 Thessalonians, 1:5: "For our gospel did not come to you in word alone, but also in power and in the Holy Spirit and with much conviction." This passage refers to an experiential kind of knowledge that occurs not through arguments but the power of signs, the Holy Spirit's gifts, and the example of a virtuous life. Aquinas often explains one Biblical passage through other passages from different parts of the Bible. Thus in reading this passage, he refers, besides 1 Corinthians, also to several other Biblical passages, building a sort of textual collage: Powerfully, because he came not in loftiness of speech, but in power: and my speech and my message were not in plausible words of wisdom, but in demonstration of the Spirit and power (1 Cor 2:4). For the kingdom of God does not consist in talk but in power (1 Cor 4:20). Now, this may have reference either to the authentication of his preaching or to the manner of his preaching. If it is the first alternative, then Paul's preaching to them was authenticated not by arguments but by the power of signs, and so it is said: the Lord worked with them and confirmed the message by the signs that attended it (Mark 16:20); and by the giving of the Holy Spirit; so Paul says, and in the Holy Spirit. While Peter was still saying this, the Holy Spirit fell on all who heard the word (Acts 10:44). While God also bore witness by signs and wonders and various miracles and by gifts of the Holy Spirit (Heb. 2:4). /…./ But if it is the second alternative, then in power seems to mean showing you a virtuous life. Jesus began to do and teach (Acts 1:1). And in the Holy Spirit who bring things to mind; for it is not you who speak but the Spirit of your Father speaking through you (Matt 10:20).²⁸ The similarity between Aquinas's experiential approach and Heidegger's phenomenological method is striking if we consider Heidegger's controversial critique of Aquinas and Christian Scholastics in general.²⁹ This critique has two main points. First, the Scholastics, especially Aquinas, are guilty of what Heidegger calls onto-theology, namely a type of conceptual system that obliterates the difference between Being and entities, confusing the very event of Being with an entity. Second, they have reduced the Christian message to theoretical thinking elaborated with Aristotelian metaphysical concepts. These concepts, however, cannot capture the novelty of the Christian Revelation and the concreteness of the religious experience. Matters like Jesus' Incarnation or salvation cannot be grasped through Aristotle's naturalistic theoretical metaphysics. "Scholasticism, within the totality of the medieval Christian world of experience, severely endangered precisely the immediacy of religious life, and forgot religion in favor of theology and dogma."³⁰ For several decades, Heidegger and Aquinas scholars have analyzed this critique, often trying to defend Aquinas against Heidegger's accusations.³¹ No matter where one stands in the Heidegger-Aquinas debate, it is evident that in the commentary on the Thessalonians, Aquinas does not employ the reductive theoretical thinking attributed to him by Heidegger. Indeed, his main focus is the experience of the Thessalonians in light of Saint Paul's message of salvation in the life to come.
Aquinas starts from the tribulations that Thessalonians undergo because of their faith. The commentary opens with a prologue about the story of Noah's ark in Genesis, which he compares with Thessalonians' tribulations: "And the flood was forty days upon the earth, and the waters increased, and lifted up the ark on high from the earth" (Genesis 7:17). For Aquinas, this story is an allegory in which the ark signifies the Church, and the flooding waters signify tribulations. As the ark is lifted up by the raising waters, so too the Church raises through human tribulations: Therefore, the Church is not destroyed but uplifted: first, by lifting the mind to God, as is clear from Gregory: the evil things which bear down upon us here compel us to go to God. And in their distress they seek me (Hos 6:1). Second, the Church is raised up through spiritual consolation: when the cares of my heart are many, your consolations cheer my soul (Ps 94:19); for as we share abundantly in Christ's sufferings, so through Christ we share abundantly in comfort, too (2 Cor 1:5). Third, the Church is upraised by increasing the number of the faithful; for God has spread the Church in time of persecution; but the more they were oppressed, the more they multiplied and the more they spread abroad (Exod 1:12).³² Noah's ark is an appropriate allegory for the Thessalonians' situation because they stood firm through (and despite) many tribulations.
In his commentary, Aquinas highlights Paul's comforting of the Thessalonians. Like Paul, he probably intended to comfort his contemporaries about the grace of Christ in a Church facing present and future hardships.³³ As Randall Smith observes, Aquinas' Biblical commentaries are not merely exegetical exercises but grow from the practice of preaching.³⁴ Showing that Paul sends Timothy to encourage the Thessalonians tary, 199. and strengthen them in their tribulations, Aquinas interprets his comforting in two directions. First, it shows that their tribulations are part of a divine plan aiming at salvation: "God ordained that you shall enter into heaven through tribulations."³⁵ Second, it entails predictions concerning the future because anticipated difficulties are less harmful. Paul has already predicted that Thessalonians will undergo tribulations: "for even when we were with you, we foretold you that we should suffer tribulations." (I Thessalonians, 3:4) The advice of Paul is to treat prophecies with discernment: "Do not despise prophecies but prove all things: hold fast that which is good. From all appearance of evil, refrain yourselves" (I Thessalonians,5:20). Prophecy is, for Aquinas, a divine gift that one must exercise if one has it or follow if one hears it from somebody else. However, in the latter case, we need prudence, as we should carefully discern between good and evil. Aquinas includes here a reference to Romans, where Paul calls this kind of discernment "reasonable service:" Then when he says, but prove all things, he shows how they ought to behave towards everything; and one piece of advice is that they should make use of discretion in all matters. Your reasonable service (Rom 12:1). In this matter there should be a careful examination of the election of the good, and the rejection of the evil.³⁶ At the same time, although he supports prophecies, Aquinas explains why human beings cannot know when the end of time will arrive. We can foreknow something either through natural knowledge or by revelation. In the first case, we foreknow future things through knowledge of natural causes. By knowing the cause, we can foreknow its effect. However, the world's end will come by no created cause but exclusively through a divine action. This is because heavenly motion, unlike the motion of elementary material substance, has its active source outside of itself, namely a separate substance: "But a heavenly body by its movement does not arrive at a place toward which it inclines by its nature, since any place is the starting point and end of its motion."³⁷ Furthermore, God moves the heavens as an instrument. Thus the end of the heavenly motion is something outside of itself, not the perfection of itself. The natural thing nobler than the heavenly body is the rational soul. It follows that the heavenly motion is not for its sake but for the sake of filling up the fixed multiplicity of rational souls. Once this number obtains, the heavenly motion will cease. In conclusion, only God can know when the end of the world will come because He causes the heavenly bodies to move, and He decides the number of rational soulsin particular of the elect.
In the second case, although he grants the possibility of knowing the "When" of the apocalypse through revelation, Aquinas thinks it does not fit humans to receive such revelation. First, since the world will end when the number of the elect is complete, this moment is only known by the one who fulfills the divine predestination of humans, namely Jesus Christ. Second, we should distinguish between the temporal expectation of the first coming and that of the second coming of Christ. At His first coming, Christ came secretly. Thus, believers needed to know beforehand the time of His arrival so that they could recognize Him. In contrast, at the second coming, He will come openly so that there will be no error in recognizing Him.
Aquinas admits that there are several indications about the end of the world in the Bible, but he denies that they point to a precise moment. For instance, the syntagms "last days" and "last hour" that appear in the Scripture indicate only the last state of the world, but they do not offer a temporal datum. Indeed, we are now in the last state of the world regarding the progressive succession of laws. Following the Old Law stage, the New Law stage began at Jesus' Incarnation and will last until the end of the world. There will be no other law because no stage of the present life can be more perfect than the New Law stage. The New Law pertains to Jesus the Redeemer, by whom we will all be saved.³⁸ However, although we know that the New Law is the last stage, and we live in it, we do not know precisely when this stage will end. Moreover, the signs that indicate that the world will end do not manifest the fixed time of the moment. Intense calamities will precede the second coming, but it is difficult to decide how many of them and what intensity will point to the precise moment. Even in the early Church, the number and amplitude of calamities were extremely high, making some believe that the end of the world was near. Thus, the measure of signs indicating the final moment is not revealed to us: "But it cannot be manifest to us what is the measure of these signs about the end of the world."³⁹ Finally, and most relevant to our discussion, Aquinas shows that it does not fit humans to foreknow the final moment by revelation because that would negatively affect their waiting. First, the belief that the day of the Lord is at hand can encourage deception. It allows liars to claim that they are Christ. It also makes men vulnerable to demons who pretend to be Christ.
Second, the temporal stretching of the expectation can harm the authenticity of faith. If we believe that Christ will come later, we can fall prey to self-indulgence. Therefore, the expectation should have a certain sense of urgency so that Christians do not linger too much on worldly affairs and prepare for the end. In this sense, Aquinas quotes 1 Corinthians 7:31: "Let those who enjoy this world be as if they do not enjoy it, since the form of this world is passing away." At the same time, if we believe that Christ will come quickly, we can fall into despair. As time passes by and nothing happens, people might doubt the Scripture. This second belief in the imminent arrival is for Aquinas the most dangerous: "But of two who say they know, the statement of the one who says that Christ will come shortly, or that the end of the world is imminent, is more dangerous since this can be an occasion to lose all hope that it will come if it will not occur at the time when it is predicted to happen."⁴⁰ Aquinas' concerns about the authenticity of faith match Heidegger's concerns. In his description of the inauthentic waiting for the end of the world, we can recognize, I believe, the harmful attitudes frequent in the current pandemic, ranging from self-assured negligence to destructive panic. The way Thessalonians cope with their tribulations and the comfort that Saint Paul gives them, to encourage them to persist in their enduring faith, oppose these attitudes.
In his commentaries on Thessalonians, Aquinas associates the growth in faith of Saint Paul and the Thessalonians with the expectation of the parousia, whose day is and must remain uncertain. Like Heidegger, Aquinas highlights Paul and the Thessalonians' tribulations, which make their endurance in faith even more remarkable. For him, too, Paul and the Thessalonians are a model for how to wait for the end of the world. Like Heidegger's phenomenological description, Aquinas shows that the Thessalonians enact the kerygma. Their strength of faith, their joy in the Holy Spirit, and their resilience through tribulations are intimations of the world's renewal, expected with anxiety and hope. Their preparation for the end of the world starts with awareness of uncertainty and amounts to perseverance in work. For Aquinas, even more than for Heidegger, the core of Paul's letters is the Thessalonians' state of grace. While Heidegger insists on the anguish of life,⁴¹ Aquinas focuses on the virtues and gifts of the Thessalonians, which are signs of their divine election. His commentary highlights their blessed life as a mirror of the coming, new life. This approach attests to the crucial role of beatitude, virtues, and gifts in Aquinas' eschatology.⁴² Beatitude is the fulfillment of the human desire to be united with God. Humans can only attain the vision of God in the afterlife, but their faith, work, and virtue grow from a relationship with Christ in this earthly life. The eschatological calculation bypasses this relationship. Those placing the end at hand are either looking for a quick fix or cheering for the annihilation of evil humanity. Neither version is the right way of waiting for the end of the world. The door into the afterlife is a relationship with Christ, who already came once and sacrificed Himself for humanity's redemption. "I am the door. If anyone enters by Me, he will be saved and will go in and out and find pasture. The thief does not come except to steal, and to kill, and to destroy. I have come that they may have life, and that they may have it more abundantly." (John, 10: 9-10) Thomas Aquinas," 225. Aquinas evaluates the proofs that the Thessalonians are elected by God against the backdrop of Paul's preaching of the kerygma. Both issues are related, as the contents of Paul's preaching emerge in their enactment in the life of Paul and the Thessalonians. Aquinas analyzes two types of proof: the evidence in preaching and the evidence in faith.⁴³ As in Heidegger, these proofs pertain not to a deductive demonstration but a demonstration of spirit and power. They are indications and signs.
Paul's preaching relies on his relationship with the Thessalonians,⁴⁴ which is grounded in faith. For this reason, Aquinas stresses that the success of Paul's preaching is proof that the Thessalonians are elected. He explains Paul's claims speaking in his voice: "And I know this because God granted me abundant evidence of this in preaching, that is, that those to whom I preach are chosen by God. For God gives them the grace to listen profitably to the word preached to them; or else, God gives me the grace to preach rewardingly to them."⁴⁵ Paul's faith and virtue practiced while he lived among the Thessalonians represent the enactment of his message. This faithful and virtuous preaching is all the more significant as it persevered against the suffering and shame that afflicted Paul before he met the Thessalonians: "But having suffered many things before and having been shamefully treated, (as you know) at Philippi, we had confidence in our God, to speak unto you the Gospel of God in much carefulness." (I Thessalonians 2:2). The term "carefulness" signals for Aquinas the doctrinal soundness of Paul's preaching. This soundness is not tainted by errors or deceit. Indeed, Paul's preaching does not bring any pleasant promises nor any flattery. It does not pursue his glory or personal favors, as was the case with those who preached heretically to the Thessalonians. On the contrary, Paul acknowledges that he might have been burdensome and became little, that is, humble.
The second evidence for the election of the Thessalonians regards their authentic faith. Just like Paul, the Thessalonians were virtuous and faithful despite tribulations. Their misfortune did not preclude them from receiving the joy of the Holy Spirit. This blend of suffering and joy, also highlighted by Heidegger, is vital for the eschatological experience. The Thessalonians follow Paul's life testimony by imitating him. However, as Aquinas points out, they did not imitate him in his human failings, but in his fellowship of Christ: "And you became followers of us and of the Lord." (I Thessalonians 1:6) Aquinas associates this passage with another passage from Paul's first letter to Corinthians: "be imitators of me as I also am of Christ." (I Corinthians 4:16) Faith, labor, charity, and hope are the pillars of this experience, which becomes exemplary to all believers. Those from Macedonia and Achaia have imitated the Thessalonians, and their faith has gone forth well beyond their neighbors' confines.
Aquinas warns though that the exemplary life of the Thessalonians is not a finished task: "Paul remarks: although you are good, nevertheless you shall grow markedly and improve through the repeated practice of the precepts and counsels. /…/ For charity is so encompassing that there will always be something left through which one might improve himself."⁴⁶ In this sense, Aquinas explains the temporality of the Christian life in terms of the overflowing character of spiritual goods. Such goods have an abundance that calls for progress in virtue, not because every step is imperfect, but because spiritual fullness is never exhausted in a given state: Why? Because spiritual goods grow exceedingly. For such goods are not safely guarded unless a man progresses in them. Now among these gifts of God the first is faith, through which God dwells in us, and our progress in faith is in connection with the understanding. /…/ And so a man progresses through knowledge, devotion, and adherence. The second is charity, through which God is present in us by his effect. /…/ And for this reason he says, and the charity of every one of you towards each other abounds.⁴⁷ The progress in the faith of the Thessalonians requires patience and perseverance. However, some fall prey to the misunderstanding of the kerygma. Aquinas thinks that this misunderstanding also comes from Paul's use of the expression "we who are alive" in the first letter, which gives the impression that Paul and the Thessalonians will be alive at the second coming, and thus that parousia will happen during their lifetime: "For this we say unto you in the word of the Lord, that we who are alive, who remain unto the coming of the Lord, shall not prevent those who have slept." (I Thessalonians, 4:15) Aquinas shows that the expression does not mean Paul and his contemporaries but means whoever will be alive at the second coming.⁴⁸ By that, Paul assures the Thessalonians that the dead will not have a delayed or a "lesser" resurrection than those who will be alive at the second coming: "But he is not talking at present about himself and his contemporaries, but about those who shall be found alive in the time of Christ's coming. We who remain, that is, those who shall be left after the persecution of the Antichrist, shall not prevent those, that is, those who are living shall not receive their consolation first."⁴⁹ Aquinas also combats the idea that those who will be alive at the second coming will remain alive. They will, on the contrary, first die and then resurrect. Because the time between their death and their resurrection will be very short, they are regarded as living.⁵⁰ Besides, they will resurrect at the same time as the ones who died before: "For when the Lord does come, first those who are found alive will die and then, immediately together with those who have died before, they will rise up and be taken up into the clouds to meet Christ, as Paul says."⁵¹ Aquinas makes efforts, even more than Heidegger, to show that we cannot know the moment of the end. This uncertainty comes from the difference between divine and human knowledge but also the experiential nature of the eschatological expectation. The human calculation of the final moment results in inauthentic attitudes toward the parousia, which impede the believer from facing her suffering and faithfully working and worshiping God. This lesson holds also in our current pandemic. Those who thought how the pandemic fares in apocalyptic calculation remained trapped in negligence or panic. They either failed to take it seriously (the end is always later, it cannot be now) or panicked to the point of self-destruction (the end is here). The right attitude, described, as we have seen, also by N.T. Wright in his recent God and the Pandemic, is to reckon with one's tribulations and still live the present moment with labor and hope.
Conclusion
The last moment's uncertainty qualifies the authentic waiting for the second coming of Christ against the inauthentic expectation of calculative reasoning. Indeed, Aquinas' and Heidegger's most significant concern in reading Thessalonians seems to be not so much the lack of faith, but rather the authenticity of faith. As we have seen, this authenticity is opposed to false security or destructive panic. It entails sobriety, work, and virtuous actions while reckoning with suffering. Instead of relying on a precise date, the Christian must always be ready through a personal relationship with Christ. Aquinas and Heidegger offer, hence, an existential view on eschatology, in which the earthly human existence participates in the future life to come and has, so to speak, a foretaste of it. In light of this view, the extreme attitudes playing out during the current pandemic (underestimation or panic) seem to rest on deficient eschatological expectations.
Waiting for the end of the world engages a mode of rationality different from calculative reasoning. The rationality required for the authentic expectation is lucidity shaped by apocalyptic anguish and salvation's joy. Therefore, dealing reasonably with a crisis means taking it seriously but at the same time keeping the 48 Eleonore Stump remarks that Aquinas is not interested in adopting philological and historically critical tools in interpreting the Bible and focuses on developing insights and arguments of philosophers and theologians. See Stump, "Biblical Commentary and Philosophy," 256. In this case, although he thoroughly comments the Biblical text line by line, Aquinas works on an eschatology that reflects the Christian life and Church doctrine. promise of salvation in mind. In the current pandemic, we need to find a sober middle way between "It is just the flu" and panic. We are called, here and now, to reasonable service, a term Aquinas borrowed from Paul. Enduring through the pain of the pandemic, we must continue working and leave the certitude of the end to the One who redeems us. | 10,053.4 | 2021-01-01T00:00:00.000 | [
"Philosophy"
] |
2-Hydroxy-N-(2-hydroxyethyl)benzamide
In the title compound, C9H11NO3, a derivative of salicylamide, the intracyclic C—C—C angles span the range 117.96 (13)–121.56 (14)°. An intramolecular O—H⋯O hydrogen bond occurs. In the crystal, intermolecular O—H⋯O and N—H⋯O hydrogen bonds occur and C—H⋯O contacts connect the molecules into a three-dimensional network. The closest intercentroid distance between two π-systems is 3.8809 (10) Å.
Related literature
For the crystal structure of N-acetylsalicylamide, see: Vyas et al. (1987). For graph-set analysis of hydrogen bonds, see : Etter et al. (1990); Bernstein et al. (1995). Structures containing similar dihedral angles were retrieved from the Cambridge Structural Database (Allen, 2002). For the use of chelating ligands in coordination chemistry, see: Gade (1998).
Comment
Chelate ligands have found widespread use in coordination chemistry due to the enhanced thermodynamic stability of resultant coordination compounds in relation to coordination compounds exclusively applying comparable monodentate ligands (Gade, 1998). Combining different donor atoms, a molecular set-up to accomodate a large variety of metal centers of variable Lewis acidity is at hand. In this aspect, N-(2-hydroxyethyl)-salicylamide seemed of interest due to its possible use as a strictly neutral or, depending on the pH value, as an anionic or cationic ligand. In addition, due to the set-up of its functional groups, it may act as mono-, bi-, tri-or even tetradentate ligand offering the possibility to create chelate rings of various size. The intriguing combination of a secondary amino group, a keto group as well as an aliphatic and an aromatic hydroxyl group classifies the title compound as a highly versatile ligand. To enable comparative studies in terms of bond lengths and angles in envisioned coordination compounds, we determined the molecular and crystal structure of the title compound. Information about the crystal structure of N-acetylsalicylamide (Vyas et al., 1987) is available in the literature.
Due to the possible resonance between the amide group and the aromatic system, a projection of the molecule shows nearly all atoms to reside in the same plane. The only marked exception from this finding is the aliphatic hydroxyl group which adopts a staggered conformation with respect to the plane of the phenyl moiety. Intracyclic C-C-C angles span a range from 117.96 (13)-121.56 (14) °. The least-squares planes defined by the carbon atoms of the aromatic system on the one hand and the CON-motif of the amide group on the other hand intersect at an angle of 11.71 (20) ° (Fig. 1). This finding is in good agreement with values reported for other salicylic acid-derived amides whose crystal structural data have been deposited with the Cambridge Structural Database (Allen, 2002;Fig. 2).
In the crystal structure, intra-as well as intermolecular hydrogen bonds are obvious. The intramolecular hydrogen bond is formed between the hydrogen atom of the hydroxyl group bonded to the aromatic system and the oxygen atom of the keto group, with the latter one also serving as acceptor for one intermolecular hydrogen bond stemming from the aliphatic hydroxyl group. The amino group acts as donor in a hydrogen bond applying the aliphatic hydroxyl group's oxygen atom as acceptor. Apart from these classical hydrogen bonds, C-H···O contacts are observed whose range falls by more than 0.1 below the sum of van-der-Waals radii of the atoms participating. These contacts are manifest between the CH group in ortho position to the hydroxyl group on the aromatic system and the O atom of this hydroxyl group in the neighbouring molecule thus connecting the molecules to centrosymmetric dimers. A second C-H···O contact can be observed between one of the aromatic CH groups and the O atom of the aliphatic hydroxyl group (Fig. 2). In terms of graph-set analysis (Etter et al., 1990;Bernstein et al., 1995), the descriptor for the classical hydrogen bonds is S(6)C 1 1 (7)R 2 2 (10) on the unitary level while a description of the C-H···O contatcs necessitates a C 1 1 (8)R 2 2 (8) descriptor on the same level. In total, the molecules are connected to a three-dimensional network. The closest intercentroid distance between two π-systems was found at 3.8809 (10) Å.
The packing of the title compound is shown in Figure 4.
supplementary materials sup-2 Experimental
The compound was obtained commercially (Aldrich). Crystals suitable for the X-ray diffraction study were taken directly from the provided compound.
Refinement
Carbon-bound H atoms were placed in calculated positions (C-H 0.95 Å for aromatic C atoms and C-H 0.99 Å for methylene groups) and were included in the refinement in the riding model approximation, with U(H) set to 1.2U eq (C).
The H atoms of the hydroxyl groups as well as the amine group were located on a difference Fourier map and refined with individual thermal parameters. Fig. 1. The molecular structure of the title compound, with atom labels and anisotropic displacement ellipsoids (drawn at 50% probability level). | 1,108.8 | 2011-07-23T00:00:00.000 | [
"Chemistry"
] |