A Cluster-Based Energy Optimization Algorithm in Wireless Sensor Networks with Mobile Sink

To address the high network energy consumption and data delay induced by a mobile sink in wireless sensor networks (WSNs), this paper proposes a cluster-based energy optimization algorithm called Cluster-Based Energy Optimization with Mobile Sink (CEOMS). CEOMS first constructs an energy density function for the network nodes and uses it to assign sensor nodes with higher remaining energy as cluster heads. A motion performance function describing the directed movement of the mobile sink is also constructed to increase the probability that remote sensor nodes are assigned as cluster heads. Second, based on the Low Energy Adaptive Clustering Hierarchy (LEACH) architecture, the energy density function and the motion performance function are introduced into the cluster head selection process to avoid purely random cluster head assignment. Finally, an adaptive adjustment function based on the percentage of dead nodes and the density of all surviving nodes in the network is designed to improve the adaptability of cluster head selection. Simulation results show that the proposed CEOMS algorithm improves the self-adaptability of cluster head selection, extends the network lifetime, reduces data delay, and balances the network load.

Introduction
Wireless sensor networks (WSNs) are composed of thousands of sensor nodes characterized by small size, low cost and low power consumption. WSNs are therefore widely used in military reconnaissance, climate and environment monitoring, natural disaster warning and treatment, intelligent medical technology and other fields [1][2][3]. According to the type of sensor nodes, WSNs can be divided into static WSNs and mobile WSNs [4,5]. In a static WSN, the location of each sensor node is fixed once it is deployed. Because energy-limited nodes near the static sink may be assigned as cluster heads frequently, their energy consumption is high, which can cause these nodes to die prematurely. In a mobile WSN, sensor nodes can move according to a specific mission, so a mobile sink may be introduced to relieve the energy consumption of the nodes near the sink [6][7][8][9][10]. If the links between the mobile sink and the other nodes are assumed unchanged within a time slot, a mobile WSN can be treated as a static WSN during that slot, so efficient and energy-saving routing algorithms designed for static WSNs may be extended to mobile WSNs [11][12][13]. The Low Energy Adaptive Clustering Hierarchy (LEACH) protocol proposed by Heinzelman et al. [14,15] is the classic hierarchical routing algorithm. It first divides the network nodes into different clusters and then periodically rotates the cluster heads to balance the network energy consumption, thereby prolonging the network life cycle. In addition, the Hybrid Energy-Efficient Distributed clustering approach (HEED) [16], the Stable Election Protocol (SEP) [17], and LEACH-Centralized (LEACH-C) are further representative clustering protocols.

Related Work
Since a mobile sink may change the data transmission topology of a mobile WSN, directly applying traditional routing algorithms designed for static WSNs may deteriorate the energy balance of the network and shorten its life cycle. Several routing protocols have therefore been proposed that use the mobile sink to balance the energy consumption of the network.
According to the distance to the mobile sink or the residual energy, a node may be assigned as a rendezvous point that collects the data of nearby nodes and then communicates with the mobile sink [19,20]. Using a mobile data collector and a hierarchical clustering technique, Zhang et al. proposed a data collection routing algorithm that assigns the node with the maximum density as cluster head to avoid long-distance transmission energy consumption and to balance network resources [21]. Zhu et al. proposed a data collection algorithm based on a tree-cluster technique, which selects rendezvous points and sub-rendezvous points according to the remaining energy of the nodes and the number of hops, respectively, and collects the data of each rendezvous point with the mobile sink [22]. Although using a mobile sink prolongs the network lifetime, the above algorithms may introduce mutual interference during data transmission, resulting in data deviation and data delay. Moreover, randomly selecting cluster heads may cause some nodes to be assigned as cluster heads frequently, leading to their premature death and degrading the performance of the whole network. To tackle the problems caused by the mobile sink, such as uneven cluster head distribution, data redundancy and delay, researchers have proposed several clustering routing algorithms. Jing et al. proposed an improved LEACH protocol (ILEACH) [23] that selects cluster heads according to the remaining energy of nodes, thereby increasing the selection probability of nodes with high remaining energy. Sharma et al. proposed another improved LEACH algorithm called Distance Based Cluster Head (DBCH) [24], which selects cluster heads based on criteria such as the distance between a node and the sink, the maximum and minimum distances between nodes and the sink, and the remaining energy. Integrating factors related to the energy balance of WSNs, such as the remaining energy of a node, the distance between the node and the sink, and the average distance of all nodes to the sink, Darabkh et al. proposed an improved algorithm called LEACH with Distance-Based Thresholds (LEACH-DT) [25]. Clearly, making reasonable use of the location and velocity information of the mobile sink when selecting cluster heads can help balance the energy consumption of WSNs [26]. Using a mobile sink, Wang et al. proposed a stable election protocol based on an improved SEP [11,17]. This algorithm first classifies the sensor nodes and then constructs a cluster head selection threshold function based on the remaining and initial energy of the nodes. Kushal et al. clustered the sensor nodes according to their locations and then assigned the cluster head of each cluster according to the remaining energy of the nodes and the distances between them [27]. Moreover, using the location information broadcast by the mobile sink, Kushal's method constructs the shortest multi-hop path between each cluster head and the mobile sink to reduce the energy consumption of the WSN. The above algorithms use the remaining energy and location information of the nodes to balance the distribution of cluster heads, thus extending the life cycle of WSNs and reducing data transmission delay. However, because they assume that the velocity of the mobile sink is constant, they may be unsuitable when the mobile sink moves at a variable speed.

Contributions
In summary, current energy-saving algorithms pay little attention to the energy consumption and transmission delay caused by the movement of the mobile sink in WSNs.
This paper comprehensively considers multiple factors related to the energy balance of WSNs, including the remaining energy rate of the nodes, the node density, the changing location of the mobile sink and the node mortality rate, and proposes a new network energy optimization algorithm called Cluster-Based Energy Optimization with Mobile Sink (CEOMS). The contributions of this paper are as follows: (1) An energy density function containing the residual energy rate and the node density in the neighborhood of each sensor node is constructed to assign nodes with high remaining energy as cluster heads. Compared with traditional clustering algorithms, the proposed algorithm fully considers the remaining energy of the network nodes and their neighborhood density, thus extending the network life. (2) A motion performance function containing the velocity of the mobile sink and the distance between the mobile sink and a node is constructed to increase the probability that remote nodes are assigned as cluster heads. By fully considering factors related to the network energy balance, such as the moving distance and direction of the mobile sink, the proposed algorithm balances the network energy better than traditional clustering algorithms. (3) Cluster head selection is based on two independent functions, the energy density function and the motion performance function. Moreover, an adaptive adjustment function is introduced to adjust the weight parameters of these two functions. The three functions constitute the adaptive cluster head selection threshold, which avoids the premature death of nodes and extends the network life.

Paper Organization
The remainder of this paper is organized as follows. Section 2 gives the network model of the algorithm and the motion model of the mobile sink; Section 3 elaborates the principle and implementation of the CEOMS algorithm; Section 4 verifies the feasibility and effectiveness of the CEOMS algorithm through simulation; Section 5 concludes the paper.

System Model
Based on the classical LEACH clustering routing protocol, this section constructs the system model of a WSN with a mobile sink and presents the motion model of the mobile sink.

Network Model
A WSN composed of sensor nodes, a mobile sink, satellites, the Internet, a control center and users is shown in Figure 1. The figure is a typical example of a WSN applied to smart urban planning [28,29]. The sensor nodes are randomly distributed in the monitoring area, and the mobile sink moves according to a specific motion model and collects data from all sensor nodes. However, if all sensor nodes send data directly to the mobile sink, the nodes farther away from the mobile sink consume more energy. Therefore, this paper proposes a distributed network model for data collection based on the LEACH architecture. As can be seen from Figure 1, all sensor nodes are divided into different clusters. In each cluster, one sensor node acts as the cluster head, collecting and processing the monitoring data of the sensor nodes in the cluster. The cluster heads then send the data to the mobile sink, and the mobile sink finally forwards the data to the control center and users via satellites and the Internet. Moreover, six assumptions are made about the WSN model: (1) The position of each sensor node, which has a unique ID, is fixed once deployed. (2) Sensor nodes are equipped with GPS devices and are location-aware. (3) Each sensor node has limited energy, and all nodes have the same initial energy.
(4) The propagation channels are symmetric, i.e., two nodes can communicate with each other using the same transmission power. (5) The mobile sink has unlimited energy, powerful information processing ability and data storage capacity, and moves according to a specific motion model. (6) Each sensor node has a fixed number of transmission power levels, and there is no error in signal transmission [16].

Motion Model of Mobile Sink
Assume that, in a two-dimensional monitoring area, the motion of the mobile sink follows the state equation X(t + 1) = F X(t) + Γ W(t), where X(t) = [x(t), ẋ(t), y(t), ẏ(t)]^T is the state of the mobile sink at time t; x(t), ẋ(t), y(t), ẏ(t) are the coordinates and speeds of the mobile sink in the x and y directions at time t; F is the state transition matrix; Γ is the noise coefficient matrix; W(t) ∼ N(0, Q(t)) is Gaussian noise with covariance Q(t); t is the sampling time; and ω is the angular velocity. The mobile sink moves according to this motion model and collects the monitoring data of all sensor nodes [1]. When the mobile sink has collected all the data of the sensor nodes, one round is completed. As shown in Figure 2, at the beginning of the rth round, the mobile sink is located at (x(r), y(r)) and starts collecting monitoring data. When the mobile sink has collected the data from all nodes, the rth round completes and the mobile sink starts the (r + 1)th round of data collection, as shown in Formula (3). As can be seen from Figure 2, the position of the mobile sink can be described as (x(r + 1), y(r + 1)) = (x(r) + Δx, y(r) + Δy), where (x(r + 1), y(r + 1)) is the position of the mobile sink at the beginning of the (r + 1)th round and (Δx, Δy) is the change in position of the mobile sink from the rth round to the (r + 1)th round.

CEOMS Algorithm
In this section, for each sensor node, some nearby nodes are selected according to the energy consumption model to form the neighborhood set of that node, and an energy density function is designed over this set. The cluster head selection threshold is then determined by the energy density function and a motion performance function constructed from the motion parameters of the mobile sink. Finally, the cluster head selection threshold is adaptively adjusted according to the node mortality.

Construction of Energy Density Function Based on Neighborhood Nodes
Assume that N sensor nodes are randomly deployed in a monitoring area and that K_opt is the number of nodes to be selected as cluster heads in each round. The desired percentage of cluster heads among all nodes can be described as P_opt = K_opt / N.

Construction of Sensor Node Neighborhood Set
As is well known, the energy consumption of data communication is higher than that of sensing and data processing. This paper uses the first-order radio model to describe the energy consumption of a node, as shown in Figure 3 [30]. Assuming that each packet contains k bits of data, the energy consumption of transmitting k bits, E_TX, and of receiving k bits, E_RX, can be expressed as

E_TX(k, d) = E_TX-elec · k + ε_fs · k · d^2, if d < d_0
E_TX(k, d) = E_TX-elec · k + ε_mp · k · d^4, if d ≥ d_0
E_RX(k) = E_RX-elec · k

where E_TX-elec, E_RX-elec and E_TX-amp are the energy consumption of the transmitter electronics, the receiver electronics and the transmit amplifier, respectively; ε_fs and ε_mp are the amplifier coefficients of the free-space and multi-path models, respectively; E_elec is the energy consumed by a node to process 1 bit of data; k is the number of data bits; d is the distance between the transmitter and the receiver; and d_0 is the critical communication distance. Clearly, the energy consumption grows rapidly (with the second or fourth power of the distance) as the distance between the transmitter and the receiver increases.
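As a concrete illustration of the first-order radio model above, the following minimal Python sketch computes E_TX and E_RX; the numerical parameter values are illustrative assumptions, not the simulation settings used in the paper, and the critical distance d_0 is taken as sqrt(ε_fs/ε_mp), the usual convention for this model.

```python
import math

# Illustrative first-order radio model (parameter values are assumptions).
E_TX_ELEC = 50e-9      # J/bit, transmitter electronics
E_RX_ELEC = 50e-9      # J/bit, receiver electronics
EPS_FS = 10e-12        # J/bit/m^2, free-space amplifier coefficient
EPS_MP = 0.0013e-12    # J/bit/m^4, multi-path amplifier coefficient
D0 = math.sqrt(EPS_FS / EPS_MP)   # critical communication distance d_0

def e_tx(k_bits: int, d: float) -> float:
    """Energy to transmit k bits over distance d (free-space model below
    d_0, multi-path model at or beyond d_0)."""
    if d < D0:
        return E_TX_ELEC * k_bits + EPS_FS * k_bits * d ** 2
    return E_TX_ELEC * k_bits + EPS_MP * k_bits * d ** 4

def e_rx(k_bits: int) -> float:
    """Energy to receive k bits."""
    return E_RX_ELEC * k_bits
```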
Thus, the communication range of node s_i can be constrained to be smaller than d_0 to save network energy. The neighboring-node threshold T_{s_i s_j} of node s_j, described in Formula (8), is therefore introduced to determine whether node s_j should be placed in the neighborhood set of node s_i, where Θ is the neighborhood region of node s_i and d_ij is the distance between node s_i and node s_j. Nodes located in the neighborhood region Θ of node s_i join the neighborhood node set N, as shown in Figure 4.

Construction of Energy Density Function
To increase the probability that nodes with high remaining energy are assigned as cluster heads, the neighborhood remaining energy rate f_e(s_i) is introduced in Formula (9), where E_r(s_i) is the remaining energy of sensor node s_i, n is the number of neighborhood nodes of sensor node s_i, and E_avg is the average remaining energy of the neighborhood nodes of sensor node s_i. To describe the relation between the energy consumption of a cluster head and the node density inside the cluster, the neighborhood node density function f_ρ(s_i) is also introduced. Combining f_e(s_i) and f_ρ(s_i), the energy density function f_eρ(s_i) of node s_i is described by Formula (12).

Construction of Motion Performance Function
Although the positions of the sensor nodes remain unchanged once they are randomly deployed, the distance between a sensor node and the mobile sink changes as the mobile sink moves, and the energy consumption of the node changes accordingly, as shown in Figure 5. In Figure 5, when the mobile sink is located at (x(r − 1), y(r − 1)), the nodes near the mobile sink consume less energy during data transmission. When the mobile sink moves to (x(r), y(r)), the relative distance between each sensor node and the sink changes, and so does the energy consumption of the nodes. This paper introduces the relative distance Δd_{s_i} to describe the change in distance between a node and the mobile sink, where (x_{s_i}, y_{s_i}) are the position coordinates of sensor node s_i and d(r), d(r − 1) are shown in Figure 6. Moreover, the motion performance function f_d(s_i) described in Formula (16) is introduced to normalize Δd_{s_i}. Since f_d(s_i) is positively related to Δd_{s_i}, f_d(s_i) increases as Δd_{s_i} increases and decreases as it decreases.

Construction of Adaptive Adjustment Function
Now f_eρ(s_i), f_d(s_i) and P_opt are combined to construct the initial cluster head selection threshold T(s_i) described in Formula (17), where α, β, γ are the weight parameters of T(s_i).

Data Transmission
Once the initial cluster head selection threshold T(s_i) is determined, the LEACH protocol is used to cluster the nodes and to transmit data. The details are shown in Figure 7, where the entire process of cluster construction and data transmission is divided into four stages. In the first stage, multiple sensor nodes are randomly deployed in the WSN monitoring area, and the positions of the nodes remain unchanged, as shown in Figure 7a. In the second stage, T(s_i) of each node is compared with T_rand(s_i), a random number uniformly distributed in [0, 1]. As described in Formula (18), if T_rand(s_i) < T(s_i), the cluster head indicator T_c of the node is set to 1 and the node is put into the cluster head set C; otherwise, T_c is set to 0 and the node is put into the non-cluster-head set. The details are shown in Figure 7b.
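A minimal Python sketch of the stage-two election step described above follows: each node draws a uniform random number T_rand(s_i) and becomes a cluster head if the draw falls below its threshold. The function and variable names are hypothetical, and the threshold itself is assumed to be supplied by a callable.

```python
import random

def elect_cluster_heads(nodes, threshold_of):
    """Stage-two election sketch. `threshold_of(node)` is assumed to return
    the node's cluster head selection threshold T(s_i) for the current round."""
    cluster_heads, members = [], []
    for node in nodes:
        t_rand = random.random()            # T_rand(s_i), uniform in [0, 1]
        if t_rand < threshold_of(node):     # node becomes a cluster head
            cluster_heads.append(node)
        else:                               # node remains a cluster member
            members.append(node)
    return cluster_heads, members
```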
In the third stage, each cluster head broadcasts a message to all nodes, and each node joins the corresponding cluster according to the strength of the received signal, as shown in Figure 7c. When all sensor nodes have joined a cluster, each sensor node announces its choice using the carrier sense multiple access (CSMA) MAC protocol. Moreover, the mobile sink can act as the cluster head for the nodes in its vicinity. Each cluster head then creates a time division multiple access (TDMA) schedule for its cluster members. In the fourth stage, the data transmission process of the WSN proceeds as shown in Figure 7d. The sensor nodes generate monitoring data in each round and send the data to their cluster heads within the allocated transmission time. Cluster members are dormant outside their allocated transmission time, while the cluster heads remain active to collect the data of all nodes in the cluster. Once a cluster head has collected all the data, it fuses the data and forwards the fused data to the mobile sink. The current round ends once all data have been collected by the mobile sink.

Construction of Adaptive Adjustment Function Based on Node Death Percentage
Since the energy consumption of the sensor nodes increases with the working time of the WSN, the number of surviving sensor nodes decreases over time. Thus, the parameters of the cluster head selection threshold should be adjusted to balance the energy of the WSN. This paper embeds an adjustment function g(P_dead), described in Formula (20), into the sigmoid function [31] described in Formula (19), where P_dead is the percentage of dead nodes, N_dead is the number of dead nodes, and N is the total number of sensor nodes. The number of dead nodes increases as the network runs, and the curve of g(P_dead) is shown in Figure 8. The adaptive cluster head selection threshold T(s_i) described in Formula (22) is then designed by combining P_opt, f_eρ(s_i), f_d(s_i) and g(P_dead). From Formula (22), it can be seen that the adaptive cluster head selection threshold T(s_i) is positively correlated with the desired percentage of cluster heads P_opt, the motion performance function f_d(s_i) and the energy density function f_eρ(s_i). As the network runs, the cluster head selection threshold T(s_i) decreases as the number of dead nodes increases. Once the adaptive cluster head selection threshold has been constructed, T(s_i) is calculated for all sensor nodes in the network to select the cluster heads in each round. The nodes selected as cluster heads are added to the cluster head set C, and the remaining nodes are added to the non-cluster-head set. The symbols used in the construction of the adaptive cluster head selection threshold are summarized in Table 1.

CEOMS Algorithm
From the above description, the proposed CEOMS algorithm combines the factors related to the energy balance of WSNs, including the energy and density of nodes and the motion parameters of the mobile sink, to adaptively adjust the cluster head selection threshold. The flow chart of the CEOMS algorithm is shown in Figure 9.
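The exact expressions of Formulas (19), (20) and (22) are not reproduced here, so the sketch below only illustrates the qualitative behaviour described in the text: a sigmoid-shaped adjustment g(P_dead) that shrinks the threshold as the fraction of dead nodes grows, combined with P_opt, f_eρ(s_i) and f_d(s_i). Both the functional forms and the slope parameter are assumptions for illustration, not the paper's formulas.

```python
import math

def g_dead(p_dead: float, slope: float = 10.0, midpoint: float = 0.5) -> float:
    """Assumed sigmoid-shaped adjustment: close to 1 while few nodes have
    died, decreasing towards 0 as the death percentage p_dead grows."""
    return 1.0 / (1.0 + math.exp(slope * (p_dead - midpoint)))

def adaptive_threshold(p_opt: float, f_erho: float, f_d: float,
                       p_dead: float) -> float:
    """Sketch of an adaptive threshold T(s_i) that is positively correlated
    with P_opt, f_erho(s_i) and f_d(s_i) and decreases as nodes die; the
    multiplicative combination is an assumption, not Formula (22) itself."""
    return p_opt * f_erho * f_d * g_dead(p_dead)
```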
The main steps of the CEOMS algorithm in each round are as follows:

3: Calculate the neighborhood node threshold T_{s_i s_j} according to Formula (7)
4: if s_j ∈ Θ then
5: s_j joins the neighborhood set N of node s_i
6: end if
7: for ∀ s_j ∈ N do
8: Calculate the energy density function f_eρ(s_i) according to Formula (11)
9: end for
10: Calculate the motion performance function f_d(s_i) according to Formula (14)
11: Calculate the initial cluster head selection threshold T(s_i) according to Formulas (15) and (3)
12: Calculate the adaptive adjustment function g(P_dead) according to Formula (18)
13: Construct the adaptive cluster head selection threshold T(s_i) according to Formula (20)
14: Perform cluster head selection and data transfer
15: if r < r_max then
16: r = r + 1, return to Step 3
17: end if
18: end for

Experimental Parameters
Assume that a WSN composed of N sensor nodes is randomly distributed in the monitoring area S_M. The initial position of the mobile sink is located in the monitoring area at coordinates (0 m, 0 m). The moving trajectory of the mobile sink is shown in Figure 10, and the simulation parameters are listed in Table 2.

Simulation Results and Analysis
Several indicators, including the survival time of network nodes, the total remaining energy and the balance of network energy consumption, were used to verify the feasibility and effectiveness of the proposed CEOMS algorithm in comparative experiments against the ILEACH algorithm [23], the DBCH algorithm [24] and the LEACH-DT algorithm [25]. For a fair comparison, the performance of the four algorithms was evaluated under the same conditions using MATLAB R2019a on Windows 10 (64-bit) with an Intel(R) Core(TM) i5-6500H CPU and 8 GB RAM.

Survival Time Analysis of Network Nodes
When the monitoring data of all sensor nodes have been collected, one round is completed and the next round begins. In each round, the different algorithms apply different energy consumption optimization strategies, so the remaining energy of the nodes differs and the number of surviving nodes differs as a result. The variation trends of the surviving nodes for the four algorithms are shown in Figure 11. It can be seen from Figure 11 that there were 100 surviving nodes at the beginning of the simulation. As the network continued to run, the curves of all four algorithms showed a downward trend: as energy consumption increased, the number of surviving nodes decreased and dead nodes gradually appeared. The round in which the first node died was the 99th, 73rd, 104th and 144th round for the ILEACH, DBCH, LEACH-DT and CEOMS algorithms, respectively. Compared with the other three algorithms, the time to the first node death under the CEOMS algorithm was extended by 45.4%, 97.3% and 38.5%, respectively. The round in which all nodes had died was the 1111th, 1225th, 1199th and 1645th round for the ILEACH, DBCH, LEACH-DT and CEOMS algorithms, respectively. Compared with the other three algorithms, the life cycle under the CEOMS algorithm was extended by 48.1%, 34.2% and 37.1%, respectively. The relationship between the number of dead nodes and the number of rounds for the four algorithms is shown in Figure 12. As can be seen from Figure 12, the CEOMS algorithm takes the remaining energy of nodes and the node mortality into account when selecting cluster heads, so it has the longest working time and can effectively extend the network life cycle.
Analysis of Total Remaining Energy of Nodes
As the WSN runs, its total remaining energy decreases gradually. The total remaining energy of all nodes for the four algorithms is plotted against the number of rounds in Figure 13. In this figure, the number of nodes was set to 100 and the initial energy of each node was set to 1 J. The curve corresponding to the CEOMS algorithm lies above the other three curves, indicating that the total residual energy of the WSN under the CEOMS algorithm is greater than under the other three algorithms. When the total remaining energy of the WSN reached 0, the number of rounds completed by the ILEACH, DBCH, LEACH-DT and CEOMS algorithms was 1111, 1225, 1199 and 1645, respectively, indicating that the network running time of the CEOMS algorithm was the longest. Unlike the other three algorithms, the CEOMS algorithm takes the average remaining energy of the neighborhood nodes and the current remaining energy as key factors when selecting cluster heads, so it can effectively save network energy and prolong the network lifetime.

Comparative Analysis of Remaining Energy Distribution of Network Nodes
The remaining energy of a node is clearly related to the position of the mobile sink. For the 300th round, the difference in the remaining energy of each node between each of the four algorithms and classic LEACH is shown in Figure 14. The WSN using the CEOMS algorithm shows a larger surface fluctuation, which means that the CEOMS algorithm saved more node energy. Moreover, the percentage of energy saved by the four algorithms at the 100th, 300th, 500th, 800th and 1000th rounds is shown in Figure 15. The curve corresponding to the CEOMS algorithm is higher than the other three curves, which again demonstrates that the CEOMS algorithm is superior in energy saving.

Analysis of Variance of Nodes' Remaining Energy
In a WSN, a static sink node leads to excessive load and unbalanced energy consumption at the nodes near the sink. The proposed CEOMS algorithm introduces a mobile sink and considers the influence of its motion parameters on cluster head selection through the motion performance function, alleviating this imbalance in energy consumption. The variance of the nodes' remaining energy (Figure 16) is used to describe the balance of the WSN. Figure 16 shows that the initial remaining-energy variance of all four algorithms was 0, which means that the initial energy distribution of the WSN was entirely uniform. As the WSN runs and the mobile sink location changes, the curve corresponding to the CEOMS algorithm remains below the curves of the other three algorithms. Moreover, the variance of the residual energy of the ILEACH, DBCH, LEACH-DT and CEOMS algorithms drops back to 0 at the 1111th, 1225th, 1199th and 1645th round, respectively, which indicates that the CEOMS algorithm not only effectively balances the network load but also extends the network life.

Applicability Analysis of Network Lifetime
To further verify the applicability of the proposed CEOMS algorithm for prolonging the network life cycle, the locations of the network nodes were randomly regenerated for several experiments. Figure 17 shows the network life cycle of the ILEACH, DBCH, LEACH-DT and CEOMS algorithms when the node positions were randomly generated.
Several experiments were performed by randomly generating node positions. From the results of the five experiments shown in Figure 17, the rounds at which all nodes had died (the network life cycles) under the CEOMS algorithm were 1642, 1671, 1743, 1674 and 1697. Compared with the network life cycles of the ILEACH, DBCH and LEACH-DT algorithms, the CEOMS algorithm had the longest network life cycle, indicating that it can effectively extend the network life cycle and is broadly applicable.

Conclusions
Several factors, including the remaining energy and node density within the neighborhood radius of the sensor nodes, the location and velocity of the mobile sink, and the number of dead nodes, affect the energy balance of WSNs. This paper proposed a novel cluster-based energy optimization algorithm, CEOMS, which selects cluster heads by comprehensively considering the impact of these factors on the energy balance of WSNs. The proposed algorithm first introduces an energy density function based on the residual energy rate and the node density within the neighborhood radius to reduce the randomness of cluster head selection. Second, a motion performance function is designed based on the motion parameters of the mobile sink, which effectively balances the network load and reduces the data delay. Finally, an adaptive adjustment function related to node mortality is proposed to adjust the factors of the cluster head selection threshold, which prolongs the network life. The energy density function, motion performance function and adaptive adjustment function work together to improve the self-adaptability of cluster head selection, balance the network load, reduce data delay and prolong the network life cycle. However, the proposed algorithm uses only a single mobile sink, which can lead to partial data loss and delay when the monitoring area is large. Therefore, future work should consider how to apply multiple mobile sinks to collect data in WSNs [32].
Direct PCR offers a fast and reliable alternative to conventional DNA isolation methods for animal gut microbiomes

The gut microbiome of animals is emerging as an important factor influencing ecological and evolutionary processes. A major bottleneck in obtaining microbiome data from large numbers of samples is the time-consuming laboratory procedures, specifically the isolation of DNA and generation of amplicon libraries. Recently, direct PCR kits have been developed that circumvent conventional DNA extraction steps, thereby streamlining the laboratory process by reducing preparation time and costs. However, the reliability and efficacy of the direct PCR method for measuring host microbiomes has not yet been investigated other than in humans with 454-sequencing. Here, we conduct a comprehensive evaluation of the microbial communities obtained with direct PCR and the widely used MoBio PowerSoil DNA extraction kit in five distinct gut sample types (ileum – caecum – colon – faeces – cloaca) from 20 juvenile ostriches, using 16S rRNA Illumina MiSeq sequencing. We found that direct PCR was highly comparable over a range of measures to the DNA extraction method in caecal, colon, and faecal samples. However, the two methods recovered significantly different microbiomes in cloacal, and especially ileal samples. We also sequenced 100 replicate sample pairs to evaluate repeatability during both extraction and PCR stages, and found that both methods were highly consistent for caecal, colon, and faecal samples (rs > 0.7), but had low repeatability for cloacal (rs = 0.39) and ileal (rs = −0.24) samples. This study indicates that direct PCR provides a fast, cheap, and reliable alternative to conventional DNA extraction methods for retrieving 16S data, which will aid future gut microbiome studies of animals.

microbiomes relies on large sample sizes, and so it is important to find fast, cost effective, and reliable ways of processing microbiome samples. The conventional way of generating amplicon libraries for microbiome studies is to first extract and purify DNA, for example, using kits such as the MoBio PowerSoil DNA isolation kit. This procedure is recommended by the Earth Microbiome Project (Caporaso et al. 2012; Gilbert et al. 2014), and is widely used in human and non-human animal microbiota studies. The DNA extraction protocol involves mechanical and chemical lysis of cells, and a DNA purification procedure, which adds up to 32 separate steps (Table 1).

control and 2 blank samples, 40 extraction replicates, and 10 PCR replicates were all prepared both with the DNA extraction and the direct PCR method (Table S1). An additional 11 control and 2 blank samples from a subsequent run were also evaluated to increase the number of controls (Table S1). Sequencing of blank (negative) samples resulted in extremely few sequence reads: three blank samples had < 320 reads and the other three had < 3,000 reads, compared to an average of 10,689 reads for the other sample types (Figure S1). Control swabs showed highly dissimilar microbial composition to all other samples (see Videvall et al. 2017) and therefore we did not include these in any further analyses. To evaluate bacterial abundances, we first filtered out all OTUs with less than 10 sequence reads and then, using DESeq2 (v. 1.14.1), counts were modelled with a local dispersion model and normalised per sample using the geometric mean (Love et al. 2014).
and Hochberg false discovery rate for multiple testing (Benjamini & Hochberg 1995). OTUs were labelled significant if they had a corrected p-value (q-value) < 0.01. We examined the repeatability of the two methods by evaluating the strength of the correlation in normalised OTU abundance between paired sample replicates. This was done separately for the two methods, and for the two replicate sets (extraction replicates and PCR replicates). Correlation coefficients were calculated using Spearman's rank correlations on all OTUs with non-zero abundances.

Practical aspects of direct PCR and DNA extraction
The total time spent extracting DNA using the direct PCR method was considerably shorter (45 minutes) compared to the conventional DNA extraction method (8 hours) (Table 1). The cost of using the direct PCR method was also lower, as was the number of steps in each protocol (Table 1). Nevertheless, the number of sequence reads obtained per sample (mean = 10,689) did not differ between the DNA extraction method and the direct PCR method (two-sample t-test: t = 1.25, df = 290.7, p = 0.21) (Figure S1), as expected since equimolar PCR products from the samples were combined before sequencing.

Description of the microbiomes obtained with direct PCR and DNA extraction
Samples clustered strongly according to sample type both in the Principal Coordinates Analysis (PCoA) (Figure 1A) and the network analysis (Figure 1B), although some minor separation between the direct PCR and DNA extraction methods was evident. The two library preparation methods yielded fairly consistent patterns for the total number of OTUs per bacterial class and sample type, but also reflected some notable differences in taxa composition (Figure 1C). Specifically, Bacilli were slightly more abundant in all sample types with the DNA extraction method relative to the direct PCR method, and the ileum in particular showed the largest class differences, with a higher abundance of Mollicutes, Gammaproteobacteria, and Bacteroidia with the direct PCR method (Figure 1C). To further investigate if the microbiota of samples from the direct PCR and DNA extraction methods differed depending on gut site, we performed separate PCoAs for each of the five sample types. The caecum, colon, and faeces showed very high correspondence in beta diversity for identical samples prepared using direct PCR and DNA extraction, as they clustered by individual and not method, whereas differences were much greater for cloacal and ileal samples (Figure 2).

Differences in microbiomes obtained with direct PCR and DNA extraction
We next evaluated differences in alpha diversity (OTU richness) between direct PCR and DNA extraction methods for the different sample types (Figure 3A). There were no differences in alpha diversity between the two methods in the colon, faeces, and cloaca (paired Wilcoxon signed rank test: npairs = 20 per type, p > 0.4) (Figure 3A). However, alpha diversity was significantly higher in ileal direct PCR samples and significantly lower in direct PCR caecal samples (npairs = 20, V = 199, p = 0.0001) (Figure 3A). Correlation analyses of alpha diversity between the two methods also showed higher diversity for ileal direct PCR samples and slightly lower diversity for caecal direct PCR samples, but the strength of the correlations between methods was generally high for all sample types (r = 0.56-0.91) (Figure S2).
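The paired alpha-diversity comparisons described above could be computed along the following lines; this is a sketch with hypothetical richness values rather than the study's data, and scipy's paired signed-rank test is used as the analogue of R's wilcox.test.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical paired OTU-richness values for 20 samples prepared with
# each method; the real values come from the study's OTU tables.
richness_dna_extraction = rng.poisson(200, 20)
richness_direct_pcr = rng.poisson(205, 20)

stat, p_value = wilcoxon(richness_dna_extraction, richness_direct_pcr)
print(f"paired Wilcoxon signed-rank test: statistic = {stat}, p = {p_value:.4f}")
```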
Dissimilarities in microbiome composition between samples, as calculated by the Bray-Curtis distance measure, showed significant effects of method, sample type, individual, and the interaction between method and sample type (PERMANOVA: all effects: p < 0.001). The overall variance explained by method and method*sample type was, however, extremely small (R2 = 0.014 and R2 = 0.019, respectively), whereas the variance explained by host individual (R2 = 0.283) and sample type (R2 = 0.201) was substantially larger. Examining the Bray-Curtis distances between the two methods within sample types revealed that while the caecum, colon, and faeces all showed relatively low distances (mean: 0.23, 0.25, and 0.29, respectively), the cloaca (mean = 0.39) and most notably, the ileum (mean = 0.56), displayed much greater distances and much higher variances (Figure 3B). Specifically, the distances between identical samples prepared with each method from the ileum were significantly higher than the corresponding distances in the caecum, colon, and faeces.

Analyses of differences in the abundance of specific OTUs between the two methods resulted in very few significantly different OTUs in the caecum (n = 9), colon (n = 13), and faeces (n = 24) (Figure 4). However, there were many more in the cloaca (n = 67), and the ileum demonstrated a staggering 324 significant OTUs between the DNA extraction and the direct PCR method (Figure 4). Notably, the vast majority of significant OTUs across all sample types had higher relative abundances with the direct PCR method than with the DNA extraction. Comparing the exact OTUs that had significantly different abundances in the five sample types showed that they were unique to each sample type (i.e., OTUs were only significantly different within one type) (Table S2). We found one genus, however, Mycoplasma, with significant OTUs present in all sample types. All significant Mycoplasma OTUs (family: Mycoplasmataceae) had higher relative abundances in the samples from the direct PCR compared to the DNA extraction method (Figure 5). Other genera with significantly different abundances in multiple sample types were, e.g., Anaerofustis (higher abundance with the direct PCR method in colon, faecal, and cloacal samples) and Klebsiella (more numerous with the direct PCR method in ileum, caecum, and colon). The genus Prevotella (class: Bacteroidia) was the most prevalent in the list of significant genera, representing 43 unique OTUs in the cloaca and ileum, all of which (100%) had higher abundance in the direct PCR samples (Table S2). The phylum with the highest number of significantly differentially abundant OTUs was the Firmicutes (n = 210 in total), and in particular the class Clostridia (n = 171 in total) (Figure 5; Table S2), which was in the majority in most sample types (Figure 1C). The genera comprising the most significantly differentially abundant OTUs between the two methods were Oscillospira (class: Clostridia), Mycoplasma (class: Mollicutes), and Coprococcus (class: Clostridia) (Table S2).

Repeatability of replicate samples with direct PCR and DNA extraction
Next, we evaluated the repeatability of the DNA extraction and direct PCR methods by calculating correlations of OTU abundances and diversity between pairs of replicate samples.
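A sketch of how such replicate-pair correlations could be computed is shown below; restricting the calculation to OTUs that are non-zero in at least one replicate is our reading of "all OTUs with non-zero abundances", so that filter should be treated as an assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def replicate_repeatability(otu_a: np.ndarray, otu_b: np.ndarray) -> float:
    """Spearman's rank correlation between the normalised OTU abundances of
    a pair of replicate samples (vectors indexed by the same OTUs)."""
    keep = (otu_a > 0) | (otu_b > 0)   # drop OTUs absent from both replicates
    rs, _p = spearmanr(otu_a[keep], otu_b[keep])
    return rs
```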
For the "extraction replicates", the correlation coefficient of OTU abundance was almost identical for the DNA extraction method (rs = 0.73; Figure 6A) and the direct PCR method (rs = 0.70; Figure 6B). For the "PCR replicates", the strength of the correlation was slightly higher but again similar for the two methods (DNA extraction: rs = 0.82, Figure 6C; and direct PCR: rs = 0.80, Figure 6D). When we partitioned the OTU abundance data according to sample type, large differences in repeatability were observed (Figure 6E). The caecal, colon, and faecal samples had the strongest correlations between replicates, with both methods having an average rs = 0.70-0.74 for the extraction replicates and an average rs = 0.76-0.83 for the PCR replicates (Figure 6E). In contrast, the extraction replicates from the cloaca were characterised by a much weaker correlation (rs = 0.36), as were the cloacal PCR replicates (rs = 0.49), and the correlations between ileal replicates were even negative (extraction replicates: rs = -0.27, PCR replicates: rs = -0.05) (Figure 6E). Finally, we examined the correlation between alpha diversity estimates in the replicate samples to evaluate the repeatability of the community between methods. Relative to the OTU abundance data, there was higher repeatability in alpha diversity for cloacal and ileal samples (r = 0.79-0.96), while the caecal, colon, and faecal samples again had high repeatability for both methods (r = 0.70-0.97) (Figures S4-S5).

Discussion
This study shows that direct PCR provides highly comparable results to the widely used and recommended DNA extraction method in analyses of gut microbiomes of animals. Both techniques give qualitatively and quantitatively similar estimates of microbial diversity and abundance for caecal, colon, and faecal samples, and were highly repeatable for these sample types. However, the two methods present dissimilar microbiomes for cloacal, and in particular, ileal samples, recovering large differences and poor repeatability in OTU abundances across replicates. We discuss hypotheses that may explain why these methods perform well with some sample types, but not others.

Figures S4-S5) and between the methods (Figure S2). This suggests that although it is difficult to measure relative abundances of specific bacterial taxa when DNA concentrations are low, it may still be possible to gain an accurate measure of the community composition using both of these methods. The direct PCR samples from the ileum had significantly higher alpha diversity (Figure 3A) and higher relative abundances in the majority of differentially abundant OTUs (Figure 4), compared to the DNA extraction method. One potential reason for this difference is that more DNA is lost during the DNA extraction procedure, which is column-based with several wash and transfer steps. This is known to be associated with high DNA loss, whereas with direct PCR, the individual samples are contained in just one plate well during the full extraction procedure. In samples with low starting DNA concentrations, direct PCR may therefore be superior to conventional extraction methods at recovering rare bacterial taxa.
The higher diversity in the ileal direct PCR samples could also be a consequence of the regular Taq polymerase included in the direct PCR reaction mix, which is slightly more error-prone than OTUs, raise the alpha diversity and the number of differentially abundant OTUs. However, Taq polymerase does not explain the consistent changes observed in the PCoA (Figure 2), the correlation in diversity between the methods (Figure S2), or the changes in relative abundances of specific taxonomic groups (Figure 1C and Figure 5). Despite high correspondence in abundance estimates across most OTUs, there were some consistent differences in the abundance of specific taxa between direct PCR and the DNA extraction method. It is possible that morphological differences between bacterial groups may influence the efficiency with which the two methods recover DNA. For example, compared to the DNA extraction method, we found that direct PCR had higher relative abundances of

Figure legend (fragment): positive log2 fold changes indicate higher relative OTU abundance in the DNA extraction method, and negative log2 fold changes signify higher abundance in the direct PCR method.
Towards a Better Detection of Horizontally Transferred Genes by Combining Unusual Properties Effectively

Background
Horizontal gene transfer (HGT) is one of the major mechanisms contributing to microbial genome diversification. A number of computational methods for finding horizontally transferred genes have been proposed in the past decades; however, none of them has yet provided a reliable detector. In existing parametric approaches, only a single compositional property participates in the detection process, or the results obtained from each single property are simply combined. Different properties carry different information, so a single property cannot sufficiently capture the information encoded by gene sequences. In addition, the class imbalance problem in the datasets, which also causes large errors in gene detection, has not been considered by the published methods. Here we developed an effective classifier system (Hgtident) that uses a support vector machine (SVM) and combines unusual properties effectively for HGT detection.

Results
Our approach Hgtident includes the introduction of more representative datasets, optimization of the SVM model, feature selection, handling of the imbalance problem in the datasets and extensive performance evaluation via systematic cross-validation. Through feature selection, we found that JS-DN and JS-CB have higher discriminating power for HGT detection, while GC1–GC3 and k-mer (k = 1, 2, …, 7) features make the least contribution. Extensive experiments indicated that the new classifier reduces Mean error dramatically and also improves Recall to a certain extent. For the testing genomes, compared with the existing popular multiple-threshold approach, our Recall was improved by 2.81% and our Mean error was reduced by 26.32% on average, which means that numerous false positives were identified correctly.

Conclusions
Hgtident introduced here is an effective approach for better HGT detection. Combining multiple features of HGT is also essential for detecting a wider range of HGT events.

Introduction
Horizontal gene transfer (HGT, also called lateral gene transfer) is a transfer of genetic material from one lineage to another and has played a key role in species evolution and microbial genome diversification [1,2]. Transfers can occur both between closely and distantly related species or strains and are thought to be frequent events [3]. In addition, horizontal gene transfer has been proposed to result in the emergence of novel human diseases and poses several risks to humans [4,5]. As sequence data have accumulated, evidence for rampant HGT has increased dramatically. Thus, detecting HGT has enormous practical significance for providing a better understanding of the impact of HGT on genome evolution and for identifying new drug targets. At present, there are two primary strategies for detecting genes that have been transferred horizontally: phylogenetic approaches and parametric approaches [6,7]. Phylogenetic approaches are typically based on the comparative study of numerous genomes to find genes with unusual taxonomic distributions. However, many other phenomena, such as biased mutation rates, gene loss and long-branch attraction, can also cause the phylogenetic tree for a gene to differ from that of the species; thus, phylogenetic approaches are time-consuming and insufficiently robust [8,9].
In contrast, parametric approaches (also called composition-based approaches) rest on the common premise that the unusual characteristics of horizontally transferred genes distinguish them from the other genes in a genome. This kind of approach is computationally less demanding and can be carried out on each single genome. So far, various parametric approaches have been proposed, but they share a common drawback: only a single compositional property is used to identify transferred genes in each prediction experiment. Different properties carry different information, and this limitation results in large errors in HGT detection. Combined methods were also proposed by Becq et al. [10] and Azad et al. [11] to resolve this problem, but these methods only merge the predictions obtained from each single property, so in essence only one property is used at a time. Therefore, how to sufficiently extract the information encoded by genes remains an open and challenging issue. In addition, machine learning has also been widely applied to HGT detection [12,13], but the class imbalance problem, which can result in poor classification performance with respect to the minority class [14], has not been considered in these studies. In light of all these caveats, in this study we developed a new strategy (Hgtident) that uses a support vector machine (SVM) to detect horizontally transferred genes by combining the unusual properties effectively, while also taking the class imbalance problem into account. The information from the combined properties can adequately represent the whole gene sequence. To our knowledge, this is the first use of such an integrated strategy to identify horizontally transferred genes. As a result, Hgtident achieves better performance than existing methods.

Datasets
In previously published studies, various artificial datasets were put forward [3,5,6,7,8,10,11,12,13]; this kind of simulated dataset is composed of donor genes and a recipient genome, and the task is to recover as many of the donor genes as possible. However, genes transferred between different species during the evolutionary history of the recipient genome itself are not considered in such datasets. So, in this article, in order to validate the performance of Hgtident on genuine genomes, we chose six common genomes published in the more reliable HGT-DB database (http://genomes.urv.cat/HGT-DB/) [15], namely E. coli K12, E. coli O157 Sakai, S. enterica Typhi CT18, S. enterica Paratyphi ATCC 9150, C. pneumoniae CWL029 and S. agalactiae 2603. The horizontally transferred genes and the other genes in each genome were regarded as positives and negatives, respectively. The results predicted by Hgtident are compared with those of the existing popular multiple-threshold approach proposed by Azad et al. [11].

Selection of SVM Model
SVM is a supervised machine learning paradigm derived from the structural risk minimization principle of statistical learning theory for solving linear and non-linear classification and regression problems [21]. We chose SVM as our classification paradigm because of its high generalization capability, its ability to find global classification solutions [21], and its successful application in bioinformatics and other practical domains.
Model selection for the SVM involves the selection of a kernel function and its parameters that yield the optimal classification performance for a given dataset [21]. Among the available kernel functions, we chose the most popular and widely used Radial Basis Function (RBF) kernel because of its higher reliability in finding optimal classification solutions in most practical situations [22]. The performance of the classifier at each parameter point (c, g) is evaluated by 5-fold cross-validation on the training dataset. After finding the best parameters, a new SVM model is trained on the complete training dataset with those parameters, and a separate testing dataset is used to measure the performance of the developed classifier. The C++ interface of the libsvm3.1 package [23] was used to develop the SVM model. Before training the SVM classifier systems, the complete dataset was scaled into the (−1, +1) interval.

Feature Selection
Selecting the most discriminative set of features increases the performance, efficiency and comprehensibility of a classifier system by reducing its complexity. In particular, by analyzing the optimal feature subsets for these genomes, we can clearly see which features make the most important contributions to HGT detection. Here, a genetic algorithm (GA) was chosen as our feature selection paradigm because of its strong random search ability to find a convincingly optimal feature subset. The GA evaluates a whole feature subset rather than a single feature, which guarantees combinatorial optimization of the feature subset [24]. First, some feature subsets are generated randomly; new feature subsets are then obtained through selection, crossover and mutation. After many iterations, the result converges to the optimal solution, which corresponds to the optimal feature subset. 5-fold cross-validation was used to test the generalization ability of the feature subsets, and the subset that obtained the best classification performance was considered the optimal feature subset. The feature selection procedure was carried out on the initial imbalanced dataset. A binary string was chosen to encode the feature data of each individual in the population: 1 means the corresponding feature is selected, and 0 means it is not. The fitness function was f(x) = 10000*Recall, because we needed to evaluate the Mean error under the condition where the highest Recall was obtained. Each generation contained 100 individuals, and the crossover probability, mutation probability and iteration number were set to 0.9, 0.1 and 200, respectively. We employed the classical proportional selection operator, and the two best individuals in every generation were passed directly to the next generation.

Class Imbalance Problem
As is well known, horizontally transferred genes are far fewer than the other genes in each genome, which inevitably results in a class imbalance problem. It is well established in machine learning research that training a classifier with imbalanced positive and negative classes results in poor classification performance with respect to the minority class [14,25] — in this case, the class of horizontally transferred genes. According to previous work, the Synthetic Minority Over-sampling Technique (SMOTE), which is independent of the learning algorithm and involves pre-processing of the training data, has been successfully applied to this kind of problem [26].
It is an oversampling technique that introduces new synthetic examples in the neighborhood of the existing minority examples. Therefore, SMOTE was chosen to resolve the class imbalance problem in this study.

Evaluation Criteria
In this research, we used the detection rate (Recall) as our primary evaluation criterion, consistent with the paper published by Azad et al. [11]. In addition, Mean error was used as a further criterion to evaluate the performance of Hgtident fully. They are defined as follows,

Feature Selection Results
As the first experiment, we trained an SVM model with the complete imbalanced dataset to observe the classification performance. The complete dataset was randomly divided into five equally sized partitions, each containing the same ratio of positives and negatives. Four partitions were used together as the training dataset to develop an SVM classifier, and the resulting model was tested on the fifth partition. This procedure was repeated five times with different combinations of training and testing datasets, and the results were averaged. Table 1 shows the classification results obtained with all features and with the optimal feature subset. For each genome, Recall was improved and Mean error was reduced effectively by using the optimal feature subset. On average, Recall was improved by 6.50% and Mean error was reduced by 4.67%, showing that the optimal feature subset has a significant influence on the classification results. The resulting optimal feature subsets, with fewer features, not only gave better classification results but also greatly reduced computational complexity. At the same time, the optimal feature subset for each genome was analyzed and summarized in Table 2. We found that JS-DN and JS-CB appeared in five out of six feature subsets, which indicates that these two features have higher discriminating power for HGT detection than the others. Next came Karlin's codon bias, χ² dinucleotide bias and χ² codon bias. In addition, GC1-GC3 and k-mer (k = 1, 2, …, 7) features hardly ever appeared in these optimal feature subsets, which indicates that they make the least contribution to HGT detection. These deductions can also be drawn from the results obtained with the multiple-threshold approach in the section "Comparison of Multiple-threshold Approach with Hgtident".

Class Imbalance Learning Results
The imbalance learning experiments were carried out to observe the classification results with 5-fold cross-validation. First, an SVM model was trained by applying SMOTE to a training dataset containing four-fifths of the complete dataset. Its performance was then tested on the remaining imbalanced one-fifth of the dataset. This procedure was repeated five times with different combinations of training and testing datasets, and the results were averaged. Table 3 presents the classification results obtained through class imbalance learning with the optimal feature subsets. Compared with the preliminary classification results obtained on the imbalanced datasets, the application of SMOTE improved Recall and reduced Mean error effectively. On average, Recall was improved by 6.53% and Mean error was reduced by 6.02%, which provided good evidence for applying SMOTE to this problem to develop a better-performing classifier with respect to the imbalanced positive and negative classes.
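A sketch of the training pipeline described above is given below, re-expressed with scikit-learn and imbalanced-learn rather than the paper's libsvm C++ interface; the input file names, the grid ranges for (C, γ) and the use of Recall as the tuning score are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# Hypothetical input files: a feature matrix and binary labels
# (1 = horizontally transferred gene, 0 = other gene).
X, y = np.load("features.npy"), np.load("labels.npy")

pipeline = Pipeline([
    ("scale", MinMaxScaler(feature_range=(-1, 1))),  # scale features to (-1, +1)
    ("smote", SMOTE(random_state=0)),                # oversample the minority class
    ("svm", SVC(kernel="rbf")),                      # RBF-kernel SVM
])

param_grid = {
    "svm__C": [2.0 ** i for i in range(-5, 11, 2)],       # assumed grid for c
    "svm__gamma": [2.0 ** i for i in range(-11, 4, 2)],    # assumed grid for g
}

search = GridSearchCV(pipeline, param_grid,
                      cv=StratifiedKFold(n_splits=5), scoring="recall")
search.fit(X, y)
print("best (C, gamma):", search.best_params_,
      "cross-validated recall:", search.best_score_)
```

Using imbalanced-learn's Pipeline ensures that SMOTE is applied only to the training folds within each cross-validation split, mirroring the procedure of oversampling the training set while testing on the untouched, imbalanced fold.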
Comparison of Multiple-threshold Approach with Hgtident At present, the multiple-threshold approach proposed by Azad et al. [11] is widely used because it obtains comparatively good results. Thus, a comparison between these two approaches was carried out to evaluate the performance of Hgtident (Table 4). It is not difficult to see that, in the multiple-threshold approach, each Recall was obtained at the cost of a higher Mean error, which means that a large number of false positives were produced. The likely reason is that only a single property can be applied in this approach, and no single property can sufficiently express the comprehensive information encoded by genes; this information should be expressed by different properties that capture it from different perspectives. Therefore, the seven comprehensive and representative features were applied together in this research. From Table 4, we can clearly observe that Hgtident effectively reduced the Mean error, which also supports our viewpoint. In addition, we chose the highest Recall and the corresponding Mean error obtained through the multiple-threshold approach for each genome to compare with our results (Fig. 1). For E. coli K12, our Recall was reduced by 0.76%; but for E. coli O157 Sakai, S. enterica Typhi CT18, S. enterica Paratypi ATCC 9150, C. pneumoniae CWL029, and S. agalactiae 2603, our Recall was improved by 0.73%, 5.95%, 0.82%, 3.80%, and 6.33%, respectively; overall, the mean improvement was 2.81%. In addition, for each genome, our Mean error was reduced dramatically, and the overall mean reduction was 26.32%. Tsirigos et al. [12] and Chen et al. [13] also used SVM to study the prediction of horizontally transferred genes, but they used simulated datasets and, most importantly, only a single property was used in their studies, which indicates that insufficient information encoded by the genes was extracted. In addition, surprisingly, neither of them considered a proper class imbalance learning method for classifier development. Thus, their results were even inferior to those obtained through the multiple-threshold approach. Therefore, we can state that the results reported in our research are more reliable and better than those published for other existing approaches. Conclusions In this research, an integrated strategy, which more comprehensively describes the biological information encoded by genes, was proposed to identify horizontally transferred genes. Meanwhile, SMOTE was also employed to address the class imbalance problem. Extensive experiments indicated that the extraction of sufficient information can reduce the Mean error dramatically and also improve the Recall to a certain degree. However, change in gene inventory is a historical process, and how to thoroughly extract the useful information encoded by genes remains a challenging and open issue. Further study is needed to decrease the false positives and false negatives.
3,507
2012-08-14T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
Direct Surface Modification of Polycaprolactone-Based Shape Memory Materials to Introduce Positive Charge Aiming to Enhance Cell Affinity In this study, the introduction of a positive charge on the surface of a shape memory material was investigated to enhance cell affinity. To achieve this, the direct chemical modification of the material surface was proposed. Sheet-type, crosslinked poly(caprolactone-co-α-bromo-γ-butyrolactone) (poly(CL-co-BrBL)) samples were prepared, and the direct reaction of amino compounds with the bromo groups was conducted on the material surface to impart a positive charge. Branched poly(CL-co-BrBL) was prepared, followed by the introduction of methacryloyl groups at each chain end. Using the branched macromonomers, stable, sheet-type materials were derived through UV-light irradiation. Then, the materials were soaked in an amino compound solution to react with the bromo groups under various conditions. Differential scanning calorimetry and surface analysis of the modified materials indicated that 10 vol% of N,N-dimethylethylenediamine in n-hexane and a 1 h soaking time were optimal for maintaining the inherent thermal properties. The increased luminescence and positive zeta potential proved that the direct modification method effectively introduced the positive charge only on the surface, thereby enhancing cell affinity. Introduction Recently, mechanobiology has gained considerable attention in the biomaterial research field [1][2][3][4][5][6] because it allows clarification of the relationship between cell functions and mechanical properties. Moreover, investigation of the surface topography of materials contributes significantly to the field of regenerative medicine. It has been reported that extrinsic factors such as the growth environment influence stem cell functions such as differentiation and proliferation in vivo [7]. Previous research has shown that three main factors of scaffold surfaces affect cell functions, namely, chemical, mechanical, and topographical factors [8]. Micro- or nanofabrication technology can effectively realize such surface manufacturing. In particular, the effect of the topographical features of nanoscale and microscale materials on cell functions such as adhesion, migration, proliferation, and differentiation can be elucidated [9]. Herein, we focused on polycaprolactone (PCL) as a functional material. PCL is useful as a biodegradable and shape memory material. Artificial dura mater has been successfully prepared using a PCL membrane [10,11], and many shape memory materials containing PCL have been reported [12][13][14][15][16]. PCL polymerization is conveniently initiated by alcoholic compounds and gives hydroxy-terminated chains. Therefore, molecular design of the end groups is possible via initiation with functional compounds or chemical modification of the hydroxy end groups. Previous studies have proven that the molecular design of polycaprolactone facilitates the development of thermo-responsive and shape memory materials without requiring additional processes, such as urethane bonding [17][18][19][20]. PCL is a well-known semicrystalline polymer with a melting point of ~60 °C, and its crosslinked materials show favorable thermo-responsiveness that operates around the melting point.
Furthermore, through copolymerization with other monomers such as lactide and the introduction of a branched structure into the materials, the crystallinity of the materials can be effectively modified to tune their operating temperature [21]. The operating temperature must be adjusted to approximately body temperature for biomedical applications, and the optimal condition was successfully determined [22]. In addition, the surface toughness or patterned topography of film-type PCL can be modified by changing the temperature. Meanwhile, the hydrophilicity and hydrophobicity of the PCL surface do not change before and after the softening point; that is, a PCL film can be used as a scaffold to understand the influence of PCL on cell functions without considering temperature-induced changes in hydrophilicity and hydrophobicity [19]. Ebara et al. prepared a shape memory film based on PCL with a patterned surface and used it as a scaffold [20]; they observed that cells seeded on the patterned scaffold lined up along the pattern direction. After heating, the scaffold recovered a flat shape, and the cells migrated randomly. These results suggested that cells can recognize surface morphology and respond to changes in topography. However, PCL has a nonadhesive surface; therefore, it requires a cell-adhesive protein coating such as gelatin or fibronectin. As a result, cells can only interact with the surface indirectly via the adhered proteins. In the biomedical field, suitable copolymerization is generally carried out to improve the affinity of a polymer system for genes or cells [23][24][25][26][27]. Bu et al. reported the surface modification of aliphatic polyesters such as PLA and PCL [28]. As a polymer design that improves the affinity of cells for the material surface, the introduction of a positive charge on the PCL surface allows the enhancement of cell affinity. This can be attributed to the negative charge of the cell surface due to the presence of sialic acids at the sugar chain ends. To achieve this, crosslinked PCL films with a positive charge on their surfaces have been designed and prepared [29]. The enhancement of cell adhesion without any protein precoating was confirmed; however, controlling the positive charge density of the PCL films was not easy. To simplify this, α-bromo-γ-butyrolactone (BrBL), which has a reactive Br group, can be used to introduce a positive charge. BrBL copolymerizes with ε-caprolactone to afford the copolymer poly(CL-co-BrBL) with Br groups along the polymer main chains [30]. Thus, a positive charge can be introduced by a simple reaction between amino compounds and the Br groups. In this study, we intended to introduce a positive charge only to the surface of the film by soaking the film in an amine solution. Such direct surface modification is advantageous because the surface can be functionalized without influencing the bulk properties. Moreover, the film is expected to maintain its positive charge density before and after expansional deformation. To achieve the research objectives, the actual studies consist of (a) copolymer synthesis and its characterization, (b) film preparation and its shape memory property, (c) investigation of the optimal conditions for the introduction of amine groups and confirmation of the maintenance of bulk properties, (d) characterization of the positively-charged surface, and (e) a preliminary test of cell affinity.
Materials ε-Caprolactone and the initiators 1,4-butanediol and pentaerythritol were obtained from the Tokyo Chemical Industry (TCI), Tokyo, Japan. α-Bromo-γ-butyrolactone (BrBL) was commercially available from Wako Pure Chemicals, Tokyo, Japan. Tin dioctanoate as a polymerization catalyst, methacryloyl chloride, N,N-dimethylethylenediamine (DMEDA), and 2-hydroxy-4-(2-hydroxyethoxy)-2-methylpropiophenone were purchased from TCI (Tokyo, Japan) and used as received. Solvents such as tetrahydrofuran, chloroform, ethyl acetate, and n-hexane, which were used for the reactions or polymer purification, were reagent grade and used as received. The anionic dye 2′,4′,5′,7′-tetrabromofluorescein disodium salt (Acid Red 87) was acquired from TCI and used to characterize the surfaces after reaction with the amino compound and introduction of the positive charge. HeLa cells (JCRB9004) were purchased from the JCRB cell bank of the National Institutes of Biomedical Innovation, Health, and Nutrition, Osaka, Japan. Characterization To calculate the monomer content in the copolymer, 1H-NMR spectra were recorded using a JEOL RESONANCE spectrometer (JNM-ECP500; Tokyo, Japan) operated at 400 MHz. Deuterated chloroform (CDCl3) was used as the solvent, and the chemical shifts of the peaks were recorded with respect to tetramethylsilane (TMS). The thermal properties of the prepared materials were investigated through differential scanning calorimetry (DSC, DSC6100, Seiko Instruments Inc., Tokyo, Japan). The heating rate was 10 °C/min, and the data from the second scan were adopted. The photoluminescence of the adsorbed fluorescent probes was evaluated with a fluorescence spectrophotometer (Scope.A1, ZEISS Research Microscopy Solutions, GmbH, Jena, Germany) to estimate the positive charge. Moreover, the surface characteristics were studied on the basis of ATR-IR spectra recorded with the Spectrum One system (PerkinElmer, Billerica, MA, USA). Zeta potential was measured using a zeta potential analyzer (ELS-2000ZS, Otsuka Electronics, Osaka, Japan). Preparation of Shape Memory Film with Two-Branched and Four-Branched PolyCL-co-BrBL-MA (2bPolyCL-co-BrBL-MA and 4bPolyCL-co-BrBL-MA, Respectively) One gram of the two-branched or four-branched PolyCL-co-BrBL-MA macromonomer was dissolved in 1.5 mL of THF containing a photosensitizer. This solution was poured into a 5.0 × 5.0 × 0.1 cm mold and quickly placed between a PET sheet and a glass plate.
A light-triggered crosslinking reaction occurred upon irradiation of each side of the plate for 15 min using a high-pressure mercury lamp (UVL-100HA (100 W), Rikokagaku Co., Tokyo, Japan). The lamp has line spectra at 254, 313, 365 (main), 405, and 436 nm and was used without any filter. The obtained PolyCL-co-BrBL film was swollen in acetone for 1 d and then dried in a vacuum oven. Eventually, light-yellow-colored sheets were obtained. Direct Surface Modification of PolyCL-co-BrBL Film Using Amino Compound Solution A representative method for modifying the material surface with the amine is shown in Figure 2. The PolyCL-co-BrBL film was cut to dimensions of 1.0 × 3.0 × 0.1 cm and soaked in a DMEDA solution in n-hexane. The apparatus is shown in Figure 2. The sample piece was placed on a perforated PET film fixed in the solution so that it would not collide with the stirring bar. The solution was stirred under a dry argon atmosphere at 25 °C. Then, the solution was replaced with 50 mL of pure n-hexane and stirred for 10 min to wash out the remaining amine. This operation was repeated twice to remove the amine thoroughly. The obtained cationic film was dried in a vacuum oven. Eventually, white-colored films were obtained. In this study, we investigated the effects of the amino compound concentration and the reaction time on the film surface properties. Contact Angle Measurement The contact angles were measured to estimate the introduction of amino groups onto the surface; such groups were expected to make the surface more hydrophilic. A 2 µL water droplet was placed on the sample, and the contact angles were measured using a goniometer (No. 20424, Erma Co., Tokyo, Japan) and calculated by the half-angle method. The measurement was repeated three times, and the value was averaged over six points on one sample. Preparation of Stretched Modified PolyCL-co-BrBL Film The shape memory of the modified films was investigated using a temperature-controlled oven. At 50 °C, specimens of the prepared films with a size of 1.5 cm × 5 cm × 0.5 mm were fixed at one end using a clip in the oven and left for 30 min. Then, a weight of 250 g was hung on the other end of the specimens; this setup was maintained until the films cooled down to room temperature. Cell Adhesion onto Modified PolyCL-co-BrBL Film Before starting the cell culture, the modified PolyCL-co-BrBL film was sterilized using a 70% ethanol aqueous solution. The specimen was stuck on a glass plate, and a small silicone chamber was placed on the plate. Three identical samples were used in the experiment. HeLa cells were seeded at a density of 7.0 × 10⁴ cells/cm² on the sterilized material surface and cultured in a medium at 37 °C for 16.5 h. The HeLa cells on the surfaces were fixed with 4% paraformaldehyde for 10 min and permeabilized with 0.1% Tween 20/PBS for 5 min at room temperature. To visualize the nuclei, the cells were treated with Hoechst 33342 for 1 h. The cell morphology was imaged using a fluorescence microscope (Olympus IX70, Tokyo, Japan). Preparation of Crosslinkable Poly(CL-co-BrBL) as a Starting Material According to our previous report, crosslinkable, branched, cationized PCL-based macromonomers were successfully prepared, and materials with positively charged surfaces were derived from the macromonomers [29]. The quaternized cationic group was introduced by reaction of the hydroxy end groups with bromoacetyl bromide, followed by reaction with N,N-dimethylaminoethyl methacrylate.
In these materials, the cationic groups were localized around the crosslinked points because the quaternized cationic group was derived from the reaction between N,N-dimethylaminoethyl methacrylate and the bromoacetyl groups at the chain ends. Therefore, in this study, we intended to prepare another type of positively charged material in which the cationic groups would be distributed evenly on the material surface. To achieve this, copolymers containing CL and BrBL were synthesized by a previously reported method [30], because the introduction of cationic groups was expected to be easily achieved through the reaction with the bromo groups. Figure 3 shows the 1H-NMR spectra of the starting poly(CL-co-BrBL) and its macromonomer. In the figure, specific peaks derived from CL and BrBL (upper spectrum in Figure 3) and additional peaks based on the methacryl groups of the poly(CL-co-BrBL) macromonomer are recognized. These results suggest the successful preparation of the objective materials.
Table 1 summarizes the preparation results and thermal properties of the branched macromonomers composed of CL and BrBL copolymers. The composition ratios of BrBL were calculated from the 1H-NMR spectra, and the thermal properties were studied through DSC. The results indicated that the composition of BrBL in the copolymers was about one-tenth of the feed ratio, which could be due to the relatively stable structure of the ring-opened BrBL. The literature dealing with the copolymerization of CL and BrBL reported that a larger ratio was obtained under the same conditions [30]. The reason for the lower ratio of introduced BrBL obtained here is unclear; however, a larger introduced BrBL ratio was obtained by increasing the molar ratio of BrBL in the copolymerization, as indicated in Table 1. The four-branched CL-co-BrBL did not show a peak in the DSC chart, indicating that its melting point was outside the measured temperature range. Its lower melting point was attributed to the branched structure and to the disruption of polymer crystal formation by copolymerization. Therefore, we considered that the four-branched structure may significantly decrease the melting point; thus, direct surface modification was studied only with the two-branched structure. Figure 4 shows the shape memory behavior of the 2-Poly(CL-co-BrBL) film using a strip sample; it memorized the expansional deformation and recovered its original shape upon heating above the softening point. Its recovery ratio (R%) was estimated by the size measurement method [31]. The value was almost 100%, indicating that this sample kept its bulk properties even through the deformation process.
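For reference, a recovery ratio obtained from size measurements is commonly computed from the programmed and residual deformation; the short sketch below assumes this standard definition (it is not taken from ref. [31]), and the lengths and the function name are illustrative placeholders.

```python
# Minimal sketch: shape recovery ratio from length measurements, assuming the
# common definition R = (programmed deformation - residual deformation) / programmed deformation * 100.
# The lengths below are placeholders, not measurements from this study.
def recovery_ratio(l_original_mm: float, l_deformed_mm: float, l_recovered_mm: float) -> float:
    """Percentage of the programmed deformation that is recovered on heating."""
    programmed = l_deformed_mm - l_original_mm      # fixed (programmed) deformation
    unrecovered = l_recovered_mm - l_original_mm    # residual deformation after recovery
    return 100.0 * (programmed - unrecovered) / programmed

# Example: a 50 mm strip stretched to 65 mm that relaxes back to 50.2 mm on heating.
print(f"R = {recovery_ratio(50.0, 65.0, 50.2):.1f}%")  # ~98.7%
```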
This sample was not used for the macromonomer preparation. Influence on Bulk Properties Based on Reaction Conditions To obtain surface-modified shape memory materials, the surface was modified to acquire a positive charge: the brominated materials were soaked in a DMEDA solution in n-hexane for 1, 3, 6, and 12 h. In a preliminary experiment, benzylamine (BA) and trimethylamine (TMA) were tested to investigate the suitable amine structure. BA could react with the Br groups, but TMA could not; the steric hindrance of TMA might obstruct the reaction. From this result, DMEDA, which contains both a primary and a tertiary amine, was selected for the reaction. Furthermore, several organic solvents, such as swelling tetrahydrofuran and nonswelling methanol, were tested for preparing the amine solution. Swelling of the materials allows the reaction to proceed not only at the surface but also in the bulk phase and changes the thermal properties: the use of tetrahydrofuran caused swelling of the materials and changed the softening point and the enthalpy change. Alternatively, methanol did not cause material swelling; however, methanolysis unexpectedly occurred, and the material surface was eroded [32]. Therefore, we selected n-hexane as a nonswelling and nonerosive solvent for direct modification of the surface. In this experiment, solutions containing 10, 30, and 50 vol% of DMEDA in n-hexane were used. Figure 5 shows an image of the samples soaked in the reaction solutions. As seen in the figure, the sample size became larger in 30 vol% of the amine compound, and the sample swelled in 50 vol%; for example, the sample height became almost 1.3-fold larger under the 30 vol% condition and 1.8-fold larger under the 50 vol% condition. This result suggests that the reaction proceeded in the bulk phase, and consequently a change of the softening point was a concern. That is to say, a high amine concentration enhanced material swelling, whereas at a low concentration the amino compounds could effectively react with the material surface. Thus, the reaction was conducted under the low-concentration condition with 10 vol% DMEDA. Figure 6 shows the DSC charts of the samples with different reaction times of 1, 3, 6, and 12 h, and the results are summarized in Table 2.
The Tm and ΔH values of each sample were almost equal because the unexpected reaction in the bulk phase could be avoided at the low amine concentration. However, the peak shape became broader with increasing reaction time, which suggested that a short reaction time is preferable for maintaining the inherent thermal properties. Evaluation of Surface Modification of Poly(CL-co-BrBL)-Based Materials To confirm the surface modification, contact angle measurements were conducted. Table 3 shows the contact angles of water droplets on the material surfaces, and an actual image is shown in Figure 7. As expected, the reaction with the amine compound led to enhanced hydrophilicity; the more highly charged surface interacted more strongly with water and showed a lower contact angle value. In a previous study, the anionic dye adsorption test was found to be an easy and useful way to evaluate the positive charge of a material surface [29]. Therefore, the same experiment was conducted here, and the observed luminescence is shown in Figure 8a. Interestingly, the samples Am10-1 h and Am10-3 h showed larger intensities compared with the unreacted sample. These results indicate that the surface became positively charged and promoted the interaction with the anionic compound. Furthermore, we preliminarily tried to pattern the positive charge using a masking method.
In Figure 8b, the positively charged surface was obtained only in the area without masking, which is promising for designing patterned material surfaces. The zeta potentials of the samples before and after the reaction (Am10-1 h was used) were measured to be −5.6 ± 0.6 and 20.1 ± 2.1 mV, respectively; these values also indicate the successful reaction with DMEDA. Finally, the cell affinity of the prepared materials was investigated preliminarily. Figure 9 shows images of HeLa cells adhered to the surfaces of the starting and amine-modified materials. The number of cells adhered to the modified samples, as counted from the images, was almost two times larger than that on the starting material, which indicates that the materials prepared in this study can improve cell affinity. A quantitative study is now in progress. Consequently, these materials can have practical applications in mechanobiology, specifically for enhancing cell-material interactions. Conclusions To impart cell affinity to a shape memory material surface while maintaining the material's thermal properties, direct modification was studied. The macromonomer derived from the copolymer poly(CL-co-BrBL)-MA was used as the starting material to prepare the shape memory material. After the crosslinking reaction, stable materials were obtained and subjected to direct surface modification. The use of a nonswelling solvent and the optimization of the amine concentration and reaction time allowed the reaction to proceed only on the surface while preventing unexpected reactions in the bulk phase.
The results of the DSC, contact angle, and zeta potential measurements and the anionic dye adsorption tests suggested that a successful reaction occurred and that a positively charged material surface was formed. Moreover, the cell adhesion study proved that the material was able to maintain cell affinity. Consequently, the direct modification investigated in this study can be considered for the introduction of cell affinity, and the as-prepared materials can be applied in mechanobiology to explore cell-material interactions.
7,133
2021-10-01T00:00:00.000
[ "Materials Science" ]
Accurate and real-time object detection in crowded indoor spaces based on the fusion of DBSCAN algorithm and improved YOLOv4-tiny network: Real-time object detection is an integral part of internet of things (IoT) applications and an important research field of computer vision. Existing lightweight algorithms cannot handle target occlusions well in target detection tasks in narrow indoor scenes, resulting in a large number of missed detections and misclassifications. To this end, an accurate real-time multi-scale detection method that integrates the density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm and an improved You Only Look Once (YOLO)-v4-tiny network is proposed. First, by improving the neck network of the YOLOv4-tiny model, the detailed information of the shallow network is utilized to boost the average precision of the model in identifying dense small objects, and the Cross mini-Batch Normalization strategy is adopted to improve the accuracy of statistical information. Second, the DBSCAN clustering algorithm is fused with the modified network to achieve better clustering effects. Finally, the Mosaic data enrichment technique is adopted during the model training process to improve the capability of the model to recognize occluded targets. Experimental results show that compared to the original YOLOv4-tiny algorithm, the mAP values of the improved algorithm on the self-constructed dataset are significantly improved, and the processing speed meets the requirements of real-time applications on embedded devices. The performance of the proposed model on the public datasets PASCAL VOC07 and PASCAL VOC12 is also better than that of other advanced lightweight algorithms, and the detection ability for occluded objects is significantly improved, which meets the requirements of mobile terminals for real-time detection in crowded indoor environments. Introduction Dense object detection is one of the special application tasks of computer vision [1]. It has a wide range of application value in fields such as the control and identification of suspicious objects [2], intelligent collection of human traffic statistics [3], and abnormal behavior detection [4]. A crowded indoor space is a single-background scene that is less influenced by factors such as illumination, angle, and internal structure. The difficulty lies in the fact that, when faced with factors such as mutual occlusion between multiple objects and incomplete camera framing, traditional lightweight algorithms produce a large number of missed detections and false alarms, so it is of great research significance to develop a lightweight detection algorithm that can accurately and quickly detect occluded small targets within crowded indoor spaces in the internet of things (IoT) environment [5]. Taking the elevator car detection application as an example, in the field of elevator safety management, the information provided by elevator monitoring video is often used as the basis to judge the situation in the car and take corresponding measures. Current elevators generally have video acquisition devices on the top of the car, but these only provide real-time observation, video storage, and evidence collection and do not have functions such as timely warning and real-time monitoring.
Although elevator managers can observe the video images and take timely measures after discovering forbidden objects, managers cannot watch the surveillance video all the time, which may cause omissions in their work. Therefore, it is necessary to compensate for the shortage of human monitoring through image acquisition equipment, processors, and target detection algorithms, that is, to design an intelligent video monitoring system that can detect prohibited objects in elevators throughout the day and realize the detection of prohibited target intrusions based on elevator monitoring video [6]. By alerting management personnel in a timely manner, the workload of the relevant personnel is reduced and elevator riding safety is ensured. Traditional object detection methods rely on manual feature extraction and complete detection tasks through classifiers such as the support vector machine (SVM) [7]. The recognition process of such methods is complex and highly subjective, and it is not sensitive to occluded salient regions, which leads to poor performance in terms of detection accuracy, processing speed, and robustness. Recently, with the continuous progress of convolutional neural networks (CNNs), the concept of an end-to-end framework has been applied to target detection algorithms, such as the single shot multibox detector (SSD) [8] and the You Only Look Once (YOLO) series [9]. These algorithms combine the classification process and the regression network into a single stage, which significantly improves the trade-off between detection accuracy and speed and makes them suitable for deployment on mobile terminals for target detection in crowded indoor spaces. Existing algorithms such as the YOLOv4-tiny network [10] achieve good detection performance in dense scenes, but they also have the following shortcomings: 1) The backbone network is too lightweight, and the contour evolution of the feature map is insufficient during the layer-by-layer transfer process, making it impossible to effectively learn enough features of the occluded targets during training; 2) The neck network structure is too simple, with low efficiency when fusing feature maps of different sizes, and it is prone to losing detailed edge information; 3) The traditional K-means clustering has limitations in the post-processing stage, which can easily lead to missed detections. The main contributions of the proposed method are as follows: 1) An adaptive multi-scale detection algorithm is proposed, taking crowded indoor spaces such as elevator cars, bus interiors, and passenger aircraft cabins as the main research scenarios. Based on the YOLOv4-tiny model, which is suitable for embedded hardware platforms, the neck network of the model is improved to increase the recognition accuracy for small targets. The Cross mini-Batch Normalization (CmBN) technique is used to replace the batch normalization (BN) technique of the original YOLOv4-tiny model to ensure the estimation accuracy of statistical information and reduce estimation errors. The Mosaic data enrichment strategy is introduced in the training phase to improve the utilization of feature maps of different scales and alleviate the loss of edge information; 2) An improved clustering method combining the DBSCAN and K-means algorithms is proposed. First, initial clustering is performed by the DBSCAN clustering algorithm, and then multi-scale clustering is added to further obtain the local and overall information of the targets.
After the multi-scale clustering and convolution operations, the initial feature map is obtained. Afterward, the accurate center point position is derived with the K-means algorithm, which effectively accelerates convergence and improves the classification accuracy for dense small targets. As a result, the classification of objects in different datasets and complex environments is improved, and the resistance to noise and interference is also improved. The remainder of this article is organized as follows. Section 2 introduces the related research, and Section 3 explains the background knowledge of YOLOv4-tiny. Section 4 describes the proposed method in detail, including the improved YOLOv4-tiny network structure and the enhanced clustering algorithm for bounding box determination. In Section 5, the experimental results and discussion are presented. Finally, Section 6 summarizes the full text. Related research Before the rise of the deep learning (DL) methodology, the traditional object recognition techniques proposed in the early days mainly utilized relatively obvious features, including image corners, textures, and contours, and used feature descriptors such as the histogram of oriented gradients [11], scale-invariant feature transform [12], speeded-up robust features [13], Haar-like features [14], and local binary patterns [15]; the recognition and classification of the extracted object features were performed with template matching [16], boosting [17], SVM, and other methods. However, the traditional recognition algorithms have shortcomings such as relying on manual experience for feature selection and poor robustness in complex application scenarios. As hardware equipment and DL technology have evolved, DL-based frameworks have begun to emerge in the field of object detection. Classic DL models include CNNs, deep belief networks, deep residual networks, and autoencoders (AE) [18]. LeCun et al. [19] first applied artificial neural networks to recognize handwritten characters, and the LeNet [20] proposed later marked the beginning of research on deep convolutional networks in the area of object recognition. Later, the AlexNet suggested by Krizhevsky et al. [21] achieved the best result, an 11% false recognition rate, in the visual object classes (VOC) challenge, which greatly improved the practicability of neural networks in the area of target recognition. The VGGNet [22] proposed by Oxford University further deepened the neural network and extracts deeper abstract features of the image. GoogLeNet [23], developed by Google, improves the computational efficiency of neural networks, and new versions including Inception-v2, Inception-v3, Inception-v4, and Xception [24] were subsequently proposed. The ResNet [25] developed by He et al. from Microsoft Research is a 152-layer deep neural network. The model won the ImageNet Large Scale Visual Recognition Challenge 2015 competition with a significantly lower error rate and a reduced number of parameters compared to those of the VGGNet. In addition, there are also some studies on shallow networks and unsupervised DL models. For example, Chen et al. [26] proposed to use an unsupervised sparse AE network to learn from randomly sampled image patches and finally train a softmax classifier to classify the objects. In 2014, Girshick et al.
[27] proposed the region-based CNN (R-CNN), in which a selective search algorithm is used to obtain about 2,000 candidate bounding boxes on which convolution operations are then performed; this reduces redundant convolution operations and realizes multi-target recognition for a single image. Since then, CNNs have gained considerable attention in the area of object recognition. On the basis of the classic networks, by improving the way candidate regions are obtained and by modifying the network structure, two types of object detection methods have been developed, namely the candidate region-based (also known as two-stage) algorithms and the regression-based (also known as one-stage) object detection algorithms. In general, the candidate region-based algorithms have better recognition performance, while the regression-based algorithms have faster processing speed [28]. The popular candidate region-based object detection models include SPP-Net [26], Fast R-CNN [29], Faster R-CNN [30], and Mask R-CNN [31]. These algorithms mainly involve two tasks: candidate region acquisition and candidate region identification. SPP-Net reduces the computing load of the network through spatial pyramid pooling layers. Fast R-CNN and Faster R-CNN improve the network in different aspects to increase the detection speed. The key improvement of Faster R-CNN is to use the region proposal network (RPN) instead of the selective search algorithm, which further improves the detection speed. RPN is currently the most accurate candidate bounding box localization method, so Faster R-CNN has very high localization accuracy. On the basis of Faster R-CNN, researchers have proposed Mask R-CNN, RetinaNet [32], etc. The regression-based (one-stage) object detection algorithms seek a compromise between recognition speed and recognition accuracy; the selection of candidate bounding boxes, feature extraction, object classification, and bounding box prediction are all regressed into the network, and the location and category of the target are obtained directly at the output layer, so that the recognition speed improves significantly. The regression-based object recognition algorithms mainly include YOLO [9], SSD [8], and EfficientNet [33]. These are methods with fast processing speed and good detection accuracy. In 2016, Redmon et al. proposed the YOLO network [9], which no longer uses the region proposal strategy; instead, the entire input image is divided into multiple grids, and each grid is responsible for predicting the objects whose centers fall within it. End-to-end prediction can be achieved by running a single CNN operation on an image, which significantly accelerates object recognition. YOLO also has some shortcomings: its positioning accuracy is lower than that of the candidate region-based algorithms, it cannot identify small targets well, and its generalization performance is poor. Liu et al. proposed SSD [8] at the European Conference on Computer Vision 2016, in which an RPN-like anchor structure is used to improve on the YOLO network. By mapping objects at different scales through different convolutional layers, the detection accuracy for small targets is enhanced, while retaining the merits of fast computing speed from YOLO and high accuracy from Faster R-CNN. YOLO is superior to the SSD algorithm in terms of detection speed, but it has a problem of missed detections for mutually occluded targets.
In addition, researchers have developed simplified versions such as YOLOv3-tiny [34] and YOLOv4-tiny [10] and other lightweight models based on the original YOLO networks. These models are relatively simple and efficient with fewer parameters, thus significantly reducing storage and computing requirements. Among them, YOLOv4-tiny is significantly better than other lightweight models in terms of training time and detection speed, which makes it applicable to embedded devices. YOLOv4-tiny In the YOLO series, region classification proposals are integrated into a single neural network for the prediction of bounding boxes and classification probabilities; the input image is partitioned into S × S grid cells, and detection is performed in a single evaluation stage. To improve accuracy, the YOLOv2 model [35] introduces BN and direct position prediction strategies and replaces fully connected layers with convolutional layers to accelerate the training and detection processes. The YOLOv3 model [34] uses darknet-53 as the backbone, which uses 53 convolutional layers for feature extraction. The improved CSPDarknet-53 is used as the backbone in YOLOv4 [36], where feature extraction and connection are split into two parts using cross-stage partial (CSP) connections. YOLOv4-tiny [10], a lightweight version of the YOLOv4 model, uses CSPDarkNet53-tiny as its backbone network; the network structure of YOLOv4-tiny is shown in Figure 1. CSPDarkNet53-tiny uses three simplified CSP blocks in which the residual modules of the CSP network are removed. In order to further reduce computational complexity, the Mish activation function is replaced by the leaky rectified linear unit, and a feature pyramid network is used to extract two feature maps of different sizes to predict the recognition results, reducing the number of model parameters and the computational load and thereby promoting application in embedded systems and mobile devices. However, due to the simpler network structure, the detection performance of YOLOv4-tiny is also significantly lower than that of the YOLOv4 network, especially for small targets. Proposed method Based on the original YOLOv4-tiny model, this study designs a dense object recognition model for indoor spaces. The YOLOv4-tiny model has a simplified structure and fast inference speed and is suitable for embedded hardware platforms. However, the recognition accuracy of the model for small targets and overlapping targets within dense scenes needs to be improved, so further optimization is required. Improvement of the network structure The backbone network of the YOLOv4-tiny model contains three CSP modules, namely CSP1, CSP2, and CSP3. Among them, the CSP2 layer contains precise location information and more detailed information but less semantic information. The CSP3 layer contains a larger amount of semantic information but less detailed information; its location information is relatively coarse, and the position and detail information of many small targets can be lost. To promote the recognition precision of the YOLOv4-tiny model for densely crowded targets, we propose an improved YOLOv4-tiny model, the detailed structure of which is shown in Figure 2. In the figure, Conv denotes a convolutional operation, Leaky is the Leaky ReLU activation function, and Maxpool represents the max pooling operation.
From Figure 2, it can be seen that the proposed model adds a path connected to the CSP2 layer of the backbone network in the neck network of the original YOLOv4-tiny model, and the detection scales are expanded from two to three. The outputs of the CSP2 layer and the upsampling (UP) layer are concatenated along the channel dimension, so that the feature maps from the two different layers are fused, and the result is then passed through two CBL modules to obtain a feature map downsampled by a factor of eight relative to the resolution of the input image. For an input image with a resolution of 416 × 416 pixels, the improved YOLOv4-tiny model outputs feature maps at three scales with resolutions of 13 × 13, 26 × 26, and 52 × 52 pixels, respectively. Since occluded objects have small salient areas, the original YOLOv4-tiny network is prone to losing a lot of edge detail information. In the proposed model, a large-scale feature map optimization technique is introduced into the improved network, which enables the network to capture more detailed image information. The output feature map scales are thus extended from 13 × 13 and 26 × 26 to 13 × 13, 26 × 26, and 52 × 52, thereby enhancing the learning capability of the shallow network and reducing the information loss of the shallow layers during training. CmBN strategy In object detection tasks, due to the limited memory capacity of graphics cards, the BN strategy is inaccurate in estimating statistical information, which can easily increase model errors. This drawback is more pronounced in resource-constrained embedded devices and can significantly degrade model performance. To this end, the CmBN strategy is used to replace the BN strategy of the original YOLOv4-tiny model. During model training, each batch of image data in the training set is evenly divided into several mini-batches and passed to the model for forward propagation. The weights of the model do not change until a single iteration is completed, so the statistics of different mini-batches in the same batch can be directly accumulated. The execution process of the CmBN strategy is as follows: 1) When calculating the mean and standard deviation of the ith mini-batch of data, the output information of the convolutional layers for the (i − 1)th mini-batch of data in the training set is combined; 2) The output of the convolutional layer for the ith mini-batch of data is transformed into a normal distribution with a mean of 0 and a variance of 1; 3) Using learnable parameters, the normalized output of the convolutional layers for the ith mini-batch is linearly transformed to enhance its expressive ability. In the forward pass of the CmBN strategy, l denotes the index of the convolutional layer and j denotes the index of the mini-batch; m represents the size of the mini-batch, i is the index of a sample within the mini-batch, and τ is the offset of the mini-batch index. $x_{i,l}^{j}$ is the output at the lth convolutional layer for the ith sample of the jth mini-batch. $\bar{\varsigma}_{l}^{j}$ and $\bar{\varsigma}_{l}^{j-\tau}$ are the mean values of the output at the lth convolutional layer for the jth and (j − τ)th mini-batches, respectively. $v_{l}^{j}$ is the sum of squares of the mean output at the lth convolutional layer for the jth mini-batch. $\phi_{l}^{j}$ is the standard deviation of the output at the lth convolutional layer for the jth mini-batch.
δ is a constant added to increase numerical stability. x̂^j_{i,l} is the normalized output at the lth convolutional layer for the ith sample of the jth mini-batch. γ and β are learnable scaling and translation parameters, respectively. y^j_{i,l} is the result obtained after performing CmBN on the output of the lth convolutional layer for the ith sample of the jth mini-batch. Target bounding box prediction In the majority of DL-based target detection algorithms, the convolutional layers generally only acquire the features of the targets, and then pass the acquired features to a classifier or regressor for result prediction. The YOLO series of algorithms instead uses a 1 × 1 convolution kernel to complete the target prediction, so that the spatial dimension of the obtained prediction map is equal to that of the feature map. Each grid cell in the prediction map predicts b = 3 target bounding boxes in YOLOv4-tiny, so the prediction map contains b × (5 + C) values per grid cell, where C is the total number of categories and b represents the number of bounding boxes obtained; different targets have their own unique bounding boxes. The parameters of each bounding box consist of the center coordinates, the size, the target score, and the confidences of the C classes. The relationship between the predicted bounding boxes and the network output is that the network predicts offsets t_x and t_y of the target center relative to its grid cell, and the values of t_x and t_y are constrained to the [0, 1] interval by the sigmoid function. The YOLOv4-tiny algorithm consists of two parts, training and testing. During the training stage, a large amount of data needs to be fed into the model, and during the prediction stage, the candidate bounding boxes are used to determine whether any target falls into a candidate box. If a target falls within a bounding box, its class probability is evaluated accordingly. In the process of object detection, images vary in size and content. In order to determine the initial positions of the candidate bounding boxes in YOLOv4-tiny, the K-means clustering algorithm is utilized. As an unsupervised learning method, the K-means clustering algorithm specifies K centers and groups the surrounding samples around the center closest to them. Through repeated iterations, the cluster center values are updated. When the intra-class difference is small and the between-class difference is large, the desired effect is achieved, and the measurement of the "difference" is calculated based on the distance from a sample point to the centroid of the class to which it belongs. The Euclidean distance is generally used to measure this difference, i.e. the distance d(x_i, μ) = ‖x_i − μ‖ between a sample point and its class centroid, where x is a sample point within the class, μ is the centroid of the class, n is the number of samples in each class, and i is the index of each data point. However, the appropriate selection of the K value in the K-means clustering algorithm is often difficult, and it directly affects the quality of the clustering. At the same time, the clustering results of the K-means method can in general only guarantee a locally optimal clustering, and the method is sensitive to noise interference. In addition, the K-means clustering algorithm struggles to achieve an ideal clustering effect for non-convex data and data with large differences in size.
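As an illustration of this baseline, the following minimal sketch clusters (width, height) pairs with plain K-means under the Euclidean distance described above; the function name, the value of K, and the random test data are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Plain K-means on (width, height) pairs with Euclidean distance,
    as used to initialise candidate bounding boxes. boxes: shape (N, 2)."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the nearest centroid (Euclidean distance).
        d = np.linalg.norm(boxes[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        new_centroids = np.array([
            boxes[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Example: cluster randomly generated (w, h) pairs into k = 3 anchor shapes.
boxes = np.random.default_rng(1).uniform(10, 200, size=(400, 2))
anchors, _ = kmeans_anchors(boxes, k=3)
print(np.round(anchors, 1))
```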
In practice, the number of targets and the number of object categories in an image are generally unknown, and the objects may also be distributed in a very scattered way. Therefore, if the clustering is performed according to a prescribed K value, the distance between the obtained center points and the actual positions may be too large. In contrast, the DBSCAN clustering algorithm has no bias regarding the shape of the clusters and does not need the number of clusters as an input; only the neighborhood radius of the targets needs to be set. In addition, the impact of a center point lying too far from the actual location can be alleviated by performing clustering based on the density-reachability concept. Therefore, this article proposes a DB-K clustering method that combines the DBSCAN and K-means algorithms. First, through the DBSCAN clustering algorithm, a good initial clustering is obtained without specifying center points, and thus several complete clusters are obtained. Then multi-scale clustering is added to further obtain the local and overall information of the object; specifically, multi-scale clustering is performed on the contour information of the object. After multi-scale clustering and convolution operations, the initial feature maps can be obtained. Finally, these clusters are used as input data, and the K-means algorithm is used to divide the clusters so that accurate center point positions can be obtained. This method can effectively speed up the convergence on the dataset and improve the classification accuracy for small objects. In the DBSCAN clustering algorithm, the closeness of the sample distribution in a neighborhood is described by the parameters (ε, MinPts). In the original YOLOv4-tiny, the seed points of the K-means clustering algorithm are chosen randomly, which increases the randomness of the clustering and leads to a poor clustering effect. This article proposes a strategy to reduce the randomness of seed point selection. First, the K value is obtained through the error sum of squares method, as shown in equation (11), and then the first cluster centers are obtained through the K-means algorithm. Afterward, the obtained K clusters are analyzed, and the closest clusters are merged, thereby reducing the number of cluster centers; the corresponding number of clusters also decreases when the next clustering is performed, yielding an ideal number of clusters. After several iterations, when the convergence of the evaluation function reaches the expected value, the best clustering effect is obtained. In the experiment, 400 random points are selected and tested with the K-means clustering algorithm and the proposed DB-K hybrid clustering algorithm, respectively. In the K-means clustering algorithm, the K value is specified as 3, and the experimental results are shown in Figure 4(a). In the proposed DB-K clustering algorithm, the value of K, determined automatically by the DBSCAN stage, is also 3, and the experimental results are shown in Figure 4(b). It can be clearly seen from the experimental results that the method proposed in this article has a better clustering effect. Therefore, by integrating the DBSCAN clustering algorithm and improving the traditional K-means clustering algorithm, the classification performance of object detection on different datasets and in complex environments is enhanced, and the robustness to noise and interference is also improved.
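The following is a minimal sketch of the DB-K idea described above: DBSCAN first proposes the number of clusters and rough groupings without a preset K, and K-means then refines the cluster centers. The parameter values (eps, min_samples) and the random test points are illustrative assumptions, not values reported in the paper, and the multi-scale contour clustering step is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def db_k_clustering(points, eps=8.0, min_samples=10, seed=0):
    """Sketch of DB-K: DBSCAN determines K and initial clusters,
    then K-means refines the cluster centres. points: shape (N, 2)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
    cluster_ids = [c for c in set(labels) if c != -1]   # -1 marks noise points
    k = max(len(cluster_ids), 1)
    if cluster_ids:
        # Initialise K-means with the centroids of the DBSCAN clusters.
        init = np.array([points[labels == c].mean(axis=0) for c in cluster_ids])
    else:
        init = "k-means++"
    km = KMeans(n_clusters=k, init=init, n_init=1, random_state=seed).fit(points)
    return km.cluster_centers_, km.labels_

# Example on 400 random points, mirroring the experiment described above.
pts = np.random.default_rng(2).uniform(0, 100, size=(400, 2))
centres, _ = db_k_clustering(pts)
print(len(centres), "clusters found")
```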
With the proposed DB-K clustering algorithm, the performance of the neural network in object recognition is significantly improved. Experiment results and analysis The experiments in this article are conducted in a Linux environment, and the system is configured with Ubuntu 18.04, compute unified device architecture (CUDA) 11.0, and the CUDA deep neural network library (CUDNN) 8.0. The hardware platform is equipped with an 8-core Intel 10400F CPU and a GTX 960 graphics card, with 16 GB RAM. During the experiments, the GPU is used to speed up the training process. The self-made dataset is randomly split into training and testing sets in a ratio of 7:3, and multiple groups of ablation experiments are set up to verify the effect of each improvement strategy on the model, thus obtaining the optimal model. In order to further validate the performance of the proposed algorithm, a comparative experiment with the most commonly used lightweight algorithms was conducted on the PASCAL VOC07 + 12 [37,38] public dataset, and the performance of different algorithms was compared in terms of mean average precision (mAP) and detection speed (FPS). Experiment datasets In order to analyze the recognition performance of the proposed lightweight model in crowded indoor spaces with occluded objects, two datasets were used: the PASCAL VOC07 + 12 public dataset, which includes 16,551 training images and 4,952 testing images, and a self-made dataset. The self-made dataset involves four different scenarios, with 10,432 images captured from elevator cars and 9,139 images captured from bus carriages, passenger aircraft cabins, and natural scenes. The training, validation, and test sets are partitioned in a ratio of 7:2:1. Among them, 70% of the pictures have the characteristics of mutual occlusion or incomplete camera framing. In this article, such pictures are defined as complex images in the experiments; they effectively prevent the overfitting caused by the single background of dense spaces during training and improve the generalization capability of the model in crowded indoor scenarios. The dataset defines three detection categories: person, electric-bicycle, and bicycle. Baby carriages, trolleys, furniture, other goods, and pets are used as negative samples to enhance the reliability of the model. The LabelImg tool designed by Tzutalin [39] is used to mark the labeling area and picture area of each picture according to a certain proportion, and the aspect ratio of the pictures is limited to within 3:1 so that the predicted bounding boxes fit the targets more closely. The statistics of the self-built dataset are shown in Table 1. Objects in dense indoor scenes such as elevators and carriages are prone to mutual overlap and occlusion, which seriously affects the accuracy of target recognition. Aiming at the aforementioned problems, this study introduces the Mosaic data augmentation technique [40] during model training to improve the model's capability to recognize occluded objects. Example images after the Mosaic process are shown in Figure 5. Before each iteration, the DL framework not only acquires images from the training set, but also generates new images through Mosaic data augmentation, and then the newly generated images and the original images are combined and fed into the model for training.
Mosaic data augmentation randomly selects four images from the training set, randomly crops the four selected images, and splices the cropped images in sequence to obtain a new image; the generated image has the same resolution as the training set images. During random cropping, part of the bounding box of a target in the training set image may be cropped, which simulates the effect of the object being occluded. In addition, this study optimizes Mosaic data augmentation and proposes an improved Mosaic data augmentation, which uses the intersection-over-union (IoU) as an indicator and sets a threshold, chosen according to the standards used when calibrating the dataset, to filter the object bounding boxes in the newly generated image. The improved Mosaic strategy generates a new image according to the steps of Mosaic data augmentation and then filters the object bounding boxes in the newly generated image. If the IoU between an object bounding box in the new image and the corresponding box in the original image is less than the threshold, the object bounding box in the new image is deleted and it is considered that there is no object there. Otherwise, the object bounding box in the newly generated image is retained, and it is considered that there is a target there. Evaluation metric The inference speed (ms/frame) is used as the evaluation metric for model recognition speed; the inference speed is the time required for the model to recognize one image. The average precision (AP) is used as the evaluation metric for model recognition accuracy. AP is calculated based on the precision (P) and recall (R) of the model. P, R, AP, and mAP are calculated as P = TP/(TP + FP), R = TP/(TP + FN), AP = ∫₀¹ P(R) dR, and mAP = (1/C) Σ_c AP_c, where TP is the number of correctly detected positive samples, TN is the number of correctly detected negative samples, FP is the number of negative samples falsely detected as positive, and FN is the number of positive samples falsely detected as negative. P represents the percentage of correctly detected positive samples among all detected positive samples. R represents the percentage of correctly detected positive samples among all ground-truth positive samples. In the final evaluation, AP represents the comprehensive evaluation of a certain category, and the greater the AP value, the better the accuracy of that single category. mAP evaluates the entire network across categories, C is the number of classes contained in the whole dataset, and c denotes a single class. Model training parameters The model parameters in the experiments are set as follows: the input image is 416 × 416; the number of epochs is set to 300; the batch_size is 128 for the first 70 rounds and 32 for the last 230 rounds. The learning rate is 1 × 10⁻³ for the first 70 rounds and 1 × 10⁻⁴ for the last 230 rounds. We employed a stochastic gradient descent optimization strategy for model training, decaying the learning rate as training progressed in order to improve convergence, and set the momentum parameter to 0.9 to accelerate the convergence of the optimization process. The loss curves during training are shown in Figure 6. As the number of epochs increases, the loss values of both models continue to decrease. After 70 rounds of training, the loss curves tend to be stable, and there is no underfitting or overfitting.
The loss values of the original YOLOv4-tiny algorithm and the improved algorithm converge to around 2.3 and 1.9, respectively, which indicates that the recognition accuracy of the model keeps improving and that the hyperparameters of the proposed algorithm are set reasonably. Comparison with the original YOLOv4-tiny The test results of the original YOLOv4-tiny and the proposed model on the self-built dataset are shown in Table 2, in which the mAP, inference speed, and model size comparisons are given. Among them, complex images account for 70% of the dataset and correspond to the situation where the targets are occluded by each other or the camera view is incomplete. The mAP values of the original YOLOv4-tiny model for recognizing complex images and all images in the test dataset are 75.47 and 84.25%, respectively, and those of the proposed model are 7.94 and 8.32% higher than the original model, respectively. The neck network of the improved YOLOv4-tiny model reduces the loss of low-level feature map details and location information from the backbone network. The newly added feature maps have a smaller receptive field for detecting blurred and occluded targets in images and improve the model's ability to detect small targets. In addition, the DB-K clustering algorithm is used to further improve the classification effect. Compared with the original YOLOv4-tiny, the model size of the proposed model is increased by 2 MB, and the inference time per image is increased by 0.37 ms. The improved model only adds three convolutional layers, one upsampling operation, and one concatenation operation to the neck network, and the backbone network of the model does not change. Therefore, the improved YOLOv4-tiny model maintains a fast inference speed while improving the detection accuracy. The mAP curves of the proposed model and the original YOLOv4-tiny on the PASCAL VOC dataset are shown in Figure 7. From the figures, it can be observed that with the proposed model, the mAP for recognizing objects of various scales and different categories is improved significantly. For example, compared with the original model, when using the proposed model, the APs for categories containing a large number of small targets, such as potted plant, boat, and bird, are increased by 7, 4, and 3%, respectively. The results validate that the proposed algorithm has a better detection effect when dealing with scenes containing many occluded targets and small targets, which further demonstrates the superiority of the proposed algorithm. Ablation study The ablation analysis is performed based on the original YOLOv4-tiny model combined with different improvement strategies, and the training and performance evaluation are carried out on the self-built dataset containing four types of scenes, to validate the contribution of each module to the improvement of recognition accuracy under the precondition of guaranteeing real-time performance. The test results of different module combinations are shown in Table 3, all of which are trained using the proposed improved Mosaic strategy. The effectiveness of each module is analyzed in the table, and it can be seen that each improved module contributes to the overall performance to a different degree. Among them, the introduction of DBSCAN in Model 4 contributes the most to the network, and the mAP increases by 4.99%, which shows that the proposed clustering algorithm significantly improves the classification effect in complex environments while increasing the robustness to noise and interference.
The mAP value of the original YOLOv4-tiny (model 1) is 84.25%, and the mAP value after using the neck network optimization (model 2) is 86.77%, an increase of 2.52%, indicating that the introduction of this optimization module enables the model to strengthen the extraction of detailed information from shallow layers, which makes the training for occluded objects more thorough. Model 3 replaces BN with CmBN on the basis of model 2. BN can only use the output features from the convolutional layers of the current mini-batch, so the statistical information is not accurate enough, resulting in poorer performance. CmBN, on the other hand, realizes an expansion of the samples by accumulating information from different mini-batches and makes the estimation of the statistical information more accurate. The proposed model is further compared with Faster RCNN [30], YOLOv4 [36], YOLOv3-tiny [34], and YOLOv4-tiny [10]. In addition, the DL frameworks and the trained models are ported to Jetson Nano and Raspberry Pi 4B to test the inference speed of the models on embedded hardware platforms. The results are listed in Table 4. From Table 4, it can be observed that the advantage of large networks is high detection accuracy. For example, the two-stage algorithm Faster RCNN, using ResNet50 as the backbone network, and the classic one-stage algorithm YOLOv4 achieve mAPs of 83.77 and 90.01%, respectively. However, the sizes of these two models are too large, 330 MB and 256 MB, respectively, making such networks difficult to deploy to mobile terminals with limited computing capacity. The performance of the proposed method is comparable to that of the large network Faster RCNN in mAP, only 1.41% behind, and the size of the proposed model is only about 1/13 that of Faster RCNN. Compared with the 64.3 million parameters of the YOLOv4 model, the proposed model has only 6.1 million parameters, which is less than 1/10. The advantage of lightweight networks is that detection speed and accuracy are relatively balanced, and they can perform real-time detection on mobile terminals, but their performance is relatively poor in complex scenes. Compared with the two popular lightweight models YOLOv3-tiny and YOLOv4-tiny, the proposed method has significantly improved mAP performance while satisfying the real-time detection requirements. The proposed model does not introduce much extra computation or memory overhead in the inference process. The model retains the advantages of a simplified structure and fast inference while ensuring a high recognition accuracy. This shows that the proposed model is suitable for deployment on embedded hardware platforms. Conclusion In order to address the poor performance of the original YOLOv4-tiny algorithm in the detection of occluded objects in dense indoor scenes, a modified target detection model is proposed based on the YOLOv4-tiny algorithm, and three feasible improvements are made: 1) The neck network structure of the original YOLOv4-tiny model is modified so that the model can learn more information from the occluded objects; 2) The CmBN strategy is used instead of the BN strategy, and the model error is reduced by accumulating the outputs of the convolutional layers; 3) The DBSCAN clustering algorithm is incorporated into the proposed network and the anchor point coordinates are then determined through the improved K-means clustering, thus further increasing the detection accuracy.
Experimental results show that the mAP values of the proposed algorithm on the PASCAL VOC07 + 12 dataset and the self-built dataset are 92.57 and 82.36%, respectively, which are 8.32 and 6.87% higher than those of the original YOLOv4-tiny model, respectively. This shows that the performance of the proposed algorithm is significantly improved compared with the original YOLOv4-tiny model. The inference speeds of the proposed algorithm on the embedded platforms Jetson Nano and Raspberry Pi are 183 and 2,601 ms/frame, respectively, indicating that the processing speed of the proposed algorithm can satisfy the requirements of different real-time applications, that occluded targets can be detected quickly and accurately, and that the proposed model is suitable for practical target detection in crowded indoor spaces. There are still some areas for improvement of the proposed method. Although the recognition accuracy of the proposed algorithm has been greatly improved, it is limited by the lightweight nature of the backbone network. When performing detection tasks in general scenes, the detection accuracy for small objects is still lower than that of complex networks. In the future, we will continue to optimize the backbone network. In order to reuse local features and strengthen the fusion of global features, we can try to connect the CBM module and the CBL module within the CSP module in a dense connection structure. The CBM modules in different CSP modules can be tensor-spliced after the upsampling operation, which further integrates the feature information of the shallow and deep layers. In addition, an attempt will be made to augment the experimental dataset for small object detection in general scenarios. Funding information: This work was not supported by any funding. Conflict of interest: The author declares that there is no conflict of interest regarding the publication of this article. Data availability statement: The data used to support the findings of this study are included within the article.
Subleading Shape Functions and the Determination of |V_{ub}| It is argued that the dominant subleading shape-function contributions to the endpoint region of the charged-lepton energy spectrum in B → X_u l ν decays can be related in a model-independent way to an integral over the B → X_s γ photon spectrum. The square root of the fraction of B → X_u l ν events with charged-lepton energy above E_0 = 2.2 GeV can be calculated with a residual theoretical uncertainty from subleading shape-function effects that is safely below the 10% level. These effects therefore have a minor impact on the determination of |V_{ub}|. Introduction: One of the most promising strategies for the extraction of the Cabibbo-Kobayashi-Maskawa matrix element |V_ub| relies on the measurement of the inclusive semileptonic B → X_u l ν decay rate in the endpoint region of the charged-lepton energy spectrum, which is inaccessible to decays with a charm hadron in the final state [1]. Nonperturbative effects can be controlled systematically by using a twist expansion [2,3] and soft-collinear factorization theorems [4,5]. At leading order in 1/m_b, bound-state effects are incorporated by a shape function accounting for the "Fermi motion" of the b quark inside the B meson. This function can be determined experimentally from the photon energy spectrum in inclusive radiative B → X_s γ decays [2]. Recently, there have been first discussions of the structure of subleading-twist contributions to the B → X_s γ and B → X_u l ν spectra, which (at tree level) can be parameterized in terms of four subleading shape functions [6]. The phenomenological impact of these functions on the inclusive determination of |V_ub| has been investigated in [7,8]. These authors point out that certain 1/m_b corrections related to chromo-magnetic interactions appear to be enhanced by large numerical coefficients. They conclude that the ignorance about the functional form of the subleading shape functions would lead to a significant theoretical uncertainty in the determination of |V_ub|, which could only be reliably reduced if the lower cut on the lepton energy were taken below the region where Fermi-motion effects are important (i.e., below 2 GeV or so). For a value E_0 = 2.2 GeV, as employed in a recent analysis reported by the CLEO collaboration [1], the resulting uncertainty on |V_ub| was estimated to be at the 15% level [7]. While using simple models the correction was found to be negative, it was argued that the sign of the effect was uncertain in general [8]. In the present note we explore in more detail the origin of the "enhanced" corrections found in these papers. Our main point is that the first moments (but not higher moments) of the subleading shape functions give a large, non-vanishing contribution to the integral over the lepton spectrum even if the lower lepton-energy cut is taken out of the endpoint region. This effect corresponds to a calculable power correction. The hadronic uncertainty inherent in the modeling of subleading shape functions must therefore be estimated with respect to this contribution. When this is done, the remaining theoretical uncertainty is found to be much less than what has been estimated in [7,8]. We show how the effect of the first moments of the subleading shape functions can be isolated and expressed in a model-independent way in terms of the photon energy spectrum measured in B → X_s γ decays.
We then estimate the numerical effect of the residual higher-twist corrections and find their impact on the |V_ub| determination to be small, safely below the level of 10%. Charged-lepton energy spectrum: The quantity of primary interest for the determination of |V_ub| is the normalized fraction F_u(E_0) of B → X_u l ν events with charged-lepton energy above a threshold E_0 chosen so as to kinematically suppress the background from B → X_c l ν decays. When combined with a prediction for the total B → X_u l ν decay rate, knowledge of the function F_u(E_0) allows one to turn a measurement of the branching ratio for B → X_u l ν events with E_l > E_0 into a determination of |V_ub|. In the formal limit where the "energy window" ∆E = M_B/2 − E_0 is such that Λ_QCD ≪ ∆E ≪ m_b, Fermi-motion effects can be neglected, and the function F_u(E_0) can be calculated using the operator product expansion. At tree level, the leading term of the result (2) is proportional to twice the width of the energy window in the parton model. The hadronic parameters λ_1 and λ_2 measure the b-quark kinetic energy and chromo-magnetic interaction inside the B meson. Note that while the leading contribution in (2) is proportional to the width of the energy window, the power corrections are independent of ∆E. As a result, the relative size of the power corrections strongly increases as the energy cut E_0 is raised toward the kinematic endpoint (corresponding to ∆E → 0). Although this simple analysis breaks down as ∆E ∼ Λ̄, it explains that the origin of the large power corrections found in [7,8] is the kinematic suppression of the leading-order term. For realistic values of the energy threshold the quantity ∆E is of order Λ̄, and the operator product expansion must be replaced by the twist expansion [2,3]. At subleading order in 1/m_b the tree-level expression for F_u(E_0) can then be written as a weighted integral over a function that is a combination of the leading and subleading shape functions [7], with higher-order terms in the expansion denoted by dots. The function F_s(ω) defined by the second relation is related to the normalized photon energy spectrum in B → X_s γ decays, S(E_γ); the factor 2 in this relation results from the Jacobian dω/dE_γ. It is important in this context that the shape of the B → X_s γ photon spectrum is largely insensitive to possible effects of New Physics [9], so F_s(ω) can be extracted from the data in a model-independent way. When we include radiative corrections below, S(E_γ) will still denote the photon energy spectrum, normalized, however, on an interval with E_γ^min sufficiently small to be out of the shape-function region. The combination of subleading shape functions remaining in the last line of (4) parameterizes chromo-magnetic interactions in the B meson. The moment expansion of these functions [6] involves the parameter ρ_2, a B-meson matrix element of a local dimension-6 operator. In the limit where ∆E ≫ Λ̄, only the first moment yields a non-zero contribution to the function (3), because the weight function under the integral is linear in ω (to first order in 1/m_b). On the other hand, near the endpoint of the lepton spectrum all moments of the shape functions become equally important [2,3]. In between these two extremes there is a transition region, where only the first few moments of the shape functions give significant contributions. Theoretical studies of the photon spectrum in B → X_s γ decays have shown that this transition region corresponds to values E_0 ∼ 2.0-2.3 GeV (for yet lower values, Fermi-motion effects become unimportant) [9].
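For orientation, the event fraction under discussion can be written schematically as follows; this is only a sketch of the prose definition given above, with the parton-level endpoint M_B/2 taken as the upper limit of integration, and not a reproduction of the detailed expressions referred to by number in the text:

F_u(E_0) = [1/Γ(B → X_u l ν)] ∫_{E_0}^{M_B/2} dE_l (dΓ(B → X_u l ν)/dE_l), with ∆E = M_B/2 − E_0 the width of the energy window.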
To account for the effect of the first moment we define a new subleading shape function s(ω) whose normalization and first moment vanish, and whose contribution to the quantity F_u(E_0) therefore vanishes for ∆E ≫ Λ̄. Inserting this definition into relation (4), and using that F_s(ω) = f(ω) + ... to leading order in 1/m_b, we obtain the corresponding expression from (3). Taking into account the known O(α_s) corrections to the leading term in the twist expansion [10,11], and rewriting the contribution involving F_s(ω) as a weighted integral over the normalized photon energy spectrum in B → X_s γ decays, we get our final result (9), with the weight function (10) and the subleading shape-function contribution (11). (Using an integration by parts, this result can be rewritten as a weighted integral over the fraction F_s(E) of B → X_s γ events with photon energy above E, normalized such that F_s(E_γ^min) = 1.) The factor 2 in front of Λ_SL(E_0) in (9) is inserted so that Λ_SL(E_0)/m_b is the subleading shape-function correction to |V_ub|. We stress that, by definition, Λ_SL(E_0) is a parameter of order Λ_QCD that vanishes for ∆E ≫ Λ̄. It is thus a true measure of shape-function effects. On the contrary, the power corrections studied in [7,8] arise predominantly from the λ_2/m_b² correction to the weight function. The expression for the perturbative coefficient k_pert in (10) can be obtained from the results of [9,12]. It involves δ = 1 − E_γ^min/⟨E_γ⟩, which depends on the lower boundary of the energy interval used to normalize the B → X_s γ photon spectrum, the leading-order Wilson coefficients C_i(µ) in the effective weak Hamiltonian for B → X_s γ transitions, and the functions f_ij(δ), which can be found in [9]. In the definition of δ we use the central value of the CLEO result for the average photon energy above 2 GeV, ⟨E_γ⟩ = (2.346 ± 0.034) GeV [13], as a substitute for m_b/2. Numerical results: The value of the coefficient k_pert is sensitive to the choice of the renormalization scale µ and the value of the quark-mass ratio m_c/m_b used in the evaluation of charm-quark loops. We take µ_b = m_b(m_b) = 4.2 GeV as our central value for the renormalization scale and vary µ between µ_b/2 and 2µ_b. We use a running charm-quark mass to evaluate the loop functions [14], taking m_c(µ)/m_b(µ) = 0.23 ± 0.03. The results for k_pert corresponding to two different choices of E_γ^min follow from these inputs. Since the effect of this correction is very small, one should not consider the small variation of k_pert as a measure of the perturbative uncertainty in the weight function (10). Typically, we expect O(α_s²) corrections to contribute at the level of 5% of the tree-level term. A corresponding uncertainty will be included in our numerical analysis below. The power correction to the weight function in (10) may be rewritten in terms of hadronic parameters; using λ_2 = (0.12 ± 0.02) GeV² and m_b = (4.72 ± 0.06) GeV we obtain the value 0.043 ± 0.007, which will be used in our numerical analysis. The size of this correction is not anomalously large; however, its impact is significant because it competes with terms proportional to the small difference (1 − E_0/E_γ). In the endpoint region this difference scales like Λ_QCD/m_b, and so the λ_2/m_b² term is of relative order 1/m_b. Our final focus is on the subleading shape-function contribution Λ_SL(E_0) defined in (11).
Little is known about the function s(ω) except that its normalization and first moment vanish, and that its second moment, M_2^(s) = −ρ_2, is given by a hadronic matrix element expected to be of order (0.5 GeV)³ with undetermined sign. As a result, the functional form and sign of Λ_SL(E_0) cannot be predicted at present. However, the fact that Λ_SL(E_0) must approach zero as E_0 is lowered to a value of about 2 GeV (below which shape-function effects from higher moments are irrelevant) ensures that its impact on the determination of |V_ub| is small. To substantiate this claim we investigate several models for the subleading shape function in more detail. For the leading-order function f(ω) we take the ansatz of [9], built from the functions g_a(x) = [a^a/Γ(a)] x^{a−1} e^{−ax}. The parameter a must be larger than 1 and is fixed so that the second moment of f(ω) equals −λ_1/3 [2], yielding a = −3Λ̄²/λ_1. We assume that the subleading function s(ω) is finite everywhere in the interval −Λ̄ ≤ ω < ∞, but we do not require that this function vanish at the endpoint. The model functions adopted in [7] are such that s(ω) is set to zero, and so Λ_SL(E_0) vanishes by construction. The model functions used in [8] correspond to an ansatz in which the lower bound on the parameter b is enforced by the requirements that s(ω) be finite at the endpoint ω = −Λ̄ and have vanishing normalization and first moment. A property of this model is that the second moment of s(ω) also vanishes. Three alternative choices for s(ω) with non-zero second moment M_2^(s) are considered as well. While Λ_SL(E_0) can be large close to the kinematic endpoint, it takes values of order Λ̄ for E_0 ∼ 2.35 GeV and quickly decreases as E_0 is lowered below 2.3 GeV. For E_0 = 2.2 GeV we find values of Λ_SL(E_0) of at most 130 MeV (model 2), corresponding to a power correction to the extraction of |V_ub| of less than 3%. Although our choice of model functions is meant as an illustration only, we believe the rapid decrease of Λ_SL(E_0) for E_0 < 2.3 GeV is a general result. It appears to be extremely unlikely that with a reasonable shape of s(ω) and a natural size of the second moment M_2^(s) the power correction Λ_SL(E_0)/m_b could be as large as 10%. Conclusion: In summary, we have studied the impact of subleading shape functions on the determination of |V_ub| from the combination of weighted integrals over energy spectra in inclusive B → X_u l ν and B → X_s γ decays. We have argued that for a lower energy cut E_0 = 2.2 GeV, as employed in a recent CLEO analysis, one is in a transition region, where Fermi-motion effects are dominated by the first few moments of the leading and subleading shape functions. The dominant power correction (the only one that remains when the cut is lowered below about 2 GeV) results from the first moment of the subleading shape function, which is known in terms of the hadronic parameter λ_2. Our main result is given in (9) and (10). To exhibit its features, let us assume that a perfect measurement of the B → X_s γ photon spectrum is available in the energy range above E_γ^min = 1.5 GeV. (For the purpose of illustration, we use a fit to the CLEO data in [13].) We then calculate the fraction of B → X_u l ν events with charged-lepton energy above E_0 for different values of the cut. The results are summarized in Table 1 (Table 1: Illustrative theoretical predictions for the fraction F_u(E_0) of B → X_u l ν events with charged-lepton energy E_l > E_0, assuming a perfect measurement of the B → X_s γ photon spectrum; see text for explanation).
Columns 2, 3, and 4 show the contributions from the tree-level term, the O(α_s) corrections, and the power correction to the weight function in (10), including theoretical uncertainties from the input parameter variations detailed above. The next column shows the total result, while the final column gives an estimate of the residual uncertainty from subleading shape-function effects, as parameterized by the term 2Λ_SL(E_0)/m_b in (9). We show the largest uncertainty obtained in the four classes of models considered earlier. We observe that the power correction to the weight function has a significant impact, which as anticipated is by far the dominant effect of subleading shape functions. For E_0 = 2.2 GeV, the power correction leads to a reduction of the predicted value for F_u(E_0) by (26 ± 6)%, corresponding to a 13% enhancement of the extracted value of |V_ub|. This is in good agreement with the estimate given in [7]. The most important implication of our analysis is that subleading shape-function effects do not entail a significant limitation on the extraction of |V_ub|. This assessment differs from the conclusion reached in [7,8], where it was argued that these effects could not be controlled reliably unless the cut E_0 could be lowered outside the shape-function region. The new element of our analysis is that we identify the first moment of the subleading shape function as the dominant source of power corrections and show how its contribution can be expressed in terms of an integral over the B → X_s γ photon spectrum. We have estimated the residual uncertainty on |V_ub| from subleading shape-function effects by using four different classes of model functions and found corrections of at most 3% (with E_0 = 2.2 GeV). The smallness of this effect can be understood on the basis that it is a power correction of the form Λ_SL(E_0)/m_b with a hadronic parameter Λ_SL(E_0) = O(Λ_QCD) that vanishes as E_0 is lowered below about 2 GeV. We thus conclude that, very conservatively, the residual uncertainty on |V_ub| is less than 10%. The main result of this letter is the new expression for the weight function in (10), which now includes the leading power correction. Perhaps the largest uncertainty in this method for determining |V_ub| is due to (largely unknown) corrections from violations of quark-hadron duality, and from spectator-dependent effects such as weak annihilation and Pauli interference [15,16] (see also [8], where a 6-8% correction on |V_ub| was obtained for E_0 = 2.2 GeV using a simple model for spectator effects). In these references, several strategies have been developed that could help to determine the magnitude of these corrections using experimental data.
CSS code surgery as a universal construction We define code maps between Calderbank-Shor-Steane (CSS) codes using maps between chain complexes, and describe code surgery between such codes using a specific colimit in the category of chain complexes. As well as describing a surgery operation, this gives a general recipe for new codes. As an application we describe how to `merge' and `split' along a shared $\overline{X}$ or $\overline{Z}$ operator between arbitrary CSS codes in a fault-tolerant manner, so long as certain technical conditions concerning gauge fixing and code distance are satisfied. We prove that such merges and splits on LDPC codes yield codes which are themselves LDPC. Introduction Quantum computers have become larger and more sophisticated in recent years [10,35], but fault-tolerance is necessary to perform practically relevant quantum algorithms. Qubit stabiliser error-correction codes are a well-studied approach to fault-tolerant quantum computing [20] and are favourable both for their practicality and theoretical simplicity. Such codes store logical data using entangled states of physical qubits and repeated many-body measurements, and so long as the physical errors on the qubits stay below a certain threshold the logical data is protected. The most well-known example of a qubit stabiliser code is the toric code, in which qubits are embedded on the surface of a torus, and properties of the logical space are determined by the topology of the surface [15,28]. This is a basic example of a qubit Calderbank-Shor-Steane (CSS) code; there are several equivalent ways of defining CSS codes, but for our purposes we shall describe them as codes which are all homological in a suitable sense [3]. This means that we can study CSS codes using the tools of homological algebra [38]. This approach has recently seen much success, for example in the construction of so-called good low-density parity check (LDPC) code families using a balanced product of chain complexes [33]. Such code families have an encoding rate k/n of logical to physical qubits which is constant in the code size, while maintaining a linear code distance d, a substantial asymptotic improvement over simpler examples such as the toric code. The main caveat is, informally, that the connectivity between physical qubits is non-local. This complicates the architecture of the system, and also complicates the protocols for performing logical gates. There have been several recent works on protocols for logical gates in CSS codes [29,9,6,34,24], of varying generality. Here, we build on this work by defining surgery, in the abstract, using arbitrary CSS codes. This perspective serves several purposes: 1. Relating codes by code maps. 2. Constructing new codes. 3. Designing fault-tolerant logical operations. We intend to expound on code maps in future work, but presently we focus on items 2 and 3. We define CSS code merges as a colimit (specifically, a coequaliser/pushout) in the category of chain complexes. Not only does the construction describe a surgery operation, but it also gives a general recipe for new codes. An application of our treatment is the description of certain classes of code surgery whereby the codes are merged or split along a logical Z̄ or X̄ operator, closely related to the notion of 'welding' in [32]. We prove that merging two LDPC codes in such a manner still yields an LDPC code. We give a series of examples, including the specific case of lattice surgery between surface codes.
Lastly, we discuss how to apply such protocols in practice, and prove that when a technical condition related to gauge fixing is satisfied then code surgery can be performed fault-tolerantly, allowing us to perform logical parity measurements on codes. 1.1 Guide to reading the paper Section 2 gives a bird's eye view of category theory and universal constructions, which will be useful later on. Section 3 describes the category of chain complexes with morphisms as matrices over F_2. Category theorists may wish to skip past these sections. We then give a rundown of CSS codes viewed as chain complexes in Section 4. Readers familiar with basic category theory and this perspective of CSS codes can safely skip to Section 4.3, where we introduce the notion of code maps, that is coherent transforms between codes. We introduce surgery of codes as a colimit in Section 5. This is when the notion of 'gluing' codes together comes in, and we prove several results about these codes when the colimit uses logical Z̄ or X̄ operators. Lastly, we introduce a protocol for performing logical Z̄ ⊗ Z̄ and X̄ ⊗ X̄ measurements fault-tolerantly in Section 6. Universal constructions In this section we provide a cartoon introduction to category theory and universal constructions. We avoid any weasel phrases like "in some sense", or even any further scare quotes. However, when we use actual precise language, i.e. jargon words, we emphasise these with italic font. A category is a collection of objects and morphisms. We will begin by drawing an object as a box with a decoration. Morphisms are arrows between objects, drawn from one box to another. The arrow notation suggests that we can compose these. The product of two objects in a category is an object, together with two arrows, one to each factor. The product decoration combines the two decorations. The product also must satisfy a universal property. This states that any other object that also combines the two decorations is already compatible with the product object in a unique way. In other words, for all test objects there exists a unique comparison morphism. The real product is the minimal object that projects down to the factors. Any other test object lives over the real product. This universal property has the immediate consequence that any other object that satisfies all these requirements will be isomorphic to the product, via a unique isomorphism that commutes with the other morphisms. A pullback is a product with constraints. The resulting square should commute: if we compose any two paths of arrows with the same source object and the same target object then these paths should be equal. As with products, we also require the pullback to satisfy a universal property. All of these statements have dual statements, which we get by reversing all the arrows. When we do this we sometimes put a co- prefix on the terminology. For example, a coproduct, which would normally be called a sum, consists of an object together with two arrows from the summands into it. Once again, we require any such candidate coproduct to satisfy a universal property. We think of a coproduct as a way of gluing together objects. By adding constraints we can express where we wish to glue. The answer to this question is called a pushout: it is an object together with two morphisms, that satisfies a universal property. We have purposefully avoided describing the decorations in these diagrams: how they work, what they mean.
A more sensible introduction to category theory would describe these systematically, possibly mentioning the category of finite sets and functions. This is a story about how the objects are sets, with elements, and we can combine these in various ways to make other sets. Instead of telling this story, we skip to the punchline, which is that there are no elements, or rather, what you think of as an element of an object is really a morphism into that object. To push this idea home, and also move toward the goals of this paper, we consider the category Mat_F2 of finite dimensional matrices over the field F_2. This has as objects the natural numbers and matrices over F_2 as morphisms. Composition of morphisms is matrix multiplication. We will show each object as a box with dots; a composition of two morphisms in this category is then simply a product of matrices. The objects have very little going on inside them, serving only as anchors for the morphisms (matrices) where all the action is taking place. The vector elements of a vector space have vanished into the morphisms. A coproduct (sum) in this category is an object together with two morphisms satisfying the universal property of coproducts. A given candidate presentation of the coproduct will not be unique (except for some degenerate cases), but the universal property of the coproduct guarantees it is unique up to unique isomorphism. We have reinvented the direct sum of vector spaces. For a pushout of vector spaces we get a gluing: for example, the gluing of a two dimensional vector space and a three dimensional vector space along a one dimensional vector space. But what about products? A curious thing happens in the category Mat_F2; we can get the dual universal construction by transposing matrices. For example, the above coproduct becomes the product, and similarly with pullbacks. The transpose duality of Mat_F2 will follow us throughout the rest of this paper. Here we have been taking the objects of Mat_F2 to be just natural numbers. In the rest of the paper we will use a slightly different definition for the objects: each natural number n is replaced by a basis set of size n for an n-dimensional vector space. Chain complexes We now recap some elementary homological algebra. All of this section is known, but we fix notation and look explicitly at the particular category of interest. Let Mat_F2 be the category which has as objects based finite-dimensional vector spaces over F_2, so each vector space V has a specified basis Ṽ, and we have V ≅ F_2^|Ṽ|. A morphism V → W is then an F_2-matrix of the appropriate dimensions. Each V has a dual space V*. As V ≅ V*, we may fix the duals such that V* = V and Ṽ* = Ṽ. This has the benefit of forcing the dual of any matrix f : V → W, which is given by f* : W* → V*, to be strictly the transpose fᵀ : W → V. Much of the following mathematics will work given any rigid Abelian category A as input, but we only need Mat_F2 for our purposes in Section 4. Let Ch(Mat_F2) be the category of bounded chain complexes in Mat_F2. We now recap some of the basic properties of this category. A chain complex C_• is a sequence of components, where each component C_n is a based vector space and n ∈ Z is called the degree of the component in C_•. C_• has F_2-matrices as differentials ∂_n : C_{n+1} → C_n such that ∂_n ∘ ∂_{n+1} = 0 (mod 2), for all n ∈ Z. To disambiguate differentials between chain complexes we will write ∂_n^{C_•} := ∂_n for the differential in C_• when necessary. All our chain complexes are bounded, meaning there is some k ∈ Z such that C_{n>k} = 0 and some l ∈ Z such that C_{n<l} = 0, i.e. it is bounded above and below.
We call k − l the length of C_•, for k and l the smallest and largest possible such values respectively. We write Z_n(C_•) = ker(∂_{n−1}) and B_n(C_•) = im(∂_n), and call Z_n, B_n the n-cycles and n-boundaries. We also define the quotient H_n(C_•) = Z_n(C_•)/B_n(C_•), and call H_n the nth homology space of C_•. Recall that dim(ker(∂_{n−1})) = null(∂_{n−1}) = dim C_n − rank(∂_{n−1}). Note that throughout we sometimes use ker(f) of a matrix f to mean the kernel object, i.e. subspace, and sometimes the kernel morphism, i.e. inclusion map. It should be clear from context which is meant. Example 3.2. Let Γ be a finite simple undirected graph. We can form the incidence chain complex C_• of Γ, with C_0 the based vector space spanned by the edges of Γ and C_{−1} the based vector space spanned by the vertices. All other components are zero. The sole nonzero differential ∂_{−1} is the incidence matrix of Γ, with (∂_{−1})_{ij} = 1 if the jth edge is attached to the ith vertex, and 0 otherwise. H_{−1}(C_•) contains a nonzero class for each vertex with no attached edges, and H_0(C_•) is determined by the graph homology of Γ [38]. Definition 3.3. A collection of matrices {f_i : C_i → D_i}_{i∈Z} such that each resultant square of maps with the differentials commutes is called a chain map f_• : C_• → D_•. As we specified bounded chain complexes, only a finite number of the f_i matrices will be non-zero. A chain map f_• is an isomorphism in Ch(Mat_F2) iff all f_i are invertible, in which case one can think of the isomorphism as being a 'change of basis' for all components, which thus transforms the differential matrices appropriately. Observe also that every pair of chain complexes has at least two chain maps, the zero chain maps, between them, given by a collection of entirely zero matrices either way. Lemma 3.4. A chain map f_• : C_• → D_• induces linear maps on homology H_n(C_•) → H_n(D_•). Proof. It is easy to check that f_n induces matrices from Z_n(C_•) → Z_n(D_•) and the same for B_n. This lemma is equivalent to saying that H_n(−) is a functor from Ch(Mat_F2) → Mat_F2. Ch(Mat_F2) has several known categorical properties which will be useful to us. One way to see a chain complex C_• in Mat_F2 is as a Z-graded F_2-vector space, with specified bases and a distinguished map ∂ : C_• → C_• with components ∂_i : C_{i+1} → C_i, such that ∂ ∘ ∂ = 0. Many of the properties of Ch(Mat_F2) are inherited directly from those of Z-graded F_2-vector spaces. Lemma 3.5. Ch(Mat_F2) is an additive category, i.e. it has all finite biproducts. Proof. Adding two chain maps obviously gives a chain map. Define the biproduct of chain complexes C_• and D_•, with shorthand (C ⊕ D)_•, to have components (C ⊕ D)_n = C_n ⊕ D_n and likewise for differentials. This is both a categorical product and coproduct. Lastly, the zero object is the chain complex 0_• all of whose components are 0. Lemma 3.6. Homology preserves direct sums (coproducts): given chain complexes C_• and D_•, H_n((C ⊕ D)_•) ≅ H_n(C_•) ⊕ H_n(D_•). This is obvious, considering the blocks of each differential in (C ⊕ D)_•. Definition 3.7. Let the dual chain complex C*_• have components (C*)_n = (C_{−n})*, with differentials given by the transposed differentials of C_•. Our choice of duals means that C**_• = C_• on the nose. Lemma 3.8. For a chain complex C_•, H_n(C*_•) ≅ H_{−n}(C_•)*. The dual of a chain map f_• is also straightforward: it has matrices given by the transposes (f_{−n})ᵀ. Remark 3.9. These are categorical duals with respect to a tensor product of chain complexes. As we only need this tensor product for a very specific construction in Section 6 we relegate it to Appendix B. Quantum codes Here we introduce classes of both classical and quantum codes as chain complexes. We give easy examples such as the surface and toric codes. Up until Section 4.3, this part is also well-known, although we describe the relationship between Z and X operators in greater detail than we have found elsewhere.
Codes as chain complexes Binary linear classical codes which encode k bits using n bits can be described by an n × k encoding F_2-matrix G and an m × n parity check F_2-matrix P. One can use G to take any length k bitstring b, viewed as a column vector, and obtain c = Gb, the codeword for b. The parity check matrix P, when applied to any codeword, gives Pc = 0; if the result is non-zero then an error has been detected, and under certain assumptions can be corrected. The distance d of a binary linear classical code is the minimum Hamming weight of its nonzero codewords, and one characterisation of codes is by their metrics [n, k, d]. We have that rank(P) = n − k, and the checks are independent when P is full rank, but we do not assume this in general. Given P, G is uniquely defined up to isomorphism as the matrix ker(P), so we use only P to define the code. We may trivially view a binary linear classical code as a length 1 chain complex in Ch(Mat_F2), with indices chosen for convenience: C_0 = F_2^n in degree 0, C_{−1} = F_2^m in degree −1, and ∂_{−1} = P. d is then the minimum Hamming weight of nonzero vectors in Z_0(C_•), which is the codespace. Note also that we can see H_{−1}(C_•) = C_{−1}/B_{−1}(C_•) as measuring the 'redundancy' of the parity checks. From Example 3.2 we know that each graph defines a length 1 chain complex, and so every graph evidently specifies a classical code, with edges as physical bits and vertices as parity checks. An easy case is that of cycle graphs, whereby we have a repetition code with one redundant parity check, i.e. vertex. In general, the cycle graph C_n with n vertices gives an [n, 1, n] repetition code. We now move on to quantum codes. Qubit Calderbank-Shor-Steane (CSS) codes are a type of stabiliser quantum code. Let P_n = P^{⊗n} be the Pauli group over n qubits. Stabiliser codes start by specifying an Abelian subgroup S ⊂ P_n, called a stabiliser subgroup, such that the codespace H is the mutual +1 eigenspace of all operators in S. That is, H = {|ψ⟩ : S|ψ⟩ = |ψ⟩ for all S ∈ S}. We then specify a generating set of S, of size m. For CSS codes, this generating set has as elements tensor product strings of either {I, X} or {I, Z} Pauli terms, with no scalars other than 1. One can define two parity check F_2-matrices P_X, P_Z, for the Xs and Zs, which together define a particular code. Each column in P_X and P_Z represents a physical qubit, and each row a measurement/stabiliser generator. P_X and P_Z thus map Z and X operators on physical qubits respectively to sets of measurement outcomes, with a 1 outcome if the operators anticommute with a given stabiliser generator, and 0 otherwise; these outcomes are also called syndromes. P_X is an m_X × n matrix, and P_Z is m_Z × n, with m_X, m_Z marking the division of the generating set into Xs and Zs respectively, satisfying m = m_X + m_Z. We do not require the generating set to be minimal, and hence P_X and P_Z need not be full rank. Definition 4.3. We say that w_Z is the maximal weight of all Z-type generators and w_X the same for the X-type generators. These are the highest weight rows of P_Z and P_X respectively. Similarly, we say that q_Z, q_X is the maximal number of Z, X generators sharing a single qubit. These are the highest weight columns of P_Z and P_X. CSS codes are characterised by ⟦n, k, d⟧, with k the number of encoded qubits and d the code distance, which we define presently. That the stabilisers must commute is equivalent to the requirement that P_X P_Zᵀ = P_Z P_Xᵀ = 0.
We may therefore view these matrices as differentials in a length 2 chain complex C_1 → C_0 → C_{−1}, where ∂_0 = P_Zᵀ and ∂_{−1} = P_X, or the other way round (∂_0 = P_Xᵀ, ∂_{−1} = P_Z) if desired, but we start off with the former for consistency with the literature. The quantum code then has C_0 ≅ F_2^n. We will typically fix C_0 = F_2^n for convenience. The code also has k = dim H_0(C_•). To see this, observe first that C_0 represents the space of Z Paulis on the set of physical qubits, with a vector being a Pauli string, e.g. v = (1 0 1) corresponding to Z ⊗ I ⊗ Z. Each vector in H_0(C_•) can be interpreted as an equivalence class [v] of Z operators on the set of physical qubits, modulo Z operators which arise as Z stabilisers. That such a vector is in Z_0(C_•) means that the Z operators commute with all X stabilisers, and when the vector is not in [0] = B_0(C_•) it means that the Z operators act nontrivially on the logical space. A basis of H_0(C_•) constitutes a choice of individual logical Paulis Z̄, that is a tensor product decomposition of the space of logical Z operators, and we set Z̄_1 = Z ⊗ I ⊗ ··· ⊗ I on logical qubits, Z̄_2 = I ⊗ Z ⊗ ··· ⊗ I, etc. There is a logical qubit for every logical Z̄, hence k = dim H_0(C_•). To get the logical X operators, consider the dual C*_•. The vectors in H_0(C*_•) then correspond to X operators in the same manner. As a consequence of Lemma 3.8 there must be an X̄ operator for every Z̄ operator and vice versa. Moreover, the duality pairing between Z and X operators lifts to homology. Proof. First, recall that we have the nondegenerate bilinear form · : C_0 × (C*)_0 → F_2; computationally, this tells us whether a Z operator commutes or anticommutes with an X operator. Now, let u ∈ Z_0(C_•) be a (possibly trivial) logical Z operator, and v ∈ B_0(C*_•) be a product of X stabilisers. Then P_X u = 0, and v = P_Xᵀ w for some w ∈ C_{−1}. Thus u · v = uᵀ v = uᵀ P_Xᵀ w = (P_X u)ᵀ w = 0, and so products of X stabilisers commute with logical Z operators. The same applies for Z stabilisers and logical X operators. As a consequence, v · w = (v + s) · (w + t) for any product of Z stabilisers s and product of X stabilisers t. The duality pairing of C_0, (C*)_0 thus lifts to H_0(C_•), H_0(C*_•), and so does a choice of basis. The above lemma ensures that picking a tensor product decomposition of logical Z operators also entails the same tensor product decomposition of logical X operators, so that X̄_i Z̄_j = (−1)^{δ_{i,j}} Z̄_j X̄_i, for operators on the ith and jth logical qubits. Let d_1 be the minimum Hamming weight |v| over vectors v representing nonzero classes in H_0(C_•), and d_2 the same for H_0(C*_•), where |·| is the Hamming weight of a vector; then the code distance is d = min(d_1, d_2). d_1 and d_2 are called the systolic and co-systolic distances, and represent the lowest weight nontrivial Z and X operators on the logical space respectively. As all the data required for a CSS code is contained within the chain complex C_• (and potentially a choice of basis of H_0(C_•)), we could define a CSS code as just the single chain complex, but it will be convenient to have direct access to the dual complex as well. Definition 4.5. A CSS code is a pair (C_•, C*_•) of a length 2 chain complex centred at degree 0 and its dual. A based CSS code additionally has a choice of basis for H_0(C_•), and hence for H_0(C*_•). We call the first of the pair the Z-type complex, as vectors in C_0 correspond to Z-operators, and the second the X-type complex. Remark 4.6. As C**_• = C_•, we see that given any CSS code (C_•, C*_•) we can exchange Z and X stabilisers (and operators) to obtain (C*_•, C_•).
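To make the rank computations behind these definitions concrete, the following minimal sketch verifies the commutation condition and computes k = dim H_0(C_•) = n − rank(P_X) − rank(P_Z) over F_2. The Steane ⟦7,1,3⟧ code (P_X = P_Z = the parity-check matrix of the [7,4,3] Hamming code) is used purely as a stand-in example; it is not one of the codes discussed in this paper, and the helper gf2_rank is our own illustrative implementation.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

# Stand-in example: the Steane code, with P_X = P_Z = Hamming [7,4,3] checks.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
P_X, P_Z = H, H
n = P_X.shape[1]

assert not ((P_X @ P_Z.T) % 2).any()          # stabilisers commute: P_X P_Z^T = 0
k = n - gf2_rank(P_X) - gf2_rank(P_Z)          # dim ker(P_X) minus dim of Z-stabiliser image
print(n, k)                                    # -> 7 1
```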
Employing the direct sum (C ⊕D) • of chain complexes we have the CSS code ((C ⊕D) • , (C * ⊕D * ) • ), which means the CSS codes (C • ,C * • ) and (D • , D * • ) perform in parallel on disjoint sets of qubits, without any interaction. The Z and X operators will then be the tensor product of operators in each. In summary, there is a bijection between length 1 chain complexes in Ch(Mat F 2 ) and binary linear classical codes, and between length 2 chain complexes in Ch(Mat F 2 ) and CSS codes. That this is a bijection is because we have been quite careful: under our definitions, two classical codes with different matrices but which are the same up to, say, Gaussian elimination are counted as being different classical codes. Morally, these could be considered equivalent codes for some purposes, as they have the same codespaces, code distances and number of physical bits. A similar qualification applies to quantum CSS codes. Additionally, there is no guarantee that the chain complexes will give useful codes. For example, a length 2 chain complex could be an exact sequence and thus have a homology space of zero at all components, in which case there are no logical qubits in the code; even if there is nontrivial homology, the (co-)systolic distance could be very low and thus not practical as a code. There are many classical and quantum codes which do not fit into this classification using Ch(Mat F 2 ), such as nonlinear classical codes and stabiliser quantum codes which are not CSS (although any n, k, d stabiliser code can be mapped to a 4n, 2k, 2d CSS code [5]). There are also fault-tolerant topological quantum systems which certainly don't live in Ch(Mat F 2 ), although they have similar homological properties [28,12]. Lastly, there are CSS codes for higher dimensional qudits, but for simplicity we stick to qubits. Rather than just individual codes we tend to be interested in families of codes, where n, k, d scale with the size of code in the family. Of particular practical interest are quantum low density parity check (LDPC) CSS codes, which are families of codes where all w Z , w X , q Z and q X in the family are bounded from above by a constant. Equivalently, this means the Hamming weight of each column and row in each differential is bounded by a constant. Basic quantum codes , with a representative v = 1 1 1 1 1 1 1 1 1 . Similarly H 0 (C • ) * has the nonzero vector w = 1 1 1 1 1 1 1 1 1 , which is a representative of [w] ∈ H 0 (C • ). Hence, we have two logical operators Z = 8 i Z i , X = 8 i X i with Z i on the ith qubit and the same for X i . We equally have, say, Z = Z 1 ⊗ Z 4 ⊗ Z 7 and X = X 1 ⊗ X 2 ⊗ X 3 in the same equivalence classes as those above, [v] and [w]. We now consider two examples which come from square lattices. This can be done much more generally. In Appendix C we formalise categorically the procedure of acquiring chain complexes -and therefore CSS codes -from square lattices, which are a certain type of cell complex. Edges in the lattice are qubits, so n = 18, the 9 X-checks are associated with vertices and the 9 Z-checks are associated with faces, which are indicated by white circles. Grey vertices indicate periodic boundary conditions, so the lattice can be embedded on a torus. This is an instance of the standard toric code [28]. The abstracted categorical homology from before is now the homology of the tessellated torus, with cycles, boundaries etc. having their usual meanings. 
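A short computation confirming the parameters of this toric-code lattice (the indexing of horizontal and vertical edges is our own convention, not the text's): 18 edge qubits, 9 vertex X-checks, 9 face Z-checks, and two logical qubits.

```python
import numpy as np

def gf2_rank(M):
    """Rank over F_2 via Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

L = 3                                                 # 3x3 torus: 18 edges
h = lambda r, c: (r % L) * L + (c % L)                # horizontal edge at (r, c)
v = lambda r, c: L * L + (r % L) * L + (c % L)        # vertical edge at (r, c)
n = 2 * L * L
P_X = np.zeros((L * L, n), dtype=np.uint8)            # one X-check per vertex
P_Z = np.zeros((L * L, n), dtype=np.uint8)            # one Z-check per face
for r in range(L):
    for c in range(L):
        s = r * L + c
        P_X[s, [h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)]] = 1   # star
        P_Z[s, [h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)]] = 1   # plaquette

assert not (P_X @ P_Z.T % 2).any()
print(n, n - gf2_rank(P_X) - gf2_rank(P_Z))           # 18 2
```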
Letting the code be (C • ,C * • ), we have k = dim H 0 (C • ) = 2, and (co)systolic distances are the lengths of the essential cycles of the torus. This represents a patch of surface code (D • , D * • ), where we have two smooth sides, on the left and right, and two rough sides to the patch, on the top and bottom. Observe that we have 'dangling' edges at the top and bottom, which do not terminate at vertices. We have The systolic distance is 3, the length of the shortest path from the top to bottom boundary, and the cosystolic distance 3, the same but from left to right. Code maps One may wish to convert one code into another, making a series of changes to the set of stabiliser generators to be measured, and potentially also to the physical qubits. The motivation behind such protocols is typically to perform logical operations which are not available natively to the code; not only might the target code have other logical operations, but the protocol is itself a map between logical spaces when chosen carefully. An example of a change to the measurements and qubits is code deformation. We do not formalise code deformation here, as that has some specific connotations [37]. Instead we define a related notion, called a code map, which has some overlap. A code map is also related to, but not the same as, the 'homomorphic gadgets' from [24]. Note that the second chain map is strictly speaking obsolete, as all the data is contained in a single chain map f • , but as with chain complexes it will be handy to keep both around. Let us unpack this definition. F Z first maps Z-operators in C 0 to Z-operators in D 0 , using f 0 . It may map a single Z on a qubit to a tensor product of Zs, or to I. It then has a map f 1 on Z generators, and another f −1 on X generators. Recalling Definition 3.3, we have: I II With two commuting squares labelled I and II. I stipulates that applying products of Z stabiliser generators on the code and then performing the code map should be equivalent to performing the code map and then applying products of Z stabiliser generators, i.e. II stipulates that performing the X measurements and then mapping the code should be equivalent to mapping the code and then performing X measurements, so there is a consistent mapping between all measurement outcomes, i.e. This has the component f 0 : D 0 → C 0 , which maps an X-operator in D 0 back to an X-operator in C 0 . Similarly for f −1 and f 1 , each of which come with commuting squares which are just the transposed conditions, e.g. • f 0 , so they say nothing new. This is not surprising, as all the data for f * is given by f already. We now show that this definition entails some elementary properties. For a start, Lemma 3.4 implies that a code map gives a map from a Z operator in H 0 (C • ) to Zs in H 0 (D • ); this can also map to a tensor product of logical Zs, and in particular map Z to zero i.e. I, but it must not map a Z to an operator which can be detected by the X stabiliser measurements. Hence ( f • , f * • ) preserves the fact that any Z is an undetectable operator on the codespace. A similar requirement holds for X operators, but this time the condition is inverted. Every X in H 0 (D * • ) must have a map only to logical operators in H 0 (C * • ), but the other way is not guaranteed. Let n C and n D be the number of physical qubits in codes (C • ,C * • ) and (D • , D * • ) respectively. We may interpret F Z as a C-linear map M in FHilb, the category of Hilbert spaces. 
This C-linear map has the property that MU Z = U Z M, where U Z is a tensor product of Z Paulis on n C qubits and U Z is a tensor product of Z Paulis on n D qubits. In particular, given any U Z we have a specified U Z . The same is not true the other way round, as the map f 0 is not necessarily injective or surjective. Similarly, MU X = U X M. This time, however, given any unique U X on n D qubits we have a specified U X but vice versa is not guaranteed, depending on f 0 . As a consequence, the linear map M is stabiliser, in the sense that it maps Paulis to Paulis, but not If M is not even an isometry, it cannot be performed deterministically, and the code map must include measurements on physical qubits. There will in general be Kraus operators corresponding to different measurement outcomes which will determine whether the code map has been implemented as desired; for now we assume that M is performed deterministically, and leave this complication for Section 6. Similarly, while the code map can be interpreted as a circuit between two codes, we do not claim that such a circuit can be performed fault-tolerantly in general. Remark 4.11. For the following proposition, and at various points throughout the rest of the paper, we will use the ZX-calculus, a formal graphical language for reasoning about computation with qubits. We do not give a proper introduction to this calculus for brevity, but Sections 1-3 of [36] are sufficient for the interested reader. Our use of ZX diagrams is unsophisticated, and primarily for convenience. Proposition 4.12. Let F Z be a Z-preserving code map between codes (C • ,C * • ) and (D • , D * • ) with qubit counts n C and n D . The interpretation of F Z as a C-linear map M in FHilb has a presentation as a circuit with gates drawn from {CNOT, |+ , 0|}. Proof. We start with the linear map M : By employing the partial transpose in the computational basis we convert it into the state i.e. inserting n C Bell pairs. By the definition of f 0 we know that this has an independent stabiliser, with one Z and n C − 1 Is followed by some n D -fold tensor product of Z and I, for each of the n C qubits. From f 0 it also has an independent stabiliser, with some n C -fold tensor product of X and I followed by n D − 1 Is and one X, for each of the n D qubits. |ψ is therefore a stabiliser state. Further, from Theorem 5.1 of [26] it has a presentation as a 'phase-free ZX diagram', of the form where the top n C qubits do not have a green spider. We perform the partial transpose again to convert the state |ψ back into the map M, which has the form Any ZX diagram of this form can be expressed as a matrix over F 2 , mapping X-basis states from (C 2 ) ⊗ n C to (C 2 ) ⊗ n D . The example above, ignoring the ellipses, has the matrix 1 0 1 1 1 1 which is equal to f 0 ; the point of the above rigmarole is thus to say that f 0 is precisely a linear map between X-basis states, which one can check easily. We can then perform Gaussian elimination on f 0 , performing row operations, which produce CNOTs on the r.h.s. of the diagram in the manner of [27], until the matrix is in reduced row echelon form. We then perform column operations producing CNOTs on the l.h.s. of the diagram, until the matrix has at most one 1 in each row and column. This can be performed using the leading coefficients to remove all other 1s in that row. The final matrix just represents a permutation of qubits with some states and effects. An empty column corresponds to a 0| effect, and an empty row a |+ state. 
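The elimination procedure in this proof can be sketched concretely as follows (a rough implementation under our own conventions, not the authors' code): each recorded row addition corresponds to a CNOT on one side of the diagram, each column addition to a CNOT on the other side, and the surviving partial permutation's empty rows and columns become |+⟩ preparations and ⟨0| effects respectively. The small 2 × 3 matrix below is our reading of the example quoted in the proof.

```python
import numpy as np

def decompose_f0(f0):
    """Reduce an F_2 matrix to a partial permutation, recording the row and
    column additions used; each addition corresponds to a CNOT in the circuit
    presentation of Proposition 4.12 (sketch only, conventions ours)."""
    M = np.array(f0, dtype=np.uint8) % 2
    rows, cols = M.shape
    row_adds, col_adds = [], []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]          # relabel (swap) two rows
        for i in range(rows):                  # clear the pivot's column
            if i != r and M[i, c]:
                M[i] ^= M[r]
                row_adds.append((r, i))
        for j in range(cols):                  # clear the pivot's row
            if j != c and M[r, j]:
                M[:, j] ^= M[:, c]
                col_adds.append((c, j))
        r += 1
    plus_rows = [i for i in range(rows) if not M[i].any()]      # |+> states
    zero_cols = [j for j in range(cols) if not M[:, j].any()]   # <0| effects
    return M, row_adds, col_adds, plus_rows, zero_cols

f0 = [[1, 0, 1],
      [1, 1, 1]]
M, row_adds, col_adds, plus_rows, zero_cols = decompose_f0(f0)
print(M)                                        # partial permutation matrix
print(row_adds, col_adds, plus_rows, zero_cols)
```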
We thus end up with a presentation of M in the form Proof. When n C = 0 we see that M has exactly n D independent stabilisers with 1 X and n D − 1 Is, for each qubit to put X on. The flipped argument applies when n D = 0. Definition 4.14. An X-preserving code map F X from a CSS code So F X is just mapping in the other direction to F Z from before, and we say that F X is opposite to F Z . In this case, when we interpret F X as a C-linear map L, it has the property that LU X = U X L and that any U X gives a specified U X , and LU Z = U Z L, but that any U Z gives a specified U Z but not vice versa. By inspecting the stabilisers we see that, for F Z with interpretation M and F X with interpretation L, While our definitions are for chain complexes of length 2, in principle one can map between any two codes with an arbitrary number of meta-checks, or between a classical code and quantum code, which could be interpreted as 'switching on/off' either X or Z stabiliser measurements. While code maps are related to code deformations, we are aware of code deformation protocols which do not appear to fit in the model of chain maps described. For example, when moving defects around on the surface code for the purpose of, say, defect braiding [18], neither Z nor X operators are preserved in the sense we give here. CSS code surgery To understand code surgery we require some additional chain complex technology, namely colimits. Coproducts, pushouts and coequalisers are directly relevant for our applications. We have already covered coproducts in Lemma 3.5, so we describe pushouts and coequalisers here. then for degrees n, n + 1 we have with [y] being the equivalence class in Q n having y as a representative, and the same for = 0, and one can additionally check that this is indeed a pushout in Ch(Mat F 2 ) by considering the universal property at each component. Doing some minor diagram chasing one can check that this is indeed a coequaliser in Ch(Mat F 2 ). Remark 5.3. We can view the pushout The difference is that the pair of chain maps k • , l • have been replaced with the single map coeq • , so we have We can view coequalisers as instances of pushouts as well, doing a sort of reverse of the procedure above. Remark 5.4. As with all colimits, those above are defined by the category theory only up to isomorphism. Any two chain complexes C • and D for all i ∈ Z, which one can check using rank-nullity. This is a very large isomorphism class, and we require more fine grained control over which chain complexes are chosen by the colimits. Thus we pick out the obvious objects from their isomorphism classes, which are those described in the definitions of the colimits above. is an Abelian category, and thus is finitely complete and cocomplete, meaning that it has all finite limits and colimits. While this is well-known, we sketch a proof of this lemma in Appendix A for completeness. There we also describe some additional limits, but we only use colimits in this work. Generic code surgery We now give a general set of definitions for surgery between arbitrary compatible CSS codes; the condition for compatibility is very weak here. Working at this level of generality means that we cannot prove very much about the output codes or relevant logical maps. As a consequence, we will then focus on particular surgeries which make use of 'gluing' or 'tearing' along logical Z or X operators in Section 5.3. Recall from Remark 5.3 that we can view any pushout as a coequaliser. 
We thus have and we call coeq • the Z-merge chain map. We can bundle this up into a Z-merge code map: We then call coeq * • : Q * • → (C ⊕ D) * • an X-split chain map, and hence we have an X-split code map too: We have an X-merge chain map and thus X-merge code map using the coequaliser picture, so We also have a Z-split chain map and the Z-split code map E Z by taking the opposite. This is rather abstract, so let's see a small concrete example. Example 5.8. Consider the following pushout of cell complexes: We have not properly formalised pushouts of square lattices in the main body for brevity, but we do so in Appendix C. Informally, we are just 'gluing along' the graph in the top left corner, where the edges to be glued are coloured in blue. We can consider this pushout to be in Ch(Mat F 2 ) 1 , giving the pushout: One can see from the cell complexes that we have Rather than compute the pushout maps, let us instead give the coequaliser coeq • : We immediately see that coeq 1 = id. For the other two surjections we have Finally we interpret all the chain complexes in this pushout as being the Z-type complexes of CSS codes Thus we have a Z-merge code map F Z , with an interpretation M as a C-linear map, using coeq 0 and coeq 0 . We refrain from writing out the full 32-by-64 matrix, but as a ZX-diagram using gates from {CNOT, |+ , 0|} we have simply We know from Lemma 3.4 that this map must restrict to a map on logical qubits. However, easy calcu- there are no logical qubits -there are still operators which show up as errors and some which don't, but all of those which don't are products of Z or X stabiliser generators. By Corollary 4.13 and Corollary 4.17 the logical map in FHilb is then just |+ . This trivially preserves both Z and X operators, although its opposite code map F X does not preserve Z operators. This example was very simple, but the idea extends in quite a general way. To give an idea of how general this notion of CSS code surgery is, consider the balanced product codes from [2,33]. The balanced product of codes is by definition a coequaliser in Ch(Mat F 2 ), and so we can convert it into a pushout using routine category theory. The coequaliser is where g • and f • represent left and right actions of A • respectively. Recall that we did not explicitly define the tensor product in the main body for brevity, but see Appendix B. Then to this coequaliser we can associate a pushout, where one can check that the universal property is the same in both cases. Thus we can think of a balanced product as a merge of tensor product codes, with the apex being two adjacent tensor product codes. As the maps in the span are evidently not monic, the merge is of a distinctly different sort from Example 5.8, and also the Zand X-merges we will describe in Section 5.3. It would be convenient if we could guarantee some properties of pushouts in general; for example, if the pushout of LDPC codes was also LDPC, or if the homologies were always preserved. Unfortunately, the definition is general enough that neither of these are true. We discuss this in slightly greater detail in Appendix D, but the gist is that we need to stipulate some additional conditions to guarantee bounds on these quantities. Surgery along a logical operator The procedure of merging here is closely related to that of 'welding' in [32]. Our focus is not just on the resultant codes, but the maps on physical and logical data. 
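Before specialising to merges along logical operators, the coequaliser can be seen very concretely at the level of check matrices. The toy sketch below (our own example, not Example 5.8 itself) glues two copies of the [3, 1, 3] repetition code, i.e. the open path graph with 3 edges and 2 internal vertices, along one shared bit and one shared check. The induced differential is π ∘ ∂ ∘ s for the quotient map π and any section s, and the assert confirms that the choice of section does not matter.

```python
import numpy as np

# Two copies of the open path graph with 3 edges (bits) and 2 internal
# vertices (checks), as length-1 complexes: columns = bits, rows = checks.
P = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)
P_CD = np.block([[P, np.zeros_like(P)],        # direct sum (C (+) D)
                 [np.zeros_like(P), P]])

# Coequaliser data: identify bit 2 of C with bit 0 of D (column classes),
# and check 1 of C with check 0 of D (row classes).
bit_classes   = [[0], [1], [2, 3], [4], [5]]
check_classes = [[0], [1, 2], [3]]

def surjection(classes, size):
    """Matrix of the quotient map F_2^size ->> F_2^(number of classes)."""
    S = np.zeros((len(classes), size), dtype=np.uint8)
    for q, cls in enumerate(classes):
        S[q, cls] = 1
    return S

def section(classes, size, pick=0):
    """A right inverse of the quotient map, choosing one representative."""
    s = np.zeros((size, len(classes)), dtype=np.uint8)
    for q, cls in enumerate(classes):
        s[cls[pick], q] = 1
    return s

pi_checks = surjection(check_classes, 4)
P_Q = pi_checks @ P_CD @ section(bit_classes, 6, pick=0) % 2
# Well-definedness: the other representative of the merged bit gives the same map.
assert np.array_equal(P_Q, pi_checks @ P_CD @ section(bit_classes, 6, pick=-1) % 2)
print(P_Q)   # the open path graph with 5 edges and 3 internal vertices
```

The output is again a path code, now on 5 bits: gluing two paths end to end along an edge simply yields a longer path, which is the simplest possible instance of a merge.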
On codes generated from square lattices, the merges here will correspond to a pushout along a 'string' through the lattice. Definition 5.9. Let C • be a length 2 chain complex. Let v ∈ C 0 be a vector such that v ∈ ker(∂ C • −1 )\im(∂ C • 0 ). We now construct the logical operator subcomplex V • . This has V 0 = span(supp v) ⊆ C 0 , V −1 = im(∂ C • −1 |supp v ) ⊆ C −1 and differential ∂ V • −1 = ∂ C • −1 |supp v , where supp v is the set of basis vectors in the support of v, and ∂ i |S is the restriction of a differential to a subset S of its domain. All other components and differentials of V • are zero. There is a monic f • : V • → C • given by the inclusion maps of V 0 ⊆ C 0 etc. Definition 5.10. Let V • be a logical operator subcomplex of two chain complexes C • and D • simultaneously, so there is some vector v ∈ C 0 and w ∈ D 0 such that V • is the logical operator subcomplex of each. Then there is a monic span C • ← V • → D • . This monic span has a pushout Q • , the merged complex along V • . The construction here is inspired by [9]. We are particularly interested in based CSS codes, i.e. when the logical spaces have bases. Then we say that V • is a separated logical operator subcomplex (Definition 5.12). We also say that the corresponding Z-merged code of (C • ,C * • ) and (D • , D * • ) is separated. The intuition here, following [9], is that it is convenient when the logical operators we glue along do not themselves contain any nontrivial logical operators belonging to a different logical qubit; if they do, the gluing procedure may yield a more complicated output code, as we could be merging along multiple logical operators simultaneously. In Appendix E we demonstrate that it is possible for this condition to not be satisfied, using a patch of octagonal surface code. We now prove some basic results. Lemma 5.13. Let (Q • , Q * • ) be a separated Z-merged code with metrics n Q , k Q , d Q , and let n C , k C , d C , n D , k D , d D be the metrics of (C • ,C * • ) and (D • , D * • ) respectively. Let n V = dim V 0 be the Hamming weight of u and v. Then n Q = n C + n D − n V and k Q = k C + k D − 1. Further, d Q ≥ min(d C , d D ). Proof. n Q is immediate by the definition. For the others, we start off by considering the code ((C ⊕ D) • , (C ⊕ D) * • ). For the distance, we have d (C⊕D) = min(d C , d D ). Combining along a Z operator cannot decrease the distance. First, every vector v ∈ Z 0 (Q • )\B 0 (Q • ) has at least one corresponding vector v′ in Z 0 ((C ⊕ D) • )\B 0 ((C ⊕ D) • ) such that |v′| = |v|. Now, we use the fact that any vector in B 0 ((C ⊕ D) * • ) is sent to a vector in B 0 (Q • ) by the quotient. For any vector a ∈ Z 0 (Q * • )\B 0 (Q * • ), i.e. a nontrivial X-operator, we can construct a vector a′ ∈ Z 0 ((C ⊕ D) * • )\B 0 ((C ⊕ D) * • ) by discarding all the 1s in a which are not in one of C * 0 or D * 0 , and so |a′| ≤ |a|. However, there are cases in which there is no a′ such that |a′| = |a|, which one can show with easy surface code merges, so the distance can be increased by the merge. We would like to study not only the resultant code given some Z-merge, but also the map on the logical space. We will now switch from pushouts to coequalisers. Recall the Z-merge code map F Z from Equation 1. We call this a Z-merge code map when the merge is along a Z-operator as above, and from now on we assume that all merges are separated. Lemma 5.14. Let the Z-merge code map F Z have interpretation M as a C-linear map. These equivalences are from different quotients: y ∼ u and v ∼ x are homology quotients, while u ∼ v is a quotient from the pushout. Proof. M must have the following maps on Paulis on each pair of qubits being merged: Z ⊗ I → Z, I ⊗ Z → Z and X ⊗ X → X, which uniquely determines M on that pair. In other words we have |00⟩ → |0⟩, |11⟩ → |1⟩, |01⟩ → 0, |10⟩ → 0, which has a convenient presentation as a ZX diagram (a single Z spider).
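The single-pair merge map in this proof can be verified directly (a small numerical sketch, not from the text): M sends |00⟩ to |0⟩ and |11⟩ to |1⟩, annihilates |01⟩ and |10⟩, and intertwines the Paulis exactly as stated.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

# Merge map on one pair of identified qubits: |00> -> |0>, |11> -> |1>,
# |01>, |10> -> 0 (columns ordered |00>, |01>, |10>, |11>).
M = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1]])

kron = np.kron
assert np.array_equal(M @ kron(Z, I2), Z @ M)   # Z (x) I  ->  Z
assert np.array_equal(M @ kron(I2, Z), Z @ M)   # I (x) Z  ->  Z
assert np.array_equal(M @ kron(X, X), X @ M)    # X (x) X  ->  X
# A single X on just one of the merged qubits is not preserved by M:
assert not np.array_equal(M @ kron(X, I2), X @ M)
```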
This is obvious by considering the surjection in question and using Lemma 5.13. It essentially says that on each pair of logical operators in ((C ⊕ D) • , (C ⊕ D) * • ) which are both being quotiented, F Z acts as: where the map on Xs is inferred from the dual. We now show that, if we consider two codes to be merged as instances of LDPC families, their combined Z-merged code code is also LDPC. Recall Definition 4.3. Similarly, letting the input codes have maximal number of shared generators on a single qubit q Z C , q X C and q Z D , q X D we have Proof. None of the Z-type generators are quotiented by a Z-merge map, so w Z Q = w Z (C⊕D) = max(w Z C , w Z D ). For the X-type generators, in the worst case the two generators which are made to be equivalent by the merge are the highest weight ones. For these generators to appear in V −1 they must have at least two qubits in each of their support which is in V 0 , and thus these qubits are merged together, so w X Q < w X C +w X D . Next, using again the fact that none of the Z-type generators are quotiented, a single qubit could in the worst case be the result of merging two qubits in (C • ,C * • ) and (D • , D * • ) which each have the maximal number of shared Z-type generators, so q Z Q ≤ q Z C + q Z D . For the X case, if a qubit is in V 0 then all X-type generators it is in the support of must appear in V −1 . Therefore, when any two qubits are merged all of their X-type generators are also merged. Thus q Z Q = q Z (C⊕D) = max(q X C , q X D ). Note that as w Z , w X and q Z , q X are at worst additive in those of the input codes, the Z-merge of two LDPC codes is still LDPC, assuming the pushout is still well-defined using matching Z operators for each member of the code families; moreover by Lemma 5.13 the Z-merge of two good LDPC codes is still good. Next, we dualise everything, and talk about X-merges. be a CSS code such that V * • is a logical operator subcomplex of C * • and D * • , and Q * • is the merged complex along V * • . Then the CSS code . In this case we glue along an X logical operator instead. The notion of separation, Lemma 5.13 and Lemma 5.16 carry over by transposing appropriately. An X-merge map E X can be defined similarly, and a similar result as Lemma 5.14 applies to separated X-merged codes. Proof. This time, L must have the maps Similarly, the maps on logical operators are X ⊗ I → X; I ⊗ X → X; Z ⊗ Z → Z Having discussed Zand X-merged codes, we briefly mention splits. These are just the opposite code maps to F Z and E X . In both cases, all the mappings are determined entirely by Lemma 5.13 by taking transposes or adjoints when appropriate. Remark 5.19. In practice, when the CSS codes in question hold multiple logical qubits it may be preferable to merge/split along multiple disjoint Z or X operators at the same time. Such a protocol is entirely viable within our framework, and requires only minor tweaks to the above results. The same is true should one wish to merge/split along operators within the same code. We now look at a short series of examples. Examples of surgery 5.4.1 Lattice surgery Lattice surgery is the prototypical instance of CSS code surgery. It starts with patches of surface code and then employs separated splits and merges to perform non-unitary logical operations [23]. Here we discuss surgery on qubit lattices, although the protocol has recently been generalised significantly to qudits and arbitrary Kitaev models [11,13]. 
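As a warm-up for lattice surgery, here is a computational sketch of a distance-3 surface-code patch with two rough and two smooth sides (the explicit edge layout is our own reconstruction of a patch with 13 edges, 6 vertices and 6 faces, in the spirit of Example C.12): it verifies k = 1, systolic and cosystolic distance 3, and the LDPC parameters of Definition 4.3.

```python
import numpy as np
from itertools import product

def gf2_rank(M):
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# Distance-3 patch: rough top/bottom, smooth left/right.
# Qubits (edges): h(r,c) horizontal, m(c) middle vertical, t(c)/b(c) dangling.
h = lambda r, c: 2 * r + c          # r in {0,1}, c in {0,1}
m = lambda c: 4 + c                 # c in {0,1,2}
t = lambda c: 7 + c                 # dangling top edges
b = lambda c: 10 + c                # dangling bottom edges
n = 13

x_checks = []                        # one X-check per vertex
for c in range(3):
    x_checks.append([t(c), m(c)] + [h(0, cc) for cc in (c - 1, c) if 0 <= cc <= 1])
    x_checks.append([b(c), m(c)] + [h(1, cc) for cc in (c - 1, c) if 0 <= cc <= 1])
z_checks = [[h(0, c), h(1, c), m(c), m(c + 1)] for c in range(2)]   # interior faces
z_checks += [[t(c), t(c + 1), h(0, c)] for c in range(2)]           # top boundary faces
z_checks += [[b(c), b(c + 1), h(1, c)] for c in range(2)]           # bottom boundary faces

def to_matrix(checks, n):
    M = np.zeros((len(checks), n), dtype=np.uint8)
    for i, ch in enumerate(checks):
        M[i, ch] = 1
    return M

P_X, P_Z = to_matrix(x_checks, n), to_matrix(z_checks, n)
assert not (P_X @ P_Z.T % 2).any()
k = n - gf2_rank(P_X) - gf2_rank(P_Z)

def min_logical_weight(P_check, P_stab):
    """Lightest vector killed by P_check but outside the row space of P_stab
    (brute force; fine for 13 qubits)."""
    r_stab = gf2_rank(P_stab)
    best = n
    for bits in product([0, 1], repeat=n):
        v = np.array(bits, dtype=np.uint8)
        w = int(v.sum())
        if 0 < w < best and not (P_check @ v % 2).any():
            if gf2_rank(np.vstack([P_stab, v])) > r_stab:   # not a stabiliser
                best = w
    return best

print(n, k, min_logical_weight(P_X, P_Z), min_logical_weight(P_Z, P_X))  # 13 1 3 3
print(P_Z.sum(1).max(), P_X.sum(1).max(), P_Z.sum(0).max(), P_X.sum(0).max())
# w_Z, w_X, q_Z, q_X  ->  4 4 2 2, so the patch is LDPC in the sense of Def. 4.3
```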
The presentation we give of lattice surgery is quite idiosyncratic, in the sense that we perform the merges on physical edges/qubits, whereas the standard method is to introduce additional edges between patches to join them together. We remedy this in Section 6. Consider the pushout of cell complexes below: As before, we informally consider this to be 'gluing along' the graph in the top left, but for completeness it is formalised in Appendix C. By considering the pushout to be in Ch(Mat F 2 ), we have: Letting coeq • : (C ⊕ D) • → Q • be the relevant coequaliser map, we see that F Z = (coeq • , coeq * • ) constitutes a separated Z-merge map. In particular, observe that F Z sends the logical operators: as predicted by Lemma 5.15. The first two give H 0 (coeq • ) = 1 1 and the last H 0 (coeq * • ) = 1 1 . F Z is evidently Z-preserving but not X-preserving, as X ⊗ I is taken to an operation which is detected by the Z stabilisers. Observe that we end up with a greater cosystolic distance of (Q • , Q * • ) than we started with in ((C ⊕ D) • , (C ⊕ D) * • ). If we instead consider the pair (coeq • , coeq * • ) as an X-preserving code map F X , then it is a separated X-split map. In terms of cell complexes we would have → We similarly have a separated X-merge map, by considering the pushout Then we also have a separated Z-split map with the obvious form. Remark 5.20. While it is convenient to choose logical operators along patch boundaries to glue along, so that the complexes can all be embedded on the 2D plane, this is not necessary. One could intersect two patches along any matching operator. Remark 5.21. We do not expound on this example, but the protocols for performing X and Z measurements by generalised lattice surgery in [9] can be seen as using separated Z-and X-merged codes, with the caveat that they don't perform the merge maps; instead they initialise fresh qubits in the ancillary hypergraph code and measure all stabiliser generators. The present work has overlap with their protocols, but we do not subsume them; for example their X ⊗ Z and Y measurement methods are outside of our formalism, as they lead to non-CSS codes. Recall the toric code (C • ,C * • ) from Example 4.8. We can merge two copies of C • along a logical Z operator, which corresponds to an essential cycle of each torus. The resultant code will then look like two tori intersecting, depending somewhat on the choices of essential cycle: The Z-merge map on logical qubits will be the same as for patches. Shor code surgery Of course, the pushout we take does not have to come from square lattices. Let C • and D • be two copies of Shor codes from Example 4.7. 3 We can perform separated merges between them. We give two examples. First, for a Z-merge, we take the logical Z operator Z = 8 i Z i and apply Definition 5.9 to get the logical operator subcomplex: , and all other components zero. This is just C • from Example 4.7 truncated to be length 1, as this logical Z operator has support on all physical qubits. The monic chain map f • given by inclusion into the Shor code is just and the same for g • . The pushout of The Shor code can be constructed as a cellulation of the projective plane, so it is actually not wholly dissimilar from the lattice codes [19]. where ∂ Q • −1 = P X and ∂ Q • 0 = P Z |P Z . The map on logical data is fully determined by Lemma 5.15. We have ended up with virtually the same code as the Shor code, except that we have a duplicate for every Z-type generator, i.e. 
every measurement of Z stabilisers is performed twice and the result noted separately. While this example is very simple, it highlights that the result of a merge can have somewhat subtle features, such as duplicating measurements, which the two input codes do not. 4 For our second case, we use a different (but equivalent) logical operator, Z = Z 1 ⊗ Z 4 ⊗ Z 7 . We still glue two copies of the Shor code, but now we have V 0 = F 3 2 , V −1 = F 2 2 and ∂ V • −1 = 1 1 0 1 0 1 . That is, our logical operator subcomplex is just the repetition code from Example 4.1. We then have and the same for g 0 , forming again a monic span of chain complexes. The resultant Z-merged code is then Fault-tolerant logical operations We now describe how our abstract formalism leads to a general set of fault-tolerant logical operations for CSS codes. We consider this to be a good application of the homological algebraic formalism, as we suspect these logical operations would be challenging to derive without the machinery of Ch(Mat F 2 ). 5 So far in our description of code maps there are two main assumptions baked in: that one can perform linear maps between CSS codes (a) deterministically and (b) fault-tolerantly, both of which are desired for performing quantum computation. For assumption (a), we can only implement code maps which are interpreted as an isometry deterministically. If they are not, instead we must perform measurements on physical qubits. Recall from Proposition 4.12 that every code map has an interpretation constructed from CNOTs and some additional states and effects taken from {|+ , 0|} for a Z-preserving code map or { +| , |0 } for an X-preserving code map. This means that in order to implement the code map non-deterministically, one need only apply CNOTs and measure some qubits in the Z-basis (for a Z-preserving code map) or the X-basis (Xpreserving code map). Of course, should we acquire the undesired measurement result, we induce errors in our code map. There is no protocol for correcting these errors in all generality. For assumption (b), there is no protocol for performing arbitrary CNOT circuits on physical qubits in a code fault-tolerantly. However, when performing CSS code surgery which is a separated Zor X-merge, we have a protocol which addresses both (a) and (b). For this we need an additional technical condition involving gauge fixing. For reasons of brevity we do not describe the connection between lattice surgery and gauge fixing, but refer the interested reader to [37]. In summary, we will consider the whole system to be a subsystem code, and fix the gauges of the Z operators we are gluing along. We will also make use of the tensor product of chain complexes, for which see Appendix B. Definition 6.1. Let C • be a chain complex and u be a representative of the equivalence class [u] ∈ H 0 (C • ), which is a basis vector for H 0 (C • ). Let x be a vector in C 0 such that |x| = 1 and x · u = 1. We say that x is a qubit in the support of u. Recall from Lemma 4.4 that u has a unique paired basis vector It is possible to safely correct a qubit x when there is a vector v ∈ [v] such that x · v = 1 and y · v = 0 for all other qubits y in the support of u. We say that u is gauge fixable when it is possible to safely correct all qubits in the support of u. The same definition applies if we exchange X and Z appropriately. We now use this notion of gauge fixing to describe fault-tolerant CSS code surgery, generalising lattice surgery. be length 1 chain complexes. 
Then we can make the tensor product chain complex W • = (P ⊗V ) • , see Appendix B. Explicitly, In the case where V • is a string along a patch of surface code, say of the form: Proof. Observe that ∂ V • −1 has maximum row weight w X C and column weight q X C . Then recall Definition 4.3 and inspect the matrices ∂ W • 0 and ∂ W • −1 . For dim H 0 (W • ), we use the Künneth formula [38] (or see Lemma B.3), which in this case says H 0 ((P ⊗V ) where the last comes from the fact that V −1 = im(∂ V • −1 ), using Definition 5.9. Note dim H 0 (V • ) ≥ 1 as there is always at least one nonzero vector which is mapped to zero by construction, and B 0 (V • ) = 0. dim H 0 (V • ) may be greater than 1, however, as there may be other nonzero vectors in Z 0 (V • ) which previously corresponded to products of Z stabilisers in C • and D • before taking the logical operator subcomplex. where the middle term is W • = (P ⊗V ) • from Definition 6.3 above, and the two inclusion maps V • → W • map V 0 into each of the copies of V 0 in W 0 , and the same for V −1 . Colloquially, we are gluing first one side of the code W • to C • , and then the other side to D • . 6 Lemma 6.6. The 'sandwiched code' (T • , T * • ) has n T = n C + n D + r; and Proof. For n T , just apply Lemma 5.13 twice. For k T , use Lemma 6.4 and observe that all elements of H 0 (W • ) are merged with stabilisers in B 0 (C • ) or B 0 (D • ), and thus do not contribute to k T , apart from the element corresponding to a logical operator in H 0 (C • ) and H 0 (D • ). Bearing this in mind one can also apply Lemma 5.13 twice. For d T , observe that all Z logical operators in W • have weight at least n V ≥ min(d C , d D ), and the only X logical operators in W • become part of logical operators in C • and D • , so the minimum distance cannot be lowered. For w X T , the pushouts will glue each X type stabiliser generator in W • into those in C • and D • in such a way that they will have exactly one extra qubit in the support, by the product construction of W • ; we can see this from ∂ W • −1 in Definition 6.3, as there is exactly a single 1 which is not part of the ∂ V • −1 in any given row of the matrix. For w Z T , q Z T and q X T we just use Lemma 6.4 and apply Lemma 5.16 twice. The intuition here is that rather than gluing two codes (C • ,C * • ) and (D • , D * • ) together directly along a logical operator, we have made a small distance 1 hypergraph code (W • ,W * • ) and used that to sandwich the codes. A consequence of the above lemma is that this 'sandwiching' procedure maps LDPC codes to LDPC codes. Importantly, the two pushouts let us perform a code map on logical qubits fault-tolerantly. Proposition 6.7. Let (C • ,C * • ) and (D • , D * • ) be CSS codes which share a separated gauge-fixable Z operator on m physical qubits and r X-type stabiliser generators each; let the relevant logical qubits be i and j, and let V • be the logical operator subcomplex of C • and D • such that the codes admit a separated Z-merge. Further, let d be the code distance of ((C ⊕ D) • , (C ⊕ D) * • ). Then there is a fault-tolerant procedure with distance d for implementing a Z ⊗ Z measurement on the pair i, j of logical qubits, which gives the 'sandwiched code' (T • , T * • ). This procedure requires r auxiliary clean qubits and an additional m Z-type stabiliser generators. Proof. We aim to go from the code The code map we apply to physical qubits is as follows. 
We call the physical qubits in the support of the logical operators to be glued together the participating qubits. We initialise a fresh qubit in the |+ state for each pairing of X-measurements on the two logical operators of qubits i and j, that is for each qubit in (W • ,W * • ) which is not glued to a qubit in (C • ,C * • ) or (D • , D * • ). We now modify the stabilisers to get to (T • , T * • ). To start, change the X stabiliser generators with support on the participating qubits to have one additional fresh qubit each, so that each pairing of Xmeasurements shares one fresh qubit. We add a new Z stabiliser generator with weight a + 2 for each participating qubit in one of the logical operators to be glued, where a is the number of X type generators of which that physical qubit is in the support. One can see this using Definition 6.3, as on the middle code (W • ,W * • ) we have We then measure d rounds of all stabilisers. All of the qubits in the domain of the last block of P Z above are those which were initialised to |+ . The only other qubits which contribute to the new Z stabiliser generators are those on either side of the sandwiched code, i.e. those along the Z logical operators of qubits i and j. Each of the physical qubits in the support of these logical operators is measured exactly once by the new Z stabiliser generators, and they are measured in pairs, one from each side; therefore performing these measurements and recording the total product is equivalent to measuring Z ⊗ Z. We will now check this, and verify that it is fault-tolerant. Let the outcome of a new Z-type measurement be c λ ∈ {1, −1}, and the overall outcome c L = ∏ λ ≤m c λ . Whenever c λ = −1 we apply the gauge fixing operator X λ = (i∈v | i=1) X i for the specified v ∈ C * 0 (or one could choose a gauge fixing operator using D * 0 instead). We let X c L = ∏ (λ | c λ =−1) X λ . On participating physical qubits, the merge is then where we abuse notation somewhat to let I and Z here refer to tensor products thereof. As each X λ belongs to the same equivalence class of logical X operators in H 0 (C • ), if c L = 1 then X c L acts as identity on the logical space; if c L = −1 then X c L acts as X on logical qubit i in the code before merging. One can then see that these two branches are precisely the branches of the logical Z ⊗ Z measurement. As the measurements were performed using d rounds of stabilisers, and the gauge fixing operators each have support on at least d qubits, the overall procedure is fault-tolerant with code distance d. We also check that the procedure is insensitive to errors in the initialisation of fresh qubits. If a qubit is initialised instead to |− , or equivalently suffers a Z error, then the new Z stabiliser measurements are insensitive to this change, and it will just show up at the X measurements on either side of the fresh qubit. If it suffers some other error, say sending it to |1 , then each new stabiliser measurement with that qubit in its support may have its result flipped. By construction of V • , each fresh qubit is adjacent to an even number of new Z stabiliser measurements, and so initialising the fresh qubits incorrectly will not change c L . As ZX diagrams, the branches are: ; π on logical qubits i and j, and all other logical qubits in the code are acted on as identity. We can freely choose which logical qubit may have the red π spider, as it will differ only up to a red π -i.e. a logical X -on the output logical qubit. 
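The classical post-processing in this proof is simple enough to sketch directly (the 3 × 2 incidence pattern of fresh qubits against new Z-generators below is our reconstruction of the Shor-code example of Appendix F, not data from the text): the logical Z ⊗ Z outcome is the product of the new generator outcomes, a gauge-fixing X_λ is applied for every −1, and a wrongly initialised fresh qubit flips an even number of outcomes, leaving the product unchanged.

```python
import numpy as np

# incidence[l, f] = 1 means fresh qubit f lies in the support of new
# Z-generator l; by construction every fresh qubit lies in an even number
# of new generators. (3 new generators, 2 fresh qubits: our reconstruction.)
incidence = np.array([[1, 0],
                      [1, 1],
                      [0, 1]], dtype=np.uint8)

def logical_zz_outcome(c):
    """Combine the +-1 outcomes c_lambda of the new Z-generators into the
    logical Z (x) Z outcome c_L, and list the gauge fixes X_lambda to apply."""
    c_L = int(np.prod(c))
    gauge_fixes = [lam for lam, out in enumerate(c) if out == -1]
    return c_L, gauge_fixes

outcomes = np.array([+1, -1, -1])
print(logical_zz_outcome(outcomes))        # (1, [1, 2])

# A wrongly initialised fresh qubit flips every new generator containing it,
# i.e. an even number of outcomes, so c_L is unchanged:
flip = (-1) ** incidence[:, 0].astype(int)     # error on fresh qubit 0
assert np.prod(outcomes * flip) == np.prod(outcomes)
```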
In practice, depending on the code there will typically be cheaper ways of fixing the gauges than using an X logical operator for each −1 outcome, as there could be an X logical operator which has support on multiple of the qubits belonging to new stabilisers. The protocol obviates the problem of performing the code map on physical qubits deterministically, as the only non-isometric transformations we perform are measurements of stabiliser generators. However, the code map on logical qubits is still not isometric, hence we have a logical measurement. For the prototypical example of lattice surgery we then have: → We also look at a less obvious example, that of fault-tolerant surgery of the Shor code, in Appendix F. By dualising appropriately one can perform an X-merge by sandwiching in a similar manner. We can also do the 'inverse' of the merge operation fault-tolerantly: be a CSS code formed by sandwiching codes (C • ,C * • ) and (D • , D * • ) together along a Z operator. Then there is a fault-tolerant procedure to implement a code map on logical qubits Proof. As the initial code is already a sandwiched code we can just take the opposite of sandwiching. We delete the qubits belonging to the intermediate code by measuring them out in the X-basis. The code map E X on participating logical qubits is by following precisely the same logic as for traditional lattice surgery [23]. Again, by dualising appropriately we get the last split operation. Given a procedure for making Z ⊗ Z and X ⊗ X logical measurements and the isometries from splits, one can easily construct a logical CNOT between suitable CSS codes following, say, [4] and observing that the same ZX diagrammatic arguments apply. Augmented with some Clifford single-qubit gates and non-stabiliser logical states one can then perform universal computation. As opposed to some other methods of performing entangling gates with CSS codes, e.g. transversal 2-qubit gates, the schemes above require only the m qubits from the respective Z or X operators to participate, and we expect m n for practical codes. Our method does not require the code to be 'self-ZX-dual' in the sense of [6]. Unlike that of [9], our method does not require a large ancillary hypergraph product code, which can have significantly worse encoding rate and code distance scaling than the LDPC codes holding data. Unlike [24], our method does not require the code to be defined on any kind of manifold, and is purely algebraic in description. Conclusions and further work We believe our constructions are flexible and conceptually quite simple. The immediate next step is to benchmark our CSS code surgery against other methods of performing entangling logical gates and characterise which CSS codes admit gauge fixable logical operators. It would also be interesting to test on various quantum LDPC codes with various noise models. The pushouts we gave along logical operators are the most obvious cases. By taking pushouts of more interesting spans other maps on logical data can be obtained, although by Proposition 4.12 and Corollary 4.17 all code maps as we defined them are limited and do not allow for universal quantum computation; we also do not know whether other pushouts would allow the maps on logical data to be performed fault-tolerantly. Herein we assumed that the two codes being 'glued' are different codes, but the same principles apply if we have only one code we would like to perform internal surgery on. 
In this case, the correct universal construction to use should be a coequaliser. There may be other uses of colimits in Ch(Mat F 2 ). The method of constructing good families of quantum LDPC codes in [33] uses a balanced product, which is also a coequaliser, and the instance used there could be generalised. The initial classical codes come from expander graphs, and the quotient is with respect to actions of a finite group G. This group could be changed to some other differential graded algebra, and the starting codes do not have to come from graphs. A generalised balanced product in this way cannot have asymptotically better n, k, d metrics than those of [33], up to constant factors, as their construction already saturates the relevant bounds. However, it would be interesting to see if one can obtain better metrics for concrete instances. Calculating the homologies of such codes is likely to require tools such as spectral sequences. It should be possible to extend the definitions of Xand Z-merges straightforwardly to include metachecks [8], say by specifying that the logical operator subcomplex V • now runs from V 0 to V −2 , so it has X-checks and then metachecks on X-checks, but we have not proved how this affects metachecks in the merged code. There are several ways in which our constructions could be generalised to other codes. The obvious generalisation is to qudit CSS codes. For qudits of prime dimension q, everything should generalise fairly straightforwardly using a different finite field F q but in this case the cell complexes will require additional data in the form of an orientation on edges, as is familiar for qudit surface codes. When q is not prime, one formalism for CSS codes with dimension q looks to be chain complexes in Z q -FFMod, the category of free finite modules over the ring Z q . As Z q is not generally a P.I.D. this may complicate the homological algebra. Second, if we wish to upgrade to more general stabilisers we can no longer use chain complexes. The differential composition P X P Z is a special case of the symplectic product ω(M, N) = MωN for ω = 0 n I n −I n 0 n [21], but by generalising to such a product we lose the separation of Z and X stabilisers to form a pair of differentials. For quantum codes which are not stabiliser but are based on cell complexes, such as the Kitaev model [28], there are no stabiliser generators, but the codes are still 'CSS-like', in the sense that vertices correspond to actions of the group algebra CG and faces actions of the function algebra C(G), with each measurement outcome corresponding to an irreducible representation of the quantum double D(G) = C(G)> CG. More generally we can replace CG and C(G) with H and H * for any semisimple Hopf algebra H [31,12]. Just as there are no stabiliser generators, there are no longer Z and X-operators, but there are ribbon operators. As special cases there are ribbon operators which correspond to actions of only CG or C(G). The first author recently generalised lattice surgery to Kitaev models [13], albeit with some caveats. In the same way that CSS codes generalise stabiliser codes based on cell complexes, we imagine there could be a general class of commuting projector models using the quantum double, which are not necessarily defined on a tessellated manifold. The details of such a class are not known to us, and generalising the notion of 'sites' on a lattice seems difficult. We speculate that the notion of 'gluing' along, say, a CG operator could work for such commuting projector models. 
where ∂ K • n always exists and is uniquely defined, because and so by the universal property of ker( f n ) there is a unique matrix ∂ K • n : K n+1 → K n . These satisfy and then kernels are monic. K n = {v ∈ C n | f n (v) = 0} by the definition of kernels in Mat F 2 . Given the correct choice of basis, ∂ K • n is thus just ∂ C • n • ker( f n+1 ) as a matrix but without the all-zero rows which map into C n /K n . That ker( f ) is a genuine kernel in Ch(Mat F 2 ) is straightforward to check but we do not give further details. The reversed argument applies for cokernels, giving quotient complexes D • /im( f ) with components D n /im( f n ) etc. Remark A.2. As Ch(Mat F 2 ) is additive, equalisers and coequalisers can be seen as special cases of kernels and cokernels by defining eq( f , g) = ker( f −g) and coeq( f , g) = coker( f −g), for f , g : C • → D • . For the chain complex part E • of an equaliser we have components E n = {c | f (c) = g(c)} ⊆ C n . For the chain complex part F • of a coequaliser, we have components F n = D n / f (c) ∼ g(c), for c ∈ C n . We now sketch a proof of Lemma 5.5. Proof. Recall that an Abelian category is an additive category such that: 1. Every morphism has a kernel and cokernel. 2. Every monomorphism is the kernel of its cokernel. 3. Every epimorphism is the cokernel of its kernel. The first is just Lemma A.1, and the other two follow using the fact that they hold degree-wise in Mat F 2 . We will now spell out pullbacks. While they can be defined using equalisers and products we construct them explicitly, as it is easy to do so. One can check that (C ⊗ D) • is a F 2 -linear monoidal product ⊗ in Ch(Mat F 2 ), which follows from associativity and distributivity of ⊕ and ⊗ in Mat F 2 . For the unit, observe that Example B.2. Consider two chain complexes of length 1: In this case we have for nonzero components, and as the matrix partitions factor upon multiplication. This example illustrates an interesting property of ⊗ in Ch(Mat F 2 ): both C • , D • have only one nonzero differential, but (C ⊗ D) • has two. It is easy to see that given two complexes of lengths s,t the tensor product will have length s + t. That is, the homology subspaces factor through the tensor product conveniently. This is also called the Künneth formula. The manner in which the homology factors through does not make H n (−) a monoidal functor with respect to the tensor product. The tensor product is used to build codes from other CSS codes [1]. C Graphs and cell complexes In this appendix we give some categorical background on abstract cell complexes. This is not necessary to define CSS code surgery, but codes obtained from cell complexes are an important motivating example, as they include surface codes, toric codes [28], hyperbolic codes [7] and the balanced product codes from [33]. In general, if a CSS code comes from tessellating a manifold, it is likely to use cell complexes. Cell complexes are important in the study of topological spaces, and many of the constructions of CSS codes, such as balanced/lifted products, can also be phrased in the language of topology, but we stick to cell complexes for brevity. As a warm-up, we describe certain categories of graphs, and then move on to a specific kind of cell complex. Let Γ be a finite simple undirected graph. Recall that as a simple graph, Γ has at most one edge between any two vertices and no self-loops on vertices. 
Γ can be defined as a pair of sets, V (Γ) and E(Γ), with E(Γ) ⊆ 2 V (Γ) , the powerset of vertices, where each e ∈ E(Γ) has 2 elements i.e. it can be expressed as e = {v 1 , v 2 }. An example of a graph is C n , the cycle graph with n vertices and edges. We will also use P n , the path graph with n edges and n + 1 vertices. Definition C.1. Let Grph be the category of finite simple undirected graphs. i.e. the function respects the incidence of edges. Grph has several different products and other categorical features. We are particularly interested in colimits. Grph has a coproduct Γ + ∆ being the disjoint union, with V (Γ + ∆) = V (Γ) V (∆) and E(Γ + ∆) = E(Γ) E(∆). It also has an initial object I given by the empty graph. However, Grph is not cocomplete, as it does not have all pushouts. Example C.2. As a counterexample [30], given the diagram no cocone exists, as the graphs are not allowed self-loops. Therefore, no pushout exists. One can easily see that there are diagrams for which pushouts do exist, though. More than just graphs, we would like to allow for open graphs, i.e. graphs which may have edges which connect to only one vertex, but are not self-loops. This restriction prevents internal vertices from being 'deleted' by a graph morphism by converting them to boundary vertices, although we do not prevent the reverse from occurring. OGrph has very similar properties to Grph. Its initial object is the empty open graph. OGrph has a coproduct, where V (Γ + ∆) = V (Γ) V (∆) and B(Γ + ∆) = B(Γ) B(∆). Like Grph, OGrph is not cocomplete, as Example C.2 also works in the setting of open graphs. It is obvious that Grph is a subcategory of OGrph. We now move on to cell complexes, in particular abstract cubical complexes. These are abstract cell complexes which are 'square', unlike their 'triangular' relatives simplicial complexes. Definition C.6. [17] Let S be a finite set and let Ω be a collection of nonempty subsets of S such that: • Ω covers S. • For each X ∈ Ω, there is a bijection from X to the abstract d-cube for some choice of d, such that any Y ⊂ X is in Ω iff it is mapped to a face of the d-cube. Then Ω is an abstract cubical complex. Abstract cubical complexes are combinatorial versions of cubical complexes, meaning they are stripped of their associated geometry. The elements in Ω are still called faces. We can consider Ω to be a graded poset, with subset inclusion as the partial order, and the grading dim(X) = log 2 |X|. We also call this grading the dimension d of X, and we call X a d-face. The set of d-faces in Ω is called Ω d . There is a relation Ω d → Ω d−1 taking a d-face to its (d − 1)-face subsets. We call the vertex set V (Ω) = S = Ω 0 , and also define the dimension of a cubical complex The d-skeleton of Ω is the maximal subcomplex ϒ ⊆ Ω such that dim(ϒ) = d. The 1-skeleton of an abstract cubical complex is a finite simple undirected graph. The 2-skeleton of an abstract cubical complex is 'like' a square lattice, in that it has 2-faces which each have 4 0-faces as subsets and 4 1-faces. Definition C.7. Let ACC be the category of abstract cubical complexes. A morphism f : incidence is preserved at each dimension. Similar to Grph, ACC has coproduct given by (Ω + ϒ) i = Ω i ϒ i and an initial object I = / 0, and does not generally have pushouts, where we can reuse the same counterexample as Grph. Another categorical property we highlight here is that ACC has a monoidal product called the box product. Definition C.8. Let ϒ 2 Ω be the box product of abstract cubical complexes. 
Then We now check that ϒ 2 Ω is indeed an abstract cubical complex. Proof. First, it has a vertex set V (ϒ 2 Ω) = V (ϒ) × V (Ω), and thus trivially covers 0. Third, if X and Y each have a bijection to an i-cube and j-cube respectively, then X × Y has a bijection to an (i + j)-cube. Any W ⊂ X × Y can be expressed as T × U, for T ⊂ X and U ⊂ Y . Then W is in Ω 2 ϒ iff T is mapped to a face of the i-cube and U to a face of the j-cube, thus W to a face of the (i + j)-cube. Let us compile this into a more digestible form for the case when ϒ and Ω are both graphs. Given vertices (u, u ) and . The 1-skeleton of ϒ 2 Ω is just the normal box product of graphs [25]. Example C.9. Let C m and C n be cycle graphs with m and n vertices respectively, considered as abstract cubical complexes. Then T = C m 2 C n admits an embedding as a square lattice on the torus, and has dim(C m 2 C n ) = 2. Setting m = n = 3 we have where the grey dots indicate periodic boundary conditions and the white circles specify 2-faces. This example will come up in the form of the toric code in Section 4. Obviously, Grph is a subcategory of ACC. We are also interested in open abstract cubical complexes. As in our previous examples, OACC has the obvious coproduct and initial object, and does not have pushouts in general. Example C.12. Let ϒ be a 'patch', a square lattice with two rough and two smooth boundaries: This patch has 6 2-faces, 13 1-faces and 6 0-faces. Example C. 13. We can perform the pushout of two smaller open abstract cubical complexes to acquire a patch: where the apex is P 1 , the blue edge indicates where the apex is mapped to, and the bottom right open abstract cubical complex is the object of the pushout. Example C.14. Let G 3 be the open path graph, and let Ω be a patch. Then we have a pushout This example comes up in the context of lattice surgery on surface codes. Evidently, both OGrph and ACC are subcategories of OACC, and one can define a box product for OACC in the same way as we did for ACC in Definition C.8. One can define quantum codes using abstract cell complexes more generally, but abstract cubical complexes are the specific type which we make use of in examples in Section 4 and onwards. We now relate the above cell complexes to chain complexes by way of functors. Definition C. 15. Given an abstract cubical complex Ω we can define the incidence chain complex C • in Ch(Mat F 2 ), where each nonzero component has a basisC n−1 = Ω n , say, and each nonzero differential ∂ C • n−1 takes an n + 1-face to its n-dimensional subsets. 7 The differential is thus a matrix with a 1 where an n-face is contained within an (n + 1)-face, and 0 elsewhere. It is an elementary fact that every (d − 2)face in a d-face is the intersection of exactly 2 (d − 1)-faces, thus ∂ C • n−1 • ∂ C • n = 0 mod 2. Clearly, the incidence chain complex of a dimension 1 abstract cubical complex is just the incidence matrix of a simple undirected graph. We can do essentially the same thing given an open abstract cubical complex ϒ. In this case, each nonzero component has a basisC n−1 = {X ∈ Ω n | X ⊆ B(Ω)}, that is we ignore all faces which are made up only of boundary vertices, and differentials are the same matrices as above, with a 1 where an n-face which is not a subset of B(Ω) (and therefore would be 'invisible') is contained in an (n + 1)-face. It is easy to see that we still have ∂ C • n−1 • ∂ C • n = 0 mod 2. The incidence chain complex of a dimension 1 open abstract cubical complex is the incidence matrix of an open graph. 
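The construction of Definition C.15 can be exercised on Example C.9 directly (a sketch with our own data structures): build the box product C_3 □ C_3 as an abstract cubical complex, extract the incidence differentials, and check that ∂∂ = 0 and that the resulting CSS code is the 18-qubit toric code with k = 2.

```python
import numpy as np

def gf2_rank(M):
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def c_n(n):
    """Edges of the cycle graph C_n, as frozensets of vertices."""
    return [frozenset({i, (i + 1) % n}) for i in range(n)]

def box_product(edges_A, n_A, edges_B, n_B):
    """0-, 1- and 2-faces of the box product of two graphs viewed as abstract
    cubical complexes (Definition C.8); each face is a frozenset of vertices."""
    verts = [frozenset({(a, b)}) for a in range(n_A) for b in range(n_B)]
    edges = [frozenset({(a, b) for b in e}) for a in range(n_A) for e in edges_B] \
          + [frozenset({(a, b) for a in e}) for e in edges_A for b in range(n_B)]
    faces = [frozenset({(a, b) for a in eA for b in eB})
             for eA in edges_A for eB in edges_B]
    return verts, edges, faces

def incidence(larger, smaller):
    """Incidence matrix over F_2: entry 1 iff the smaller face is contained
    in the larger face (Definition C.15). Rows: smaller, columns: larger."""
    return np.array([[1 if s < b else 0 for b in larger] for s in smaller],
                    dtype=np.uint8)

V, E, F = box_product(c_n(3), 3, c_n(3), 3)
d0 = incidence(E, V)      # edges -> vertices
d1 = incidence(F, E)      # faces -> edges
print(len(V), len(E), len(F))               # 9 18 9
print((d0 @ d1 % 2).any())                  # False: boundary of a boundary is 0
print(len(E) - gf2_rank(d0) - gf2_rank(d1)) # 2 logical qubits: the toric code
```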
Definition C. 16. Let C • and D • be the incidence chain complexes of two abstract cubical complexes Ω and ϒ with a morphism f : Ω → ϒ, and setC −1 ,D −1 as V (Ω),V (ϒ) respectively. This induces a chain map g • : C • → D • , with the matrix g −1 given by f , and all matrices on higher components generated inductively. Degrees i < −1 are assumed to be zero. As a consequence, we can define a functor ϕ : ACC → Ch(Mat F 2 ), sending each abstract cell complex to its free chain complex as described in Definition C. 15. ϕ( f ) ∈ Hom(ϕ(Ω), ϕ(ϒ)) for any morphism f : Ω → ϒ between abstract cubical complexes, as the function on vertices is already F 2 -linear and the matrices at higher degrees are uniquely determined. ϕ is faithful but not full, as there exist morphisms, such as the zero morphism, which are not in the image of ϕ. Definition C.17. There is also a functor ϑ : OACC → Ch(Mat F 2 ). On objects, this again follows Definition C.15. On morphisms this is the same as ϕ except it must obviously ignore maps between boundary vertices everywhere. Thus ϑ is not faithful. Proof. We give a proof sketch here. We know already that ϕ preserves coproducts so it is sufficient to check that it preserves pushouts. Let The same checks apply if we take ϑ : OACC → Ch(Mat F 2 ) instead. Observe that in this case f and g may have images only in B(Ω) and B(ϒ), in which case Ξ must have empty V (Ξ). Then the pushout in Ch(Mat F 2 ) will just be a direct sum, i.e. the pushout with ϑ (Ξ) = 0 • as the apex. Recall that ACC and OACC do not themselves have all pushouts, and therefore all colimits, but ϕ and ϑ preserve those which they do have. Remark C.20. For any chain complex C • we have also the pth translation C[p] • , where all indices are shifted down by p, i.e. C[p] n = C n+p and ∂ C[p] • n = ∂ C • n+p . This extends to an invertible endofunctor p : Ch(Mat F 2 ) → Ch(Mat F 2 ) in the obvious way. D Pushouts and properties of codes Here we describe a few problems with using general pushouts to construct new quantum codes. First, in a certain sense the pushout of LDPC codes is not necessarily LDPC. To illustrate this, consider the following pushout of graphs: As ϑ is cocontinuous this pushout exists also in Ch(Mat F 2 ). There, it represents a merge of two binary classical codes, although we can consider a binary linear code to just be a CSS code without any Z measurements. As a consequence, we have two initial codes with P X having maximal weights 1 each, and the merged code has maximal weight 4. Evidently, one can scale this with the size of the input graphs: here, the input graphs each have 3 edges, but if there are graphs with m edges each (and weight 1) and the apex with m vertices (and weight 0) then the pushout graph will have maximal weight m + 1. As a consequence the family of pushout graphs as m scales is not bounded above by a constant, and so the corresponding family of codes is not LDPC. be a monic span in Ch(Mat F 2 ), and let Q • be the pushout chain complex of this monic span. Further, let the monic span be a representative of a family of monic spans which are parameterised by some n ∈ N, and let A • , C • and D • be the Z-type complexes of quantum LDPC codes. Then (Q • , Q * • ) is also LDPC. Formulating this conjecture properly requires specifying what it means for a monic span to be parameterised. The above conjecture is clearly not an if and only if, as balanced products are not pushouts of monic spans. 
Lastly, taking pushouts evidently preserves neither homologies nor code distances, as easy examples with lattice surgery demonstrate. Moreover, we do not know of a way of giving bounds on these quantities for general pushouts, although again we suspect it should be easier for monic spans. E Octagonal surface code patch Consider the following patch of surface code: where the bristled edges are rough boundaries, and the diagonal edges are smooth boundaries. We have abstracted away from the actual cell complex as the tessellation is not important. Z-type logical operators take the form of strings extending from one rough boundary to another, e.g. Two strings belong to the same equivalence class iff they are isotopic on the surface, allowing for the endpoints to slide up and down a rough boundary. There are exactly 3 nontrivial such classes out of which all other strings can be composed. As a consequence, this patch of surface code has logical space V with dimV = 2 3 = 8. 8 We can choose a basis for this logical space, which has logical Z operators with representatives: where the middle operator can be smoothly deformed to a vertical line from top to bottom if desired. Recall that on the surface code an X operator anticommutes with a Z operator iff the strings cross an odd number of times. Thus, given the basis above, the duality pairing of Lemma 4.4 forces a similar basis of X operators, with representatives: We see that Z 1 is contained entirely within Z 2 on physical qubits. Thus it is possible to construct a Z merge which is not separated, in the parlance of Definition 5.12. If we choose a different representative, by deforming Z 2 to be a vertical line, then we can also perform a separated Z merge. F Fault tolerant Z-merge with the Shor code In this appendix we work through an example explicitly, using the techniques of Section 6 to perform a distance 3 fault-tolerant Z ⊗ Z measurement between two copies of the Shor code, for which see Example 4.7. Let us say the two copies are labelled (C • ,C * • ) and (D • , D * • ), with We will use the Z operator Z 1 ⊗ Z 4 ⊗ Z 7 , denoted u = 1 0 0 1 0 0 1 0 0 , with u ∈ C 0 and u ∈ D 0 , to glue along. The logical operator subcomplex V • is then For T • we take the two pushouts from Definition 6.5. First, we have with R 1 = W 1 ⊕C 1 , as V 1 = 0. The other components of R • require taking quotients, identifying elements of W 0 and C 0 , and the same for W −1 and C −1 . One can then use Definition 5.1 to show that For the second pushout, that is we then have The differentials are somewhat unwieldy, but we include them for completeness: For the fault-tolerant Z ⊗ Z measurement, we therefore start with the code ((C ⊕ D) • , (C ⊕ D) * • ). Recall that this has d = 3. We then initialise the 2 new qubits in the |+ state and measure 3 rounds of the stabilisers specified by ∂ T • 0 and ∂ T • −1 . As the 2 new qubits each participate in 2 of the new Zmeasurements, the product of the outcomes is insensitive to initialisation errors. We apply the gaugefixing operators from Example 6.2 to correct for measurements of the 3 new Z-measurements which output the -1 measurement outcome. We end up with the code (T • , T * • ).
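As a small companion to the Shor-code merge of Appendix F, the following sketch checks, assuming the standard [[9,1,3]] Shor-code stabilisers (our assumption; Example 4.7 is not reproduced here), that the gluing operator u = Z_1 Z_4 Z_7 commutes with every stabiliser and is not itself a product of Z-type checks, i.e. it is a genuine logical Z representative.

```python
# Sketch, assuming the standard [[9,1,3]] Shor-code stabiliser conventions:
# verify that u = Z1 Z4 Z7 commutes with the X-type checks (Z-type checks commute
# with any Z operator automatically) and is not a product of Z-type checks.
import numpy as np

n = 9
def support(qubits):        # F2 row vector with 1s on the given qubits (1-indexed)
    v = np.zeros(n, dtype=int)
    v[[q - 1 for q in qubits]] = 1
    return v

z_checks = np.array([support(p) for p in [(1, 2), (2, 3), (4, 5), (5, 6), (7, 8), (8, 9)]])
x_checks = np.array([support(p) for p in [(1, 2, 3, 4, 5, 6), (4, 5, 6, 7, 8, 9)]])
u = support((1, 4, 7))

# A Z operator commutes with an X check iff their supports overlap an even number of times.
print("commutes with X checks:", bool(np.all((x_checks @ u) % 2 == 0)))

def rank_f2(M):
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return r

in_z_stabiliser_group = rank_f2(np.vstack([z_checks, u])) == rank_f2(z_checks)
print("is a Z-stabiliser product:", in_z_stabiliser_group)   # expect False
```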
22,803.2
2023-01-31T00:00:00.000
[ "Computer Science", "Mathematics" ]
Flow-induced crystallisation of polymers from aqueous solution Synthetic polymers are thoroughly embedded in the modern society and their consumption grows annually. Efficient routes to their production and processing have never been more important. In this respect, silk protein fibrillation is superior to conventional polymer processing, not only by achieving outstanding physical properties of materials, such as high tensile strength and toughness, but also improved process energy efficiency. Natural silk solidifies in response to flow of the liquid using conformation-dependent intermolecular interactions to desolvate (denature) protein chains. This mechanism is reproduced here by an aqueous poly(ethylene oxide) (PEO) solution, which solidifies at ambient conditions when subjected to flow. The transition requires that an energy threshold is exceeded by the flow conditions, which disrupts a protective hydration shell around polymer molecules, releasing them from a metastable state into the thermodynamically favoured crystalline state. This mechanism requires vastly lower energy inputs and demonstrates an alternative route for polymer processing. With the rise in polymer consumption, energy efficient techniques for polymer processing become more important. Using poly(ethylene oxide) aqueous solutions, the authors show that flow can causes a change of polymer solubility resulting in polymer crystallisation at ambient conditions. I n the natural world numerous strategies for efficient processing of materials have evolved 1,2 , and one such solution has been recently highlighted in natural silk spinning 3,4 . Spiders and silk worms are able to extrude an aqueous polymer solution, a liquid silk dope, which solidifies to form functional structures such as webs and cocoons. Silk is widely known to have special properties such as the combination of high tensile strength, durability and biocompatibility 5 , but infrequently mentioned is its ability to denature, that is convert from liquid to solid, triggered by flow. This unique property gives the animal the ability to create solid fibres from liquid silk dope, stored inside its body, in a much more energetically efficient way than fibres made from synthetic thermoplastics. In order to create crystal nuclei by shear flow, a certain amount of specific mechanical work must be performed on a polymer melt 6,7 . Silk behaves in a similar fashion, although in vitro measurements show that the specific work required to convert silk from liquid to solid by using just flow is orders of magnitude smaller than that of thermoplastics and the whole process takes place at ambient conditions 3 . In addition, there is an indication that animals can speed up the nucleation step by a careful control of the pH and ion concentration in the processing environment 8 . When extruded through a spinning duct into a fibre, the solidification is not by the commonly encountered mechanisms of heat transfer or crosslinking, specifically neither cooling nor chemical reaction, it is solely converted from one phase to another by the application of flow displacing the hydration layer surrounding silk protein molecules. To date no other material has been reported which can reproduce this mechanism under ambient conditions. Silk protein solidification is thought to be dependent on an energetically bound shell of water molecules preventing hydrophobic regions of individual proteins from intermolecular hydrogen bonding 9 . 
If this hydration shell is disturbed by flow stretching the molecules during spinning, a phase transition is facilitated by the formation of intermolecular hydrogen bonds. Flow removes the water layer by changing the conformational order of proteins and this facilitates inter-protein interactions. The term aquamelt was coined to describe materials with this type of behaviour 3 , but in general terms this material could be classified as a metastable aqueous polymer solution. The bound water layer plays a crucial role by keeping hydrophobic domains separated from each other in a liquid metastable state, which is then converted under the flow into a thermodynamically stable solid phase. Although this process takes place in water at ambient conditions, the resulting solid is water-insoluble with a melting point, T m , as high as 257°C 10 corresponding to the crystallised peptide beta-sheets. It is well-established that poly(ethylene oxide) (PEO) molecules in aqueous solutions are surrounded by a hydration layer similar to proteins 11,12 . Moreover, it has recently been demonstrated by molecular dynamic (MD) simulations that stretching of oligomer PEO chains dissolved in water initiates interchain aggregation, which ultimately leads to the phase separation of the PEO solution with the formation of highly oriented fibrillar nanostructures 13 . The aggregation was related to the change of PEO conformation making specific hydrogen-bond-induced solvation of PEO in water unfavourable which destroys the hydration layer. In this respect some observations of PEO fibrillation from aqueous solution under strong flows [14][15][16] , previously assumed to be driven by PEO and water phase separation, could be explained by these recent MD simulations 13 . Theoretical results indicate that solidification of PEO in water solution can be triggered by stretching in analogy with silk protein dopes. This work is inspired by the observation that silk solutions require orders of magnitude less work to induce crystallisation compared to conventional polymer melts, it tests the hypothesis that this behaviour is not unique to silk proteins but a feature of polymer solutions with specific interactions, e.g., hydrogen bonds. Using rheological properties of PEO and structural techniques based on birefringence and X-ray scattering, it is demonstrated herein that a simple synthetic polymer, PEO, with a conformation-dependent hydration layer [11][12][13] , can be solidified and crystallised upon flow. The processing conditions required for flow-induced nucleation and crystallisation of the polymer are quantified, and related to the molecular relaxation times. Results A metastable hydration shell. PEO is crystallisable and water soluble, due to the similarity of its oxygen-oxygen spacing to that seen in liquid water molecules 12 , so also demonstrate the properties of a metastable aqueous polymer solution. The hydrophobic methylene groups of the polymer are prevented from coming into contact with each other by a sheath of bound water (Fig. 1a), consisting of about 1.6 water molecules per PEO repeat unit 17,18 , which is confirmed using differential scanning calorimetry (DSC) ( In the quiescent state a PEO chains are coiled globular molecules (PEO segments-ball representation, water moleculesstick representation), surrounded by a protective sheath of water molecules (blue dashed lines), which prevents PEO segments from polymer intermolecular interactions. 
Crystallisation is prevented by this hydration sheath even when cooled far below the melting point. However, when flow is applied, b molecules become oriented and stretched along the flow direction leading to breakage of hydrogen bonds (dotted lines) and rupture of the hydration shell. Stretched segments of desolvated PEO, similar in orientation and conformation to that of a PEO crystal are exposed to each other. Following removal of water molecules from between the chains establishing polymer intermolecular interactions, c PEO chains crystallise in a helical conformation creating a solid phase. "Methods" section). In molten (anhydrous) PEO the ether stretching band is observed at 1097 cm −1 , whereas in wellsolvated dilute PEO (such as PEO in 50% w/w aqueous solution) the ether stretching band is observed at 1084 cm −1 (Supplementary Fig. 1A). As water is added to PEO the peak position falls rapidly reaching the limiting solvated value at concentrations around 60% w/w ( Supplementary Fig. 1B). The decrease in the C-O-C stretching frequency upon adding of water is commonly attributed to the formation of hydrogen bonds between oxygen atoms in the ether backbone of PEO and water molecules 19 . Polymer chains cannot come into close proximity without the hydration shells rupturing and de-solvation occurring. This sheath could be destabilised by a stimulus such as flow (Fig. 1b), stretching the PEO and leading to partial dehydration and aggregation of the polymer chains as predicted by MD simulations 13 . When the stretching is released the dehydrated PEO chains are likely to relax into their stable 7 2 helical crystal structure (space group P2 1 /a) because of chain flexibility and the PEO-PEO intermolecular forces 20 (Fig. 1c). This proposed mechanism is qualitatively different from the shish-kebab formation that demonstrated for polymers deposited on a free surface during the stirring of solutions of supercooled polyethylene-xylene 21,22 or PEO-ethanol 23 solutions. PEO dehydration can also be stimulated by thermal treatments. An increase of temperature decreases the solvent quality through the reduction of hydrogen bonds; thereby PEO aqueous solutions undergo phase separation at a lower critical solution temperature of about 100°C 24,25 . Another example is cold crystallisation of PEO 26,27 , which is observed on heating preliminary cooled PEO-water mixtures, where PEO is present in a glassy state, and takes place at temperatures below solidus line of the PEO-water eutectic phase diagram 26 (about −21°C). For these reasons the temperatures used for the shear experiments herein are above liquidus line of the PEO-water phase diagram and below 80°C (see "Methods" section), and cannot stimulate the PEO and water phase separation without an external impact of flow. A synthetic material crystallised by a flow (similar to a silk dope) can be created by dissolving a bimodal blend of linear PEO in water (see "Methods" section). An aqueous solution containing 50% w/w PEO exhibits a single melting transition from spherulitic crystals between −15 and +5°C (Fig. 2a). However, the polymer solution can be held at a temperature below the spherulite melting point without crystallising instantly due to significant hysteresis. In order to create crystal nuclei and cause crystallisation under quiescent conditions, the sample had to be cooled below the solidus line 26 to −30°C for about an hour (Fig. 2b). 
This hysteresis in melting/crystallisation presents an opportunity to cool the sample below T m , and hold it in a metastable state for a significant amount of time without solidification (Fig. 1a) as flow (Fig. 1b, c), solidify on demand. This phenomenon can be used to solidify PEO aqueous solution through flow-induced nucleation analogous to that used by silk worms and spiders. The flow can be generated in a controlled fashion by a rotational rheometer following a melt-cool cycle (Fig. 2d). Disruption of metastable state by flow. A birefringence-based, shear-induced polarised light imaging (SIPLI) technique 28 , previously developed to study flow-induced crystallisation of polymers 29 , has been used to measure flow conditions for the nucleation of PEO in water ( Fig. 3a and Supplementary Fig. 2). During a shear pulse long PEO molecules (nominal molecular weight M w = 2 MDa) are stretched by the flow at shear rates greater than the inverse Rouse time (_ γ R >τ À1 R ) (see "Methods" section and Supplementary Table 1) indicated by a weak Maltese Cross pattern in the polarised light image (PLI) (Fig. 3a, 1 s). Modelling 13 and experiment 30 show that PEO chain stretching results in breakage of the bifurcated hydrogen bonds between PEO and water molecules, leading to dehydrated polymer segments (Fig. 1b). The polymers conformational entropy is also reduced upon stretching, increasing Gibbs free energy and reducing the energy barrier for the crystal nuclei formation (Fig. 1c). As the shear pulse ceases the stretched polymer chains relax and the corresponding Maltese Cross pattern fades within a fraction of second (Fig. 3a, 25 s), consequently, the sample appears as it did before the shear pulse (Fig. 3a, −900 s). However, the shear pulse has performed a certain amount of specific work on the solution allowing multiple stretched segments of the long PEO molecules to combine to form crystal nuclei 31 and this is particularly visible in the rectangular time-lapse image. Whilst these nuclei are too small and too dilute to observe optically, or by any other method, over time they grow into larger oriented polymer crystals which can be detected using polarised light. It should be noted that in the quiescent state this sample does not crystallise (at least for a day) but following a shear pulse forms oriented crystals after a few minutes. A strong Maltese Cross, indicating the formation of oriented crystals, appears on the outside edge of the image (Fig. 3a, 380 s), which grows towards the centre over time before stabilising at a certain radius (Fig. 3a, 1800 s) corresponding to the minimum (boundary) shear rate ( _ γ b ) required to create oriented polymer nuclei at that particular time of the shear pulse (t p ) 7 . Thus, in analogy to natural silks, a solid phase has been created from a metastable liquid by the application of flow (see schematic in Fig. 1). Repeating the SIPLI experiment, converting liquid to solid, at various angular speeds and shear pulse durations, highlights the relationship between shear rate and shear time required for the formation of PEO nuclei in water. Experiments on 50% w/w PEO at 0°C and 60% w/w at 25°C show a boundary shear rate inversely related to the shear time (Fig. 3b) as previously demonstrated for polymer melts 7 . The critical specific work required for the PEO nucleation, W c , can be calculated from _ γ b and t p (Fig. 3b), and the magnitude of complex viscosity of the PEO solutions, |η * | (Fig. 3c) (see "Methods" section). 
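A minimal numerical sketch of this estimate is given below: W_c is evaluated as |η*(γ̇_b)| γ̇_b² t_p, with the viscosity taken from a Cross-type fit and the Cox-Merz rule assumed, as detailed later in the "Methods" section. The Cross parameters, boundary shear rate and pulse time used here are placeholders for illustration, not the fitted values of this study.

```python
# Minimal sketch of the specific-work estimate W_c ≈ |η*(γ̇_b)| · γ̇_b² · t_p,
# with |η*| from a Cross-type fit and the Cox-Merz rule assumed (ω = γ̇).
# All parameter values below are placeholders, not the fitted values from the paper.
def cross_viscosity(rate, eta_0, eta_inf, k, m):
    """Cross model: eta_inf + (eta_0 - eta_inf) / (1 + (k * rate)**m), in Pa s."""
    return eta_inf + (eta_0 - eta_inf) / (1.0 + (k * rate) ** m)

def critical_specific_work(gamma_dot_b, t_pulse, **cross_params):
    """Specific work in Pa (equivalently J per m^3) for a rectangular shear pulse."""
    return cross_viscosity(gamma_dot_b, **cross_params) * gamma_dot_b ** 2 * t_pulse

# Example: boundary shear rate 20 s^-1 read off a SIPLI image, 60 s pulse.
W_c = critical_specific_work(20.0, 60.0, eta_0=100.0, eta_inf=5.0, k=0.1, m=0.8)
print(f"W_c ≈ {W_c / 1e6:.2f} MPa")
```

The same function can be evaluated along the measured (γ̇_b, t_p) pairs to build up a plot analogous to Fig. 3d.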
Just like polyolefin melts 7,32 the work required to create crystal nuclei is independent of the shear rate or shear time used (Fig. 3d). On shallow undercooling 60% w/w and 50% w/w PEO aqueous solutions nucleate at W c~1 MPa (Fig. 3d), similar to the values obtained for polyolefins. However, the undercooling has a large effect on the W c for nucleation (Fig. 3e, f). Performing flow-induced nucleation at lower temperatures (greater undercooling) takes the specific work for PEO from values typical of thermoplastics (~2 MPa) 7,32 to values similar to or even lower than those of silk (~0.1 MPa) 3 (Fig. 3f). At such high undercooling, however, the PEO solutions are much more susceptible to thermal nucleation and have to be used in a shorter period (within hours). In order to obtain detailed structural information, the flowinduced nucleation of 60% w/w solutions were further investigated using in-situ X-ray scattering techniques (Fig. 4). SAXS patterns show an abrupt change from isotropic weak-scattering of the initially amorphous PEO solution to highly-anisotropic strong-scattering corresponding to an oriented, semi-crystalline, lamellar morphology as shear time increased. The total scattering intensity and P 2 orientation function calculated from SAXS ( Fig. 4 and see "Methods" section) show this change at t p = 60 s, consistent with SIPLI experiments (Fig. 3), and highlight the boundary between highly orientated crystals nucleated after long shear times, and the amorphous polymer solution persistent at short shear times. Concurrent with the SAXS, WAXS patterns show a broad amorphous peak at t p ≤ 60 s, whereas at t p > 60 s clear Bragg peaks can be observed indicating the formation of 7 2 helical PEO crystal structure (space group P2 1 /a) 20 . Thus, both optical and X-ray scattering techniques produce consistent results confirming that nucleation and crystallisation of PEO from aqueous solution occurs under shear flow. Discussion It was impossible to initiate crystallisation at _ γ below~10 s −1 in the aqueous solutions studied (Fig. 3b). This observation is consistent with the low shear-rate threshold for 2 MDa PEO stretching defined by _ γ RC (Supplementary Table 1). In particular, _ γ RC estimated for the PEO molecules with higher-weight-average molecular weight (M z = 2851 kDa), indicative of higher molecular weight polymers present in the polymer ensemble, is similar to the experimentally detected value of the lowest shear rate resulting in PEO flow-induced nucleation (Fig. 3b). Thus, this observation confirms theoretical findings 13,30 that in order to initiate crystallisation of PEO molecules in an aqueous solution the molecules have to be stretched to remove the sheath of bound water exposing dehydrated polymer segments to each other for the crystal nucleation. There are many similarities between the flow-induced nucleation and crystallisation of polymers from aqueous solution and the flow-induced crystallisation of thermoplastics from the melt state. For polymer melts the process can be subdivided into three stages: stretching, nucleation, and alignment of the nuclei formed 33 . The stretching introduces conformational order into the polymer chains, reducing the energy barrier for nucleation, and flow delivers one stretched segment to another until they collide and form an aggregate which is larger than the critical size of a stable nucleus. Once formed, the nuclei align along the flow direction and oriented crystals grow. 
The same three stages are also seen in this study of aqueous polymer solutions with one crucial difference: in this case the stretching process not only induces conformational order but also removes the solvent sheath and, therefore, reduces two barriers to polymer nucleation and the subsequent crystallisation. To enable crystallisation both barriers must be affected by flow, it is not sufficient to just stretch the polymer in solution, the solvent sheath must also be removed allowing the stretched chains to aggregate. Attempts to carry out flow-induced nucleation of the bimodal PEO blend without water, from the melt state, were complicated due to the very small undercooling range. It was impossible to initiate crystallisation of PEO by shear flow at temperatures close to, but above, the PEO peak melting point (≥65°C). Conversely, at temperatures below the melting point (60, 62, and 63°C) thermal nucleation dominated, and in the experimental protocol used the samples crystallised while reaching thermal equilibrium before the shear pulse. Experiments at 64°C indicated some flowinduced nucleation ( Supplementary Fig. 3), however, a combination of temperature-driven nucleation together with the shear flow produced a different morphology with a low degree of orientation ( Supplementary Fig. 3 insets). These results demonstrate the importance of water in protecting PEO from thermal nucleation. The formation of an H-bonded sheath of water around each polymer chain (Fig. 1a) allows initiation of PEO crystallisation exclusively by a shear flow (Fig. 1b, c) over a much wider range of temperatures above and below the melting point of the solution. A metastable aqueous solution of a synthetic polymer that is converted into a crystalline solid with the flow overcoming the energetic barrier to nucleation is reported. This transformation occurs under ambient conditions requiring no chemical reaction, removal of heat, or evaporation of solvent. In common with silks, the PEO solution behaves like previously reported thermoplastics, that is, a specific amount of mechanical work needs to be performed before flow-induced nucleation can occur, and facilitate solidification. The results herein demonstrate the qualitative difference between metastable, aqueous polymer solutions, and nonpolar thermoplastics, where a large window of metastability can be accessed due to the conformation-dependent solubility of water-soluble polymers 12,13,30 . The polar nature of both the polymer and solvent means that solubility is not just the nonspecific effect of thermal motion (like in the regular solution model 34 ) but is based on specific interactions. In this manner, one of the natural world's methods of polymer processing, flowinduced phase transitions in aqueous solutions, has been replicated using synthetic materials, leading to vastly more energy efficient polymer processing. This behaviour could be a universal phenomenon in polymer solutions that have a specific interaction that is dependent on the conformation of the polymer, allowing polymer processing with much lower energy consumption. Considering the fact that PEO aqueous solutions show a stable cold crystallisation (as a result of water molecules binding to the polymer), this phenomenon could be used as a criterion for selecting systems suitable for this method of polymer processing. 
A similar suggestion was made for screening biocompatible polymer systems, where a stable cold crystallisation had been proposed as an index for bound water, indicating that those polymers would be biocompatible 27 . Methods Preparation and rheology of aqueous PEO bimodal blends. Poly(ethylene oxide) s with nominal molecular weights of 2 MDa (number-average molecular weight M n = 678 kDa, weight-average molecular weight M w = 1799 kDa, higher-weightaverage molecular weight M z = 2851 kDa and dispersity index M w /M n = 2.65, Supplementary Fig. 4) and also 20 kDa (M n = 19.3 kDa, weight-average molecular weight M w = 21.5 kDa, higher-weight-average molecular weight M z = 23.6 kDa and dispersity index M w /M n = 1.11, Supplementary Fig. 4) were purchased from Sigma-Aldrich and used as received without further purification. Deionised water from a PureLab source with a resistivity of 18.2 MΩ was used to make aqueous PEO solutions. In analogy with a previous research on thermoplastics 33,35 a bimodal PEO blend was used. Blending a small fraction of long polymer chains (7.5% w/w, M w = 2 MDa) with a polymer matrix of short chains (92.5% w/w, M w = 20 kDa) allows the polymer crystal nucleation to be triggered at a relatively small shear rate, that is above inverse Rouse time, _ γ> _ γ R ¼ τ À1 R , thereby stretching the long chains 7 while maintaining a reasonably low viscosity. A solvent mixing approach was used to prepare a homogenous bimodal PEO blend: the polymers were initially dissolved in water and the solvent was evaporated at a later stage. To form aqueous PEO solutions sheets of the blended PEO (100% w/w) where cut into strips and placed inside a polypropylene disposable syringe (10 ml) followed by adding the required mass of water. The syringe tube was then sealed by attaching a plug to the needle hole before being heated to 70°C to aid mixing. The PEO strips had visibly dissolved after a few hours. To homogenise the samples a second syringe was attached to the first and the mixture pumped back and forth between the tubes several times. The syringe tubes were then allowed to stand overnight to equilibrate the distribution of water, before being centrifuged at 4000×g for 10 min to remove any air bubbles. These aqueous solutions of bimodal PEO blends were then used in subsequent experiments. The rheological properties of the bimodal PEO blend and its aqueous solutions were measured using a stress-controlled rheometer (Physica MCR 301, Anton Paar, Graz, Austria) in parallel-plate rotational geometry (radius of the rotating shearing disk was 12.5 mm, gap between the plates was set at 0.5 mm). A sample was loaded by applying a small amount of PEO/water mixture from a syringe to the rheometer and then melted by heating to 80°C in the presence of a saturated water atmosphere to prevent evaporation. The heating step was used for homogenising the PEO/water mixture before the shear pulse. FTIR spectroscopy monitoring of PEO aqueous solutions at the elevated temperature indicated that there was no effect of this treatment on the sample composition ( Supplementary Fig. 1D, E). The fixture was then slowly lowered to prevent trapping air bubbles before being trimmed. A strain sweep performed at angular frequency 10 rad s −1 confirmed linear visco-elastic behaviour up to~10% strain, and subsequently frequency sweeps at 0.1% strain where performed at steps decreasing from high to low angular frequency (Fig. 3c and Supplementary Figs. 5 and 6). 
After the data collection, the magnitude of complex viscosity was fitted using a Cross model 36,37, |η*(ω)| = A_2 + (A_1 − A_2)/[1 + (kω)^m], where A_1, A_2, k, and m are fitting variables (Supplementary Fig. 5, right column). Estimation of relaxation times of the studied PEO molecules. Flow has two main effects on polymer behaviour in a melt or concentrated solution state: (i) an orientation of the polymer primitive path along the flow direction at shear rates γ̇ > τ_d⁻¹, where τ_d is the disengagement time associated with the time required for a polymer molecule to escape a topological tube created by its surrounding chains, and (ii) stretching of the polymer segments at higher shear rates γ̇ > τ_R⁻¹, where τ_R is the Rouse relaxation time. The latter is responsible for flow-induced nucleation of polymers 7,38. The relaxation times of PEO molecules can be estimated using Likhtman-McLeish theory for linear polymers 39: τ_d = 3τ_e Z³ (1 − 3.38/Z^{1/2} + 4.17/Z − 1.55/Z^{3/2}) and τ_R = τ_e Z², where Z = M_w/M_e is the number of entanglements per polymer chain, and M_e and τ_e are the molecular weight between entanglements and the Rouse time of an entangled polymer segment, respectively, available for PEO from the literature (M_e = 2 kDa 40,41 and τ_e = 5 × 10⁻⁸ s at 70°C 41). In order to estimate the PEO relaxation times at the experimental temperatures, a time-temperature horizontal (frequency-axis) shift coefficient obtained from the Williams-Landel-Ferry (WLF) equation, log₁₀ a_T = −C_1(T − T_ref)/[C_2 + (T − T_ref)], was used. The C_1 and C_2 parameters taken from the literature (C_1 = 6.9 and C_2 = 88 K at T_ref = −52°C, corresponding to the PEO glass transition temperature) 42,43 were consistent with the rheological data obtained for the PEO bimodal blend studied (Supplementary Fig. 6). A vertical shift coefficient was calculated from b_T = ρ(T)(T + 273.15)/[ρ(T_ref)(T_ref + 273.15)], assuming that the temperature dependence of the PEO density is expressed as ρ(T) = ρ_0 − C_3 T, where the polymer density at 0°C, ρ_0, and C_3 are available from the literature (1.14 g cm⁻³ and 8.08 × 10⁻⁴ g cm⁻³ °C⁻¹, respectively) 40,43. The WLF shift was then applied to the τ_e value to bring it to the desired temperature in order to estimate τ_d and τ_R using Likhtman-McLeish theory 39. Finally, the effect of polymer content in the aqueous solutions was accounted for by correcting the obtained τ_R values using an empirical scaling law, τ_RC = ϕ^{3.5} τ_R, proposed for concentrated solutions 44, where ϕ is the polymer mass concentration (Supplementary Table 1). Differential scanning calorimetry of PEO materials. By varying the concentration of aqueous PEO solutions, the melting point can be suppressed from that of the pure polymer (66°C) down to ambient temperature (~20°C) for a 60% w/w solution, and down to below 0°C for a composition of 50% w/w (Fig. 2a) 45. The peak T_m of these compositions was measured by DSC to be 66°C, 21°C and −2°C, respectively, using a DSC instrument (Pyris 1, Perkin-Elmer, Waltham, Massachusetts, USA) to measure the thermal response during both temperature scans and isothermal treatments. A rate of 10°C min⁻¹ was used for both the heating and cooling cycles during temperature scans. A typical experiment consisted of cooling the sample to −30°C and holding at this temperature for 5 minutes to allow crystallisation to occur, then heating to 80°C before cooling to −30°C once again. In order to crystallise a 50% w/w PEO aqueous solution (Fig.
2a), the hold temperature was reduced to −35°C and the hold time increased to 10 min. Isothermal DSC of 50% w/w PEO (Fig. 2b) was performed by cooling from 25°C to the target temperature at a rate of 10°C min −1 , and then holding at this temperature for 60 min. The hysteresis in DSC measurements clearly demonstrates that a hydration shell is present around PEO chains that prevents crystallisation until it is removed. If the hydration shell is absent or incomplete (less than 1.6 water molecules per monomer unit 17,18 ), the difference between T m and the temperature of crystallisation, T c , is consistently around 20°C (Fig. 2c and Supplementary Fig. 7) whereas in solutions containing more than 1.6 water molecules per monomer unit (complete hydration shell), crystallisation peaks are not observed by DSC, even at temperatures 50°C below the melting point ( Fig. 2c and Supplementary Fig. 7). In situ polarised light imaging. A Physica MCR 301 rheometer setup for parallelplate rotational geometry (radius of the rotating shearing disk is 12.5 mm) is combined with an optical attachment for SIPLI to carry out the measurements. In order to visualise flow both plates of the rheometer are used as optical components: the bottom plate, made of glass, functions as a window and the top shearing plate, made of polished steel, a mirror. Thus, linear-polarised light passing through bottom glass plate is reflected from the top shearing disk, making a double pass through the sample and then passing a second polariser (analyser) crossed with the original plane of polarisation before the PLI is recorded by a CCD camera 28 . Initially a PEO solution is loaded at room temperature (~21°C) before being heated to 80°C to melt any residual polymer crystals, remove thermal history and set the shear geometry gap (usually 0.5 mm) (Fig. 2c). It is then cooled to the desired temperature (25°C or 0°C for 60% w/w or 50% w/w PEO solutions, respectively) and allowed to reach thermal equilibrium over 15 min in a saturated water atmosphere (the instrument is equipped with a solvent trap) to prevent evaporation ( Supplementary Fig. 8). At t = 0 s a rectangular shear pulse is applied to the sample by rotating the top plate at the desired angular speed (0.2 rad s −1 ≤ ω ≤ 4.0 rad s −1 ) for the desired time (9 s ≤ t s ≤ 300 s), following the cessation of flow temperature is held constant while flow-induced nucleation is further monitored using the SIPLI. Since a parallel plate geometry is used, the shear pulse generates a range of shear rates across the sample increasing radially, _ γ ¼ ωr d , where r is the radial position and d is the sample thickness. Thus, the sample experiences a minimum shear rate at the centre of rotation, _ γ min ¼ 0 s À1 , and maximum shear rate at the edge, _ γ max ¼ ωR d , where R is the upper disk radius. A thermallyequilibrated PEO aqueous solution or PEO melt is thus sheared resulting in the formation of oriented PEO nuclei (not visible), which, after the cessation of shear, over time, grow into larger-oriented crystals and become visible in the polarised light as a truncated Maltese Cross around the outer part of the sample (Fig. 3a). This ring propagates towards the centre of the sample and ceases at a certain radius corresponding to the shear rate required for flow-induced nucleation 28,29 . The central non-birefringent (dark) part of the sample remains in a liquid state. A frame rate of 0.2 s −1 has been normally used to record PLIs during shear experiments. 
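As a numerical companion to the relaxation-time estimates described in the Methods above, the sketch below chains the Likhtman-McLeish expressions, a WLF shift of τ_e from its 70°C reference value, and the ϕ^3.5 concentration correction. The literature parameters are those quoted in the text; the bookkeeping of the shift (a ratio of WLF factors) is our reading of the procedure, and the numerical output is only indicative.

```python
# Sketch of the relaxation-time estimate described in the Methods (Likhtman-McLeish
# times, WLF-shifted and rescaled for concentration). Literature parameters are taken
# from the text; the exact shifting bookkeeping in the paper is paraphrased here.
import math

M_e   = 2.0e3          # entanglement molecular weight of PEO, g/mol
tau_e = 5.0e-8         # Rouse time of an entangled segment at 70 °C, s
C1, C2, T_ref = 6.9, 88.0, -52.0   # WLF parameters, T_ref at the PEO glass transition

def wlf_shift(T, T_from=70.0):
    """Ratio a_T(T) / a_T(T_from), both evaluated with the same WLF reference."""
    log_a = lambda t: -C1 * (t - T_ref) / (C2 + (t - T_ref))
    return 10.0 ** (log_a(T) - log_a(T_from))

def relaxation_times(M_w, T, phi=1.0):
    """tau_RC (concentration-corrected Rouse time) and tau_d (melt value), in seconds."""
    Z = M_w / M_e
    te = tau_e * wlf_shift(T)
    tau_R = te * Z**2
    tau_d = 3.0 * te * Z**3 * (1 - 3.38 / math.sqrt(Z) + 4.17 / Z - 1.55 / Z**1.5)
    return phi**3.5 * tau_R, tau_d   # the phi**3.5 law is quoted for tau_R only

tau_RC, tau_d = relaxation_times(M_w=2.851e6, T=25.0, phi=0.60)
print(f"stretch threshold 1/tau_RC ≈ {1.0 / tau_RC:.0f} s^-1")
```

For the M_z of the long-chain component in a 60% w/w solution at 25°C this returns a stretching threshold of order 10 s⁻¹, in line with the lowest shear rate at which flow-induced nucleation was observed.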
The entire experiment can be conveniently presented as a slice of all PLIs stacked together (sliced by a plane oriented at 45° to the polariser and analyser plane), where the y-axis corresponds to the shear rate experienced by the sample and the x-axis is time (Fig. 3a, rectangular image). Critical specific work for flow-induced crystallisation. In general, the specific work performed by a flow on a sheared sample is obtained from an integral calculated over the flow time t_p: W = ∫₀^{t_p} η[γ̇(t)] γ̇²(t) dt, where η[γ̇(t)] is the sample viscosity represented as a function of shear rate and γ̇(t) is the shear rate as a function of time. Since a shear pulse of rectangular shape has been used for the experiments (Fig. 2c), it can be assumed that the shear rate is independent of time over the pulse duration. Thus, the critical specific work value required for PEO nucleation under shear flow conditions was calculated using the simplified equation W_c = η(γ̇_b) γ̇_b² t_p, where γ̇_b is the shear rate corresponding to the radial position of the boundary between the crystalline and liquid parts of the sample (Fig. 3a) and η(γ̇_b) is the viscosity at this shear rate. Since steady-state viscosity measurements of aqueous PEO solutions at flow-induced nucleation conditions are complicated by an initiation of the polymer crystallisation, dynamic viscosity measurements have been performed instead (Fig. 3c and Supplementary Fig. 5) and the Cross model fitted to the experimental data (Fig. 3c and Supplementary Fig. 5, right column). It has been assumed for the specific work calculations that there were no transient effects due to flow and that the Cox-Merz rule holds: η(γ̇) = |η*(ω)| for ω = γ̇ 37. In situ X-ray scattering measurements. Small/wide-angle X-ray scattering (SAXS/WAXS) patterns were collected using a Xenocs Xeuss 2.0 laboratory beamline equipped with a high-flux gallium metal jet source (Excillum, Sweden, X-ray wavelength λ = 0.134 nm), and 2D Pilatus 1M and 100K pixel detectors (Dectris, Switzerland). Simultaneous measurements of SAXS and WAXS were collected over q ranges of 0.004 Å⁻¹ < q < 0.3 Å⁻¹ and 1.19 Å⁻¹ < q < 3.53 Å⁻¹ (14.5° < 2θ < 44.3°), respectively, where q = 4πλ⁻¹ sin θ is the modulus of the scattering vector and θ is one-half of the scattering angle. The 2D patterns were used without correction for background and amorphous PEO solution scattering, and radially integrated using the Foxtrot software supplied with the X-ray instrument. A CSS 450 shear cell (Linkam, Tadworth, UK) fitted with steel discs with a hole (static disk) and circularly-segmented milled slots (shearing disk) and Kapton™ windows was used for the in-situ scattering measurements. In a typical experiment, a PEO sample was loaded into the rotational parallel-plate shear cell while horizontal, then the shear cell was mounted vertically onto the SAXS/WAXS laboratory beamline. The shear/temperature protocol was then performed in analogy with the SIPLI measurements (Fig. 2c) and an isothermal crystallisation allowed to proceed after a shear pulse, with SAXS and WAXS collected simultaneously for 14 min at a frame rate of 1 min⁻¹. The formation of the lamellar structure associated with PEO crystallisation was estimated from the SAXS intensity calculated over the q range of 0.01 Å⁻¹ < q < 0.03 Å⁻¹ corresponding to the first-order diffraction peak of the lamellar structure (Fig. 4). Calculation of Hermans's P_2 orientation function. SAXS data have been used to calculate the degree of orientation of the PEO lamellar structure after flow-induced crystallisation (Fig. 4).
Two-dimensional SAXS patterns were azimuthally integrated over a q-range corresponding to the first-order and second-order lamella diffraction peaks, 0.01 Å⁻¹ < q < 0.06 Å⁻¹, in steps of one degree. The intensity values, I(φ), at specific azimuthal angles φ were used to calculate Hermans's orientation function 46, P_2 = (3⟨cos²φ⟩ − 1)/2, with ⟨cos²φ⟩ = ∫₀^{π/2} I(φ) cos²φ sin φ dφ / ∫₀^{π/2} I(φ) sin φ dφ, where φ is the angle that the lamella normal makes with a chosen direction, which in this work is associated with the flow direction. It has also been assumed in the expression for ⟨cos²φ⟩ that there is uniaxial orientation with symmetry around the shear direction, which enables the integration over the whole solid angle to be reduced to a quadrant. Since the experimental intensity data have a discrete distribution, the analytical integration described by the formula has been replaced by numerical integration. A P_2 value of 1, 0, and −0.5 indicates orientation parallel to the flow direction, random orientation, and orientation perpendicular to the flow direction, respectively. FTIR spectroscopy of PEO in the melt state and in aqueous solutions. In order to detect intermolecular interactions between PEO and water, infra-red spectroscopy, complementing the DSC measurements (Fig. 2c and Supplementary Fig. 7), has been performed (Supplementary Fig. 1). An experimental setup has been assembled combining the environmental control of a rheometer (Physica MCR 502, Anton Paar, Graz, Austria) and an FTIR spectrometer (Nicolet iS50, Thermo Fisher Scientific), based on an approach exploited in a previously published work 47. The instruments were coupled together by using an attenuated total reflection (ATR) accessory (Golden Gate, Specac, UK) acting on one side as the bottom plate of the rheometer parallel-plate geometry and on the other as an external sample holder of the FTIR spectrometer. The ATR accessory was incorporated in the optical path of the spectrometer via the external beam port, a set of infra-red mirrors and an external mercury cadmium telluride (MCT) detector mounted on an optical table. A PEO sample (either an aqueous solution or a bulk material) was loaded between the ATR crystal area and the top plate (a disk with a radius of 6 mm) of the rheometer equipped with a solvent trap to replicate the sample environment used for the SIPLI measurements, heated to 80°C, and IR spectra were subsequently recorded (Supplementary Fig. 1A) and analysed (Supplementary Fig. 1B-E). Data availability. The raw data that support the findings of this study are available at figshare.shef.ac.uk with the identifier https://doi.org/10.15131/shef.data.12044556 (see ref. 48).
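A short numerical version of the P_2 calculation described in the Methods above is sketched below; the discrete sums replace the quadrant integrals exactly as stated there. The synthetic intensity profiles are only used to check the two limiting values.

```python
# Minimal numerical sketch of the Hermans orientation function:
# <cos^2 φ> = Σ I(φ) cos^2 φ sin φ / Σ I(φ) sin φ over one quadrant, and
# P2 = (3 <cos^2 φ> - 1) / 2. Synthetic I(φ) profiles are used as input.
import numpy as np

def hermans_p2(phi_deg, intensity):
    phi = np.radians(phi_deg)
    w = intensity * np.sin(phi)
    cos2 = np.sum(w * np.cos(phi) ** 2) / np.sum(w)
    return 0.5 * (3.0 * cos2 - 1.0)

phi = np.arange(0.0, 90.0, 1.0)                      # azimuthal angle in 1° steps
sharp = np.exp(-(phi / 10.0) ** 2)                   # lamella normals near the flow direction
isotropic = np.ones_like(phi)
print(hermans_p2(phi, sharp), hermans_p2(phi, isotropic))   # close to 1 and to 0
```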
8,506.4
2020-07-06T00:00:00.000
[ "Materials Science" ]
Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus 19 C We analyse the Coulomb breakup of 19 C measured at 67 A MeV at RIKEN. We use the Coulomb-Corrected Eikonal (CCE) approximation to model the reaction and describe the one-neutron halo nucleus 19 C within Halo Effective Field Theory (EFT). At leading order we obtain a fair reproduction of the measured cross section as a function of energy and angle. The description is insensitive to the choice of optical potential, as long as it accurately represents the size of 18 C. It is also insensitive to the interior of the 19 C wave function. Comparison between theory and experiment thus enables us to infer asymptotic properties of the ground state of 19 C: these data put constraints on the one-neutron separation energy of this nucleus and, for a given binding energy, can be used to extract an asymptotic normalisation coefficient (ANC). These results are confirmed by CCE calculations employing next-to-leading order Halo EFT descriptions of 19 C: at this order the results for the Coulomb breakup cross section are completely insensitive to the choice of the regulator. Accordingly, this reaction can be used to constrain the one-neutron separation energy and ANC of 19 C. Introduction Measurements of nuclear reactions along several isotopic chains show that the neutron distribution becomes extended as the neutron dripline is approached [1,2].This has led to the identification of "neutron halos": situations where a significant fraction of the neutron probability distribution resides in the classically forbidden region [3].Up to Z = 6 we already have examples of fourneutron halos, e.g., 8 He, two-neutron halos, e.g., 22 C, 19 B, and one-neutron halos, e.g., 11 Be, 19 C.This last nucleus demonstrates the striking features of a one-neutron halo.Following the dissociation of the halo neutron from the 18 C core, the momentum distribution of either of these fragments is narrow [4][5][6][7], as one would expect from a spatially extended system.Moreover, the breakup cross section of this fragile structure is large [8][9][10].This is particularly true on a heavy target such as Pb, for which the reaction is strongly Coulomb dominated.In that case an enhanced E1 strength is observed at low core-neutron relative energy, which is sometimes called the "pygmy dipole resonance".It is perhaps counterintuitive that properties of the neutron distribution can be probed through an electromagnetic observable, but this significant low-energy E1 strength is a consequence of the extended neutron distribution dragging the center-ofmass of the halo away from the center-of-charge.It is thus related to the halo physics that yields a significant isotope shift in these systems-and this Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus relation can be formalised through the non-energy-weighted sum rule.For swave one-neutron halos this physics is "universal" in the sense that it depends only on the one-neutron separation energy and the Asymptotic Normalisation Coefficient (ANC) of the ground-state wave function. The Coulomb dissociation of 19 C was measured by Nakamura et al. at RIKEN at 67A MeV already in the last millenium [8,9].The large E1 strength below 1 MeV outgoing relative energy of the 18 C-neutron system indicates the presence of a neutron halo.Comparison with models of the reaction implied that this is an s-wave halo, with a one-neutron separation energy S n = 530 ± 130 keV. 
Halo Effective Field Theory (Halo EFT) provides a systematic way to analyse the Coulomb dissociation of one-neutron halos. (For a general introduction to Halo EFT and a review of the method's status as it stood in 2017, see Ref. [11].) Halo EFT expands the amplitude for the nuclear reaction in powers of the expansion parameter R_core/R_halo, where, in this case, R_core is the size of 18 C, which amounts to approximately 2.5 fm, and R_halo is the size of the neutron halo in 19 C, estimated to be about 6.5 fm. The calculation of Coulomb dissociation in Halo EFT confirms that the amplitude is universal at leading order, depending only on S_n and the charge-to-mass ratio of the target [11-13]. At next-to-leading order the asymptotic normalisation coefficient of the halo affects the amplitude. But, once S_n and the ANC are fixed, the amplitude is predicted, at least for s-wave halos, up to errors of order (R_core/R_halo)³ in the Halo EFT expansion. In Ref. [13] the photodissociation of 19 C was computed in Halo EFT and the equivalent photon approximation was used to convert the photodissociation cross section of 19 C into a Coulomb-breakup cross section. Acharya and Phillips extracted the value S_n = 575 ± 55(stat.) ± 20(EFT) keV from the low-energy (E < 1 MeV) and small-angle (θ < 2.2°) portion of the data from Ref. [8]. The reaction theory employed in Ref. [13] was quite rudimentary. In this work we couple an EFT description of the 19 C bound state to a more advanced treatment of the reaction on the 208 Pb target that uses the Coulomb-Corrected Eikonal approximation (CCE) [14-16]. This approximation corrects the erroneous treatment of the Coulomb interaction within the usual eikonal description of breakup reactions. It enables a computation of breakup cross sections at intermediate beam energies on both light and heavy targets that attains excellent agreement with fully dynamical reaction models while also retaining the simplicity and numerical efficiency of the usual eikonal approximation [15]. Within this implementation of the CCE, the 19 C bound state, and the 18 C-neutron continuum, are described within Halo EFT, viz. using a set of 18 C-neutron potentials of Gaussian shape. In a leading-order calculation the depth of the Gaussian is adjusted to reproduce a particular S_n. In a next-to-leading-order calculation, an additional term is added to the potential, and its parameter is adjusted to produce a specific ANC. Performing the calculation for a range of Gaussian widths checks whether the breakup cross section is insensitive to details of the potential. This imitates the strategy successfully employed for 11 Be and 15 C reactions on various targets in Refs. [17-20]. Coupling a reliable model of the reaction to a Halo-EFT description of the nucleus provides a detailed account of the reaction mechanism, while enabling us to study very systematically the influence that the halo nucleus' structure has on the reaction cross sections.
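For orientation, the sketch below simply evaluates the Halo-EFT expansion parameter implied by the radii quoted above and the naive relative truncation errors that go with it; the radii are the estimates from the text and nothing here depends on the reaction model.

```python
# Quick estimate of the Halo-EFT expansion parameter and naive truncation errors
# implied by the radii quoted above (R_core ≈ 2.5 fm, R_halo ≈ 6.5 fm).
R_core, R_halo = 2.5, 6.5              # fm, as quoted in the text
eps = R_core / R_halo
print(f"expansion parameter  eps   = {eps:.2f}")
print(f"                     eps^2 = {eps**2:.0%}")
print(f"beyond-NLO estimate  eps^3 = {eps**3:.0%}")   # the text quotes O(eps^3) after NLO
```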
The calculations described in this paper were initially performed as part of a week-long set of exercises at the TALENT school "Effective Field Theories in Light Nuclei: from Structure to Reactions" that took place at the Mainz Institute for Theoretical Physics in July-August 2022 [21].Students at the school (the majority of the authors in this paper) tuned the Gaussian potentials to reproduce specific scattering and bound-state parameters for the 19 C system.They then ran CCE calculations, predicted the Coulomb dissociation cross sections, and compared the result with data.The following sections describe their work and its outcomes, as follows.In Sec.2.2 we provide a brief summary of the reaction model and its implementation within the CCE.Section 3 lays out the leading-order (LO) calculation, presenting results for the 19 C system for Gaussians of widths ranging from 0.5 to 2.5 fm.These potentials are then used, together with the CCE, to predict the Coulomb breakup cross section.We find that the cross section scales with the square of the 19 C ANC, demonstrating that the reaction is almost exclusively peripheral.In Sec. 4 we explore the sensitivity of the results to the optical potentials chosen for the 18 C-208 Pb and neutron-208 Pb systems.In Sec. 5 we confirm that the cross section is insensitive to the interior of the 18 C-neutron wave function by performing a NLO calculation and finding (almost) the same result irrespective of the width of the Gaussian employed.In Sec. 6 we return to the LO potentials and vary the binding energy, in order to check the confidence interval given by Acharya and Phillips in Ref. [13].Finally, in Sec.7 we offer some conclusions and point out some interesting aspects of the EFT description of these Coulomb-dissociation data that, we believe, can motivate further theoretical and experimental studies of 19 C. Reaction model 2.1 Three-body model of Coulomb breakup To describe the breakup of 19 C on 208 Pb, we consider the usual three-body model of the reaction [16].The projectile P is seen as a two-body structure: a halo neutron (n of mass m n and charge nil) loosely-bound to a 18 C core assumed to be in its 0 + ground state (c of mass m c and charge Z c e).This two-body structure is described by the effective Hamiltonian where r is the c-n relative coordinate, µ = m c m n /(m c + m n ) is the c-n reduced mass, and V cn is an effective potential that describes the c-n interaction.As Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus discussed in Secs.3.1 and 5, we consider Halo-EFT interactions up to NLO [11,22].The eigenstates φ of H 0 describe the different states of the projectile.The negative-energy eigenstates correspond to the c-n bound states.They are discrete and, in addition to quantum numbers of the c-n orbital angular momentum l, the total angular momentum j, and its projection m, they are identified by the number of nodes in their radial wave function n r .Asymptotically, the radial part of these bound-state wave functions behaves as where k l is a modified spherical Bessel function of the second kind, κ nrlj is related to the eigenenergy of the state E nr lj = − 2 κ 2 nrlj /2µ, and C nrlj is the asymptotic normalisation coefficient (ANC) associated with that bound state.The positive-energy eigenstates describe the c-n continuum part of the projectile spectrum, viz. 
the broken up projectile.As such they are identified by their c-n relative energy E, in addition to the quantum number defining the partial wave l, j, and m. The lead target T is assumed to be a structureless cluster of mass m T and charge Z T e.Its interaction with the projectile components c and n is described by the optical potentials V cT and V nT , respectively.These potentials are found in the literature as explained in Sec. 4. Within this three-body model of the collision, studying the P -T collision reduces to solving the following Schrödinger equation with the three-body Hamiltonian where R is the coordinate of the projectile center of mass relative to the target, µ P T = m P m T /(m P + m T ) is the P -T reduced mass-with m P = m c + m nand R cT , resp.R nT , are the c-T , resp.n-T , relative coordinates.The total energy E T in Eq. ( 3) is related to the initial P -T kinetic energy and the eigenenergy of the projectile in its initial ground state φ nr0l0j0m0 through where K 0 is the wave vector of the incoming P -T relative motion; that direction defines the Z axis of the system of coordinates.The Schrödinger equation (3) has to be solved with the incoming condition that the projectile, in its ground state, is impinging on the target.Accordingly, the three-body wave function behaves as Various numerical techniques, based on different approximations, have been developed to solve this equation, see Ref. [16] for a recent review.For this study, we have considered the CCE [14][15][16], which is very efficient at the intermediate beam energy considered here. Coulomb-Corrected Eikonal approximation At sufficiently high energy, the eikonal approximation is quite reliable to describe the P -T collision [16,23].Within that approximation, the three-body wave function after the collision reads where the eikonal phase is given by with v = K 0 /µ P T the P -T initial velocity. Being based on an adiabatic description of the reaction, the usual eikonal approximation is valid only for reactions that take place over a short time, viz. that are dominated by the short-ranged nuclear interaction.When the Coulomb interaction is non-negligible, such as for the lead target considered in this study, the breakup cross section inferred from the expression (7) diverges [14,15].Margueron et al. 
have developed a correction, that efficiently solves that divergence [14].The main idea is to use the first order of the perturbation theory, which accounts for the projectile dynamics, to correct for the erroneous treatment of the Coulomb interaction at the eikonal approximation.In the CCE, the Coulomb contribution to the eikonal phase is replaced, at first order, by its corresponding perturbative estimate [15]: where χ N and χ C are, respectively, the nuclear and Coulomb contributions to the eikonal phase χ (8), and where the first-order phase reads with η = Z c Z T e 2 /(4πǫ 0 v) the P -T Sommerfeld parameter and ω = E − E nr0l0j0 the energy difference between the final continuum state and the initial bound state of the projectile.Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus By accounting for the projectile dynamics in the first-order treatment of the Coulomb interaction, this correction solves the aforementioned divergence issue.Moreover the expression (9) enables us to account also for the nuclear part of the P -T interaction at all orders, its interference with the Coulomb force, and, although only in an approximate way, for higher-order Coulomb effects.This CCE leads to breakup cross sections in excellent agreement with fully dynamical models [15].It is thus well suited to describe breakup reactions at intermediate energies on both light and heavy targets, while exhibiting the simplicity and numerical efficiency of a usual eikonal code.In this study we consider the CCE to compute the breakup cross section of 19 C impinging on 208 Pb at 67A MeV and compare these theoretical results with the data of Ref. [8]. 3 Leading-order calculation 3.1 Leading-order description of 19 C The one-neutron halo nucleus 19 C has a 1 2 + ground state that lies slightly more than half an MeV below the one-neutron separation threshold (S n = 0.58 ± 0.09 MeV [24]).Various experiments have confirmed the one-neutron halo structure of that state [4][5][6][7][8][9][10].Therefore it is usually described as a 18 C in its 0 + ground state to which a valence neutron is loosely bound in the s 1/2 partial wave.As mentioned earlier, we consider in the present study a Halo-EFT description of 19 C, assuming the halo neutron sits in a 0s 1/2 bound state, i.e., with n r = 0 nodes in the radial wave function. At leading order the 18 C-n interaction is described by a Gaussian potential: where the standard deviation of the Gaussian, σ, acts as a regulator.In the limit σ → 0 this becomes a three-dimensional δ-function.We consider σ = 0.5, 1, 1.5, 2, 2.5 fm.For each σ, the potential strength C 0 is adjusted to produce a 0s 1/2 19 C state that is bound by 0.58 MeV with respect to the 18 C-neutron threshold.The corresponding values of the C 0 s and the ANCs C 0s1/2 predicted for 19 C are given in Table 1. Figure 1 shows the reduced radial wave functions obtained within this LO Halo-EFT model of 19 C (a) normalised to unity and (b) divided by their ANC.Panel (b) shows that the wave functions have the same asymptotic behaviour (thin black dashed line), up to a multiplicative constant-as should be the case given the way in which they were constructed.It also shows they differ markedly at short range, viz.for r ∼ < 3 fm.This provides a straightforward way to test if the reaction is peripheral: if it is, the reaction cross section will scale as the square of the ANC.If the reaction is sensitive to the short-range piece of the 18 C-n wave functions then that scaling will break down. 
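The LO construction just described can be reproduced with a short script: for a given regulator width σ, the depth C_0 of the Gaussian is tuned so that the 0s_{1/2} state is bound by S_n = 0.58 MeV, and the ANC is then read off from the exponential tail of the normalised radial wave function. This is an independent illustrative sketch, not the "Chaconne" code used for the calculations in this paper; the masses and ħc are standard values, and the radial grid, depth scan and matching radius are arbitrary numerical choices. Its output can be compared against Table 1.

```python
# Illustrative sketch: tune the depth C0 of a Gaussian 18C-n potential of width sigma
# so that the s-wave state is bound by S_n = 0.58 MeV, then read off the ANC from the
# tail u(r) -> C exp(-kappa r). Standard masses; numerical settings are our own choices.
import numpy as np

hbarc = 197.327                      # MeV fm
m_n, m_c = 939.565, 18.0 * 931.494   # neutron and (approximate) 18C masses, MeV
mu = m_n * m_c / (m_n + m_c)
S_n = 0.58                           # MeV
kappa = np.sqrt(2.0 * mu * S_n) / hbarc
sigma = 1.5                          # fm, Gaussian regulator width

r = np.linspace(1e-4, 25.0, 5001)
h = r[1] - r[0]

def shoot(C0):
    """Outward radial solution u(r) at E = -S_n for V(r) = C0 exp(-r^2 / (2 sigma^2))."""
    V = C0 * np.exp(-r**2 / (2.0 * sigma**2))
    k2 = 2.0 * mu / hbarc**2 * (V + S_n)          # u'' = k2 * u
    u = np.zeros_like(r)
    u[0], u[1] = 0.0, h
    for i in range(1, len(r) - 1):                # simple finite-difference propagation
        u[i + 1] = 2.0 * u[i] - u[i - 1] + h**2 * k2[i] * u[i]
    return u

# Bracket the shallowest (node-less) bound state by scanning the depth, then bisect.
depths = np.arange(-5.0, -150.0, -1.0)
signs = [np.sign(shoot(C)[-1]) for C in depths]
idx = next(i for i in range(1, len(depths)) if signs[i] != signs[i - 1])
shallow, deep = depths[idx - 1], depths[idx]
for _ in range(60):
    mid = 0.5 * (shallow + deep)
    if np.sign(shoot(mid)[-1]) == np.sign(shoot(shallow)[-1]):
        shallow = mid
    else:
        deep = mid
C0 = 0.5 * (shallow + deep)

u = shoot(C0)
i_match = np.searchsorted(r, 10.0)                # read the tail well outside the potential
A_raw = u[i_match] * np.exp(kappa * r[i_match])
norm2 = np.sum(u[:i_match]**2) * h + A_raw**2 * np.exp(-2 * kappa * r[i_match]) / (2 * kappa)
anc = A_raw / np.sqrt(norm2)
print(f"C0 ≈ {C0:.2f} MeV, kappa = {kappa:.3f} fm^-1, ANC ≈ {anc:.2f} fm^-1/2")
```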
Coulomb breakup cross sections at LO We now execute the CCE code with the LO 18 C-n potentials of Sec.3.1 1 .As explained in Sec. 2, the interactions between the projectile constituents and the target are simulated by optical potentials selected from the literature.The reasons for this selection, and its effect on our calculations, will be discussed in Sec. 4. Figure 2 gives the direct CCE results-viz.without data-for the five values of the Gaussian range σ considered in Sec.3.1.The breakup cross section plotted as a function of the 18 C-n relative energy after dissociation is shown in Fig. 2(a), whereas Fig. 2(b) displays it as a function of the scattering angle of the 18 C-n centre of mass for a continuum energy 0 ≤ E ≤ 0.5 MeV.For σ = 1.5 fm (blue dash-dotted lines), the contributions to the cross section from s, p, and d waves in the 18 C-n continuum are shown separately.It is immediately clear that the reaction is dominated by an E1 transition from the s ground state to the p continuum, as expected for the part of the cross section mediated by a single E1 photon exchange between the 208 Pb nucleus and the 19 C projectile.For our LO calculation we take the p-wave phase shifts to be 1 An open-access version of the fortran code "Chaconne" has been developed for the TALENT school "Effective Field Theories in Light Nuclei: from Structure to Reactions" [21].The program, a user's manual, test input file, and the corresponding output, can be downloaded from the school website, see the documents attached to the lecture on nuclear reaction theory in the third week of the school https://indico.mitp.uni-mainz.de/event/279/timetable/#20220808.Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus 0, because there is no known state with negative parity at low energy.This makes the overall result rather simple, cf.Eq. ( 15) of Ref. [13].However, the presence of nuclear interactions between 18 C and 208 Pb and between the neutron and 208 Pb, as well as the possibility of multiple photon exchanges, produce noticeable contributions to the cross section from s and d waves in the 18 C-neutron continuum.Both s-and d-wave contributions become a larger fraction of the breakup cross section as the angle increases, although the d-wave piece stays a factor of a few below the s-wave one throughout the angular range of interest here.The s-wave effect is more important at lower relative energy, with the d-wave one growing as energy increases.This is a significant finding, because the CCE is nearly as simple mathematically as the first-order E1 treatment carried out in Ref. [13], but it allows us to quantify the nuclear contribution to breakup, its interference with the Coulomb force, and other quantal interferences seen in the oscillatory pattern of the angular distribution [15]. While the way that the reaction mechanism populates different partial waves in the continuum is interesting, the key finding from Fig. 2 is that population of anything other than the continuum p is small enough that the total cross section scales (nearly) perfectly with C 2 0s1/2 .This shows that reaction is purely peripheral, since it demonstrates that the breakup does not probe the short-range physics of the projectile. Because the cross sections scale with the C 2 0s1/2 and because the cross section exhibits little sensitivity to the choice of the nuclear part of the optical potential (see Sec. 
4), we can infer an ANC by fitting the calculations to the data.To avoid the regions where the nuclear interaction plays a role and where the d waves, which are not well constrained, might affect the calculation, we focus on the forward-angle region-viz.θ < 2 • -of the angular distribution, which is restricted to small 18 C-n relative energies-viz.0 ≤ E ≤ 0.5 MeV.As seen in Fig. 2, that region is dominated by the p-wave contribution. In order to extract a reliable value for C 0s1/2 from data it is necessary to account for the experimental resolution.This is done by folding the theoretical cross sections with the resolution provided in the experimental paper [8].After folding, we scale the calculations to the data.Minimizing the χ 2 with respect to the scaling factor enables us to infer the ANC: This value, and its uncertainty, are independent of the value of σ chosen for the 18 C-n potential (11), confirming the independence of the calculations to the short-range physics, and hence the accuracy of the method. (In)Sensitivity of the calculations to the nuclear optical potentials To test the sensitivity of the calculations to the choice of optical potentials, they have been repeated with different interactions found in the literature. The results of the previous section followed Typel and Shyam in Ref. [25] and chose for the 18 C-208 Pb interaction a potential developed by Buenerd et al. to reproduce the elastic scattering of 13 C off 208 Pb at 390 MeV (30A MeV).The fact that this is a rather different energy than the one employed in Refs.[8,9] is ignored, but the radii are scaled to the actual size of the core of the projectile. For the n-208 Pb interaction, in the previous section we also followed Typel and Shyam and use the Becchetti and Greenlees global optical potential (BG) [26]. For a second 18 C-208 Pb optical potential, we use the one considered by Typel and Shyam in Ref. [25] to simulate the interaction between 11 Be and 208 Pb at 70A MeV.That potential is based on an α-208 Pb potential developed by Bonin et al. to reproduce that elastic scattering at 288 MeV (72A MeV), from which we rescale the radii to account for the size of the nucleus.As a second n-208 Pb potential choice, we opt for the Koning-Delaroche global optical potential (KD) [27]. The results of these different calculations obtained with the LO 18 C-n potential with σ = 1.5 fm are shown in Fig. 3.It is clear that these choices have very limited influence on this Coulomb-dominated reaction.We note that if we strictly follow Typel and Shyam's procedure from Ref. [25] and do not adjust the radius of the projectile carbon nucleus then we obtain a higher cross section than is seen here.The size of the core is an important parameter in these calculations, even if the results are not sensitive to the functional form of the potential's radial dependence. NLO calculations At NLO the 18 C-neutron potential takes the form: Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus where the parameter C0 is not necessarily-indeed not usually-the same as the parameter C 0 .This time we consider potentials with different σs and, in each case adjust them to produce S n = 0.58 MeV and C 0s1/2 = 0.81 fm −1/2 .We achieve this for σ = 1.0, 1.5, and 2.5 fm.The resulting cross sections predicted by the CCE are now completely independent of σ, see Fig. 
4, where the theoretical cross sections have been folded with the experimental resolution [8].Moreover, despite the fact that we fit the ANC only to the forward-angle region of the angular distribution limited to E ≤ 0.5 MeV, we find that all calculations match the experimental energy distribution over nearly the entire experimental energy range, viz.out to E = 4 MeV.The excellent agreement with experiment, and the insensitivity of the NLO results to the regulator σ, confirm the value of the ANC we inferred from the data using our LO calculations.This also shows that it is not necessary to go beyond NLO to explain the main features of the data. Note that no NLO potential could be found for σ = 0.5 fm, i.e, we could not find parameters to fit simultaneously the binding energy and the ANC inferred from the data.This is a realisation of the Wigner bound [28][29][30] for this system: for any C 2 0s1/2 larger than 2(2µS n / 2 ) 1/2 the integral of the asymptotic wave function from zero to infinity is larger than one.It follows that for small enough σ it is simply impossible to produce a normalisable wave function with this ANC 2 . Significant discrepancies between theory and experiment appear at about E ≈ 1.3 MeV and 2.8 MeV in the energy distribution.At those energies the data seem to be notably larger than our calculations.The large error bars on the experimental data leave open the possibility that these are statistical fluctuations and not due to an effect of final-state interactions 2 .However In both cases the calculations have been folded by the experimental resolution [8]. these deviations could hint at the presence of resonances in the 19 C system at these energies.Refs.[7,10] suggest the existence of a 5 2 + resonance at either E = 1.42 (10) MeV [7] or E = 1.46 (10) MeV [10].This state might have a dominant single-particle structure with a 18 C core in its 0 + ground state and neutron in a d 5/2 resonance and could significantly affect the breakup cross section [31][32][33].Within the usual Halo-EFT power counting, it would therefore enter beyond NLO.Following what has been done in Refs.[17,33] an extension of this work could study this possibility. At large angles in the angular distribution, we also observe that the calculations slightly overestimate the data.This is a region where the nuclear interaction plays a more significant role, see Fig. 3(b), and hence is subject to caution because this difference might be related to the choice of optical potentials.It could also come from the 18 C-n final-state interaction in the d wave, which is not constrained at NLO; see Fig. 2(b). The good agreement with experiment suggests that, in absence of more precise measurements, a Halo-EFT description at NLO is both necessary and sufficient to describe most of the breakup data. Sensitivity to the binding energy The binding energy quoted in the most recent atomic mass database [24] exhibits a rather large uncertainty: S n = 0.58 ± 0.09 MeV.To gauge the influence of this observable on the calculations, we repeat breakup calculations using LO Halo-EFT 18 C-n potentials fitted to the lower (S n = 0.49 MeV) and upper (S n = 0.67 MeV) end of this 68% confidence interval.We consider σ = 1.5 fm for this test. Fig. 
5 displays the results folded with the experimental resolution and fitted to the data by rescaling the CCE calculation.The ANCs hence obtained differ significantly from the one quoted above: C 0s1/2 (S n = 0.49 MeV) = 0.62 ± 0.02 fm −1/2 and C 0s1/2 (S n = 0.67 MeV) = 1.02 ± 0.03 fm −1/2 .This shows that Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus the ANC and binding energy are strongly correlated, as one would expect from the LO Halo EFT relation [11,34] We note that, while the strict scaling fo the ANC-squared at LO is with √ S n , the higher-order terms indicated in Eq. ( 14) are ultimately quite important in the case of 19 C. The ANC-squareds inferred for different binding energies scale markedly more strongly with S n than √ S n .Although the prediction with the lowest binding energy seems to better reproduce the angular distribution throughout the entire experimental angular range [see the green dashed line in Fig. 5(b)], the corresponding energy distribution does not fit the data at E > 0.5 MeV [see Fig. 5(a)].Using a higher binding energy leads to less good agreement with the data in both observables (red solid lines in Fig. 5).This suggests that the actual binding energy is probably close to the central value we have considered up to Sec. 5, viz.S n = 0.58 MeV.It also indicates that, with more thorough uncertainty quantification, including an estimate of the impact of higher-order effects in the EFT [35][36][37], and the uncertainty due to the choice of optical potential, analysis of these data could yield a new, more precise, value, for the 19 C one-neutron separation energy. Conclusion and Needed Future Work Many experiments have shown that 19 C exhibits a clear one-neutron halo structure in its 1 2 + ground state [4][5][6][7][8][9][10].However, in contrast to the well-studied cases of 11 Be and 15 C, there is still much to learn about this nucleus, including its one-neutron separation energy S n .In this paper, we present a new analysis of the Coulomb breakup of 19 C on 208 Pb at 67A MeV, which has been measured at RIKEN [8].To this aim, we have used a Halo-EFT description of the projectile within the Coulomb Corrected Eikonal approximation (CCE), which has shown to provide reliable cross sections for this kind of reaction [15], while exhibiting a small numerical cost. As expected these cross sections are strongly dominated by an E1 transition from the 0s 1/2 ground state of the nucleus towards its 18 C-n continuum.Being Coulomb dominated, they exhibit a minor dependence to the optical potentials used to simulate the nuclear interaction between the projectile constituents ( 18 C and n) and the 208 Pb target. 
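Both statements above, the Wigner-bound obstruction at small σ and the stronger-than-√S_n growth of the inferred ANC², can be checked with a few lines of arithmetic. The sketch below uses the separation energies and ANCs quoted in the text together with an assumed ¹⁸C-n reduced mass; it only illustrates the argument and is not a substitute for the authors' analysis.

```python
import numpy as np

hbarc = 197.327                       # MeV fm
mn, mc = 939.565, 18 * 931.494        # assumed neutron and 18C masses (MeV/c^2)
mu = mn * mc / (mn + mc)

def kappa(Sn):
    """Bound-state wave number kappa = sqrt(2 mu Sn) / hbar, in fm^-1."""
    return np.sqrt(2 * mu * Sn) / hbarc

# Wigner-bound argument: in the zero-range limit, normalising C exp(-kappa r)
# on (0, infinity) alone requires C^2 <= 2 kappa.
Sn0, C0 = 0.58, 0.81                  # MeV, fm^-1/2 (values quoted in the text)
print(f"2*kappa = {2*kappa(Sn0):.3f} fm^-1  vs  C^2 = {C0**2:.3f} fm^-1")
# C^2 exceeds 2*kappa, so a normalisable wave function with this ANC needs a
# finite-range (large enough sigma) potential, consistent with the absence of
# an NLO fit at sigma = 0.5 fm.

# Scaling of the inferred ANC^2 with Sn, compared with the LO expectation ~ sqrt(Sn)
data = {0.49: 0.62, 0.58: 0.81, 0.67: 1.02}   # Sn (MeV) : fitted ANC (fm^-1/2)
for Sn, C in data.items():
    ratio_C2 = (C / C0) ** 2
    ratio_LO = np.sqrt(Sn / Sn0)
    print(f"Sn = {Sn:.2f} MeV : C^2/C0^2 = {ratio_C2:.2f}, sqrt(Sn/Sn0) = {ratio_LO:.2f}")
```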
Using a LO description of 19 C, we have found out that the calculated cross sections are nearly proportional to the square of the ANC of the radial 18 C-n wave function C 0s1/2 .This clearly shows that the reaction is purely peripheral in the sense that it probes only the tail of the ground state wave function.It also indicates that an ANC for the actual nucleus can be inferred from the data.To reduce the uncertainty related to the choice of the optical potentials as well as to avoid the influence of the d-wave continuum, we select forwardangle data at low 18 C-n energy to scale our calculations to the experiment.The value of the ANC hence obtained, C 0s1/2 = 0.81 ± 0.02 fm −1/2 , is independent of the Halo-EFT regulator σ.NLO descriptions of 19 C fitted to reproduce both S n and that value of C 0s1/2 provide an excellent agreement with the data on nearly their entire energy and angular ranges, independently of the value of σ. Additional tests have shown a strong dependence of the calculations to the binding energy of the nucleus, which, unfortunately is not well known experimentally.However, our tests show that a systematic analysis of Coulombbreakup data, e.g., through Bayesian methods [35][36][37], could provide a significant constraint on that structure observable.Thanks to its small computational cost and accurate description of the reaction process, the CCE would be the ideal reaction-dynamics treatment for such a future statistical analysis. This theoretical study extends a series of analyses of reactions involving one-neutron halo nuclei, in which a Halo-EFT description of the exotic nucleus is coupled to realistic models of reactions [17][18][19][20].Our work confirms the validity of this approach for the Coulomb breakup of 19 C, and shows that crucial nuclear-structure information can be inferred from such a study.Unfortunately, the experimental uncertainty of the RIKEN data considered in this work [8] is too large to draw reliable conclusions on these structure observables.Accordingly, we advocate for new experiments with smaller uncertainties to pin down these values.Similar breakup data would help us constrain both the binding energy of 19 C and its ANC.Breakup data on a light target, viz. 12C or 9 Be, could help investigate the possible presence of single-neutron resonances in Effective field theory analysis of the Coulomb breakup of the one-neutron halo nucleus the continuum.Transfer measurements, such as 18 C(d,p) in inverse kinematics could help constrain the ANC of the ground state, especially if they are measured at low beam energy and forward angles [18].Knockout measurements with improved uncertainty compared to existing data [4][5][6] would also improve our understanding of this exotic nucleus [20]. Fig. 1 Fig. 1 Reduced radial wave functions of the 0s 1/2 18 C-n bound state (a) normalised to unity; (b) divided by their ANC C 0s1/2 for different values of σ, as indicated in the legend. Fig. 2 Fig. 2 Breakup cross section of 19 C on 208 Pb at 67A MeV (a) plotted as a function of the 18 C-n relative energy E after dissociation, and (b) plotted as a function of the scattering angle of the 18 C-n centre of mass for energies 0 ≤ E ≤ 0.5 MeV.In both cases, s, p, and d components are shown separately for the σ = 1.5 fm case (blue dash-dotted lines). Fig. 3 Fig. 3 Influence of the optical potential choice on the breakup cross section of 19 C on 208 Pb at 67A MeV; (a) energy distribution; (b) angular distribution. Fig. 
4

Fig. 4 NLO calculations of the breakup of ¹⁹C on ²⁰⁸Pb at 67A MeV compared to the data of Ref. [8]. NLO Halo-EFT ¹⁸C-n potentials are fitted to reproduce the binding energy and the ANC inferred from the comparison of the LO calculations to the data (angular distribution restricted to forward angles); (a) energy distribution and (b) angular distribution. In both cases the calculations have been folded with the experimental resolution [8].

Fig. 5 Sensitivity of the breakup calculation to the ¹⁸C-n binding energy for ¹⁹C impinging on ²⁰⁸Pb at 67A MeV. LO Halo-EFT ¹⁸C-n potentials are fitted to reproduce three binding energies: the centre (0.58 MeV, blue dash-dotted line), lower bound (0.49 MeV, green dashed line), and upper bound (0.67 MeV, red solid line) of the experimental uncertainty range [24]. The calculations have been scaled to the data as explained in Sec. 5; (a) energy distribution; (b) angular distribution. In both cases the calculations have been folded with the experimental resolution [8].

Table 1 Strengths C_0 of the LO ¹⁸C-n potentials for the different regulators σ considered in this study [see Eq. (11)]. They have been fitted to reproduce the ground-state energy E_{0s1/2} = −0.58 MeV. The corresponding ANCs C_{0s1/2} are listed as well.
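For completeness, the folding-and-scaling procedure used in Sec. 3.2 to infer the ANC can be sketched in a few lines. The arrays below are hypothetical placeholders (the real inputs are the CCE energy distribution and the forward-angle data of Ref. [8], with the experimental resolution quoted in that reference); the sketch only illustrates the Gaussian folding and the χ²-minimising scale factor whose square root rescales the LO ANC.

```python
import numpy as np

def fold(E, sigma_th, resolution_fwhm):
    """Convolve a theoretical energy distribution with a Gaussian detector resolution."""
    s = resolution_fwhm / 2.355                    # FWHM -> standard deviation
    kern = np.exp(-0.5 * ((E[:, None] - E[None, :]) / s) ** 2)
    kern /= kern.sum(axis=1, keepdims=True)
    return kern @ sigma_th

def fit_scale(data, err, theory):
    """Scale factor a minimising chi^2 = sum((d - a t)^2 / err^2), with its 1-sigma error."""
    w = theory / err**2
    a = np.sum(w * data) / np.sum(w * theory)
    da = 1.0 / np.sqrt(np.sum(w * theory))
    return a, da

# Hypothetical placeholder arrays, for illustration only
E = np.linspace(0.05, 0.5, 10)                     # 18C-n relative energy (MeV)
theory = 1.0 / (E + 0.2)                           # placeholder theoretical dsigma/dE
folded = fold(E, theory, resolution_fwhm=0.1)      # assumed 0.1 MeV FWHM
data = 1.6 * folded * (1 + 0.05 * np.random.randn(E.size))
err = 0.1 * data

a, da = fit_scale(data, err, folded)
C_LO = 0.85                                        # hypothetical LO ANC of the potential (fm^-1/2)
print(f"scale = {a:.2f} +/- {da:.2f} -> inferred ANC = {C_LO * np.sqrt(a):.2f} fm^-1/2")
```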
7,772.8
2023-01-16T00:00:00.000
[ "Physics" ]
A New Developed Airlift Reactor Integrated Settling Process and Its Application for Simultaneous Nitrification and Denitrification Nitrogen Removal This study presented the performance of simultaneous nitrification and denitrification (SND) process using a new developed hybrid airlift reactor which integrated the activated sludge reaction process in the airlift reactor and the sludge settling separation process in the clarifier. The proposed reactor was started up successfully after 76 days within which the COD and total nitrogen removal rate can reach over 90% and 76.3%, respectively. The effects of different COD/N and DO concentrations on the performance of reactor were investigated. It was found that the influent COD/N maintained at 10 was sufficient for SND and the optimum DO concentration for SND was in the range of 0.5 to 0.8 mg L−1. Batch test demonstrated that both macroscopic environment caused by the spatial DO concentration difference and microscopic environment caused by the stratification of activated sludge may be responsible for the SND process in the reactor. The hybrid airlift reactor can accomplish SND process in a single reactor and in situ automatic separation of sludge; therefore, it may serve as a promising reactor in COD and nitrogen removal fields. Introduction The environmental problems arising from nitrogenous compounds pollution, including oxygen depletion, toxicity to aquatic organism, and promotion of eutrophication, have attracted great attention in past decades [1]. Especially, some industrial wastewaters contain high concentration of ammonia nitrogen, such as coking wastewater, wastewater from fertilizer plant, and leachate. It is very important to remove nitrogen from these wastewaters before drainage. Biological nitrification and denitrification methods are the most widely used nitrogen removal methods [2,3]. Nitrification requires an aerobic condition, whereas denitrification occurs under anoxic condition [4]. Thus, twostage anoxic/oxic processes are generally used to meet the different condition requirements for nitrogen removal [1,5]. However, many recent studies have demonstrated that these two steps for nitrogen removal can occur simultaneously in a single reactor, known as simultaneous nitrification and denitrification (SND) process [6][7][8][9]. Compared to conventional biological nitrogen removal process, SND can offer several advantages including simplifying the treatment system, reducing carbon source and alkalinity consumption, and saving aeration energy requirement [4]. The most widely used reactor for SND nitrogen removal is sequencing batch reactor (SBR) [10][11][12], because it enables the formation of the alternate aerobic and anoxic conditions in a time sequence manner. However, SBR is a kind of intermittent flow reactor which is not appropriate for continuous flow wastewater treatment. In recent years, airlift reactor (ALR), whose advantages include low energy requirement, effective mass transfer rate and mixing, elimination of dead volumes, and little footprint [13], has been used for the SND process in a continuous aeration and feed mode [2,4,14]. Moreover, the authors have also detected the phenomenon of total nitrogen removal in a field airlift reactor which is used for coking wastewater treatment Pan et al., [15]. 
Apart from the specific structure of the airlift reactor, keeping the sludge retention time (SRT) at relatively long time is also a prerequisite for efficient SND nitrogen removal because the growth rate of nitrifiers is very slow [1]. In order to prolong the SRT, either increasing the recycling ratio of sludge or reducing the unnecessary activated sludge loss from the effluent is feasible. However, increasing the recycling ratio will lead to low wastewater treatment rate and energy consumption, so reducing the sludge loss has been a considerable method. A membrane filter device has been introduced into the ALR by Meng et al. [14] for the purpose of withholding the activated sludge in the reactor, and nitrogen removal has been achieved. Nevertheless, the accumulation of recalcitrant compounds and soluble microbial products (SMPs) together with the membrane fouling has confined the application of membrane filter bioreactors [16]. Based on the above consideration, high efficiency settling process may be used as a replaceable method for membrane filter. So, a new reactor which integrates the ALR reactor and inclined plate settling reactor is developed, and it is named as hybrid airlift reactor (HALR) in this study. It is expected that the ALR can accommodate proper environmental condition for SND process while the new coupled clarifier can ensure the in situ separation of sludge from the effluent, so the whole reactor can maintain adequate SRT and accomplish the SND nitrogen removal independently. In this study, the proposed HALR reactor was used to investigate SND nitrogen removal ability. Furthermore, some factors which will influence the performance of the reactor were determined, such as controlled DO concentration and ratio of chemical oxygen demand to nitrogen (COD/N). Besides, the possible reasons for nitrogen removal in the proposed reactor were analyzed. Experiment Setup. The schematic diagram of the experimental reactor is shown in Figure 1. The reactor was made of transparent Perspex with a working volume of 47.4 L. It can be divided into three zones: reaction zone, degassing zone, and settling zone. The reaction zone is composed of two concentric tubes with the inner diameter of 160 and 80 mm, respectively. There is a gas sprayer mounted at the bottom of the inner tube. When the reactor works, the gas bubbles from the sprayer move upward into the inner tube and drive the liquid circulation flow between the inner tube and the annule zone. The inner tube enables the liquid to move upward and is called the riser. The annule zone between the two tubes names as the downcomer in which the liquid moves downward. and 45 ∘ , respectively. Enlarged degassing zone and settling zone are mounted at the top of the reaction zone and they are connected with the reaction zone through an 80 mm conic shape transition. Degassing zone is just at the top of the riser with a diameter of 140 mm. The settling zone is just at the outer side of the degassing zone, and it is packed with inclined plates. The length of these inclined plates is 230 mm, and they are arranged 60 ∘ from the horizontal direction and about 20 mm in perpendicular distance. There is a buffer zone with the height of 100 mm under the inclined plates. Outflow weir is 50 mm above the top of the inclined plate which can drain the treated wastewater. 
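As a quick numerical illustration of the geometry just described, the riser and downcomer cross-sections, whose ratio strongly influences the liquid circulation in an internal-loop airlift, follow directly from the quoted tube diameters. The sketch below neglects the wall thickness of the draft tube and is only meant to make the dimensions concrete.

```python
import math

d_riser = 0.080   # inner (draft) tube inner diameter, m
d_outer = 0.160   # outer tube inner diameter, m
d_degas = 0.140   # degassing zone diameter, m

A_riser = math.pi / 4 * d_riser**2
A_down = math.pi / 4 * (d_outer**2 - d_riser**2)   # annular downcomer, wall thickness neglected
A_degas = math.pi / 4 * d_degas**2

print(f"riser cross-section     : {A_riser * 1e4:5.1f} cm^2")
print(f"downcomer cross-section : {A_down * 1e4:5.1f} cm^2")
print(f"degassing zone area     : {A_degas * 1e4:5.1f} cm^2")
print(f"A_downcomer / A_riser   : {A_down / A_riser:.1f}")   # = 3.0 for these diameters
```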
Although the riser is aerated, the downcomer is gas-free because nearly all bubbles are escaped from the free liquid surface of degassing zone if the superficial gas velocity is relatively low [2]. Thus, a spatial distribution of dissolved oxygen (DO) may be formed at the presence of oxygen utilization in the reactor, and this is beneficial particularly regarding its application for SND [14] because the SND process requires the formation of aerobic and anoxic environment concurrently in the same reactor. Integrating the inclined plate clarifier on the top of the reaction zone can form a compact reactor and accomplish the in situ separation The Scientific World Journal 3 of activated sludge. When the mixed liquid of reaction zone flows into the clarifying zone through a horizontal gap, it changes its flow direction and moves upward through the condensed sludge layer and inclined plates zone, then the sludge can be effectively withdrawn as a result of flocculation and settling. Separated sludge can automatically slide into the reaction zone and rejoin the biological reaction process. Operation Condition. Synthetic wastewater was used as influent in all experiments. It contained three primary macronutrients consisting of sodium acetate, ammonium chloride, and sodium bicarbonate and micronutrients consisting of The concentrations of micronutrients were fixed throughout this study, but those of macronutrients were changed depending on the requirements of different experimental designs. The pH of synthetic wastewater was controlled in the range of 7.0 to 8.0 which was modulated by 1 M Na 2 CO 3 . The reactor was inoculated with activated sludge from local municipal wastewater treatment plant (Liede, Guangzhou, China) which was running in a modified anaerobic/anoxic/oxic process. The mixture sludge (20 L) taken from the anoxic tank (5.6 g MLSS L −1 , 10 L) and oxic tank (5.2 g MLSS L −1 , 10 L) was used as the seed inoculum. The reactor was continuously aerated with an air compressor whose flow rate can be adjusted and measured by a precalibrated air rotameter. The synthetic wastewater entered the bottom of the reactor by a peristaltic pump, and the flow rate was controlled and measured by a precalibrated liquid rotameter. DO and pH were monitored at the upper part of the reactor by a DO electrode (InPro6050, Mettler) and a pH electrode (InPro4010, Mettler), respectively. A transmitter (M300, Mettler) was used which allows the continuous recording of pH and DO data. Constant DO was maintained by frequently adjusting the valve of air rotameter. After activated sludge has accumulated to a certain concentration, excess sludge was withdrawn through the bottom valve periodically to maintain the sludge retention time (SRT) at about 25 days. The operation temperature was controlled at 28 ± 1 ∘ C. The run length of the HALR investigated has been lasted for 220 days in a continuous flow mode. In the first 76 days, the reactor was started up successfully and run steadily for a period of time. During the following 144 days, the effects of DO value and COD/N ratio on performance of SND were investigated. During the whole operating process, after each operation condition had been changed, at least three HRT cycles had been waited for reaching a relatively steady state. Analytical Methods. 
Concentrations of ammonium, nitrate, and nitrite in both influent and effluent were measured by spectrophotometry with commercial test kits (Hach, USA) after filtration of the samples through an acetate filter device with a pore size of 0.45 µm. The COD, suspended solids (SS), and MLSS were analyzed according to the standard methods [17]. All samples were collected and analyzed at 48 h intervals. The removal efficiency of total nitrogen (TN) can be calculated using the following expression: TN removal efficiency (%) = (TN_in − TN_out)/TN_in × 100, where the subscripts in and out represent the influent and effluent, respectively.

Reactor Performance during the Start-Up and Steady-State Running Period. In order to accumulate the nitrifiers and to avoid the excessive growth of heterotrophic microorganisms in the HALR reactor, the HRT was controlled at about 24 h in the initial running stage, and the reactor was fed with synthetic wastewater with a low C/N ratio (COD: 480 mg L−1, NH4+-N: 120 mg L−1). After an acclimation stage of 24 days, the reactor was fed with synthetic wastewater with COD/N = 7 (COD: 840 mg L−1, NH4+-N: 120 mg L−1) and the HRT was changed to 16 h. The reactor was then run for another 52 days until it reached and maintained a steady state. The DO concentration was controlled at 0.8 ± 0.05 mg L−1 throughout this period.

Figure 2 shows the variation of COD and MLSS concentrations as a function of the acclimation time. In the initial stage (a), with the lower influent COD concentration, there was no obvious acclimation stage for COD removal and the effluent COD concentration was 67.5 mg L−1 on average. After the 24-day operation, the COD loading was raised because wastewater with a higher COD/N was used in stage (b). A dramatic increase in COD removal efficiency was observed, and approximately 90% of the influent COD was removed in this stage. In order to accumulate nitrifiers and raise the MLSS concentration, sludge was not discharged from the start-up of the reactor until the MLSS concentration exceeded 5000 mg L−1. At the same time, sludge was efficiently separated from the effluent and slid downward automatically from the clarifier zone. It was then entrained and returned to the reaction zone, so that the sludge concentration kept increasing before being discharged. From Figure 2, the MLSS concentration increased relatively slowly during the first 7 days and more quickly in the subsequent days. On day 43, the MLSS concentration reached a maximum value of 5084 mg L−1. In the following days, a low concentration of activated sludge was discharged through the bottom valve, and the typical concentration of biomass in the effluent was about 30 mg/L. The MLSS concentration in the reactor was maintained at about 5000 mg L−1, and the sludge retention time (SRT) of this system was about 25 days. The typical sludge in the HALR reactor consisted of conventional activated sludge flocs, and no obvious granular sludge was observed. This may be because the relatively low aeration strength in the experiment leads to low shear stress in the reactor, which is not favorable for the formation of granular biomass. Besides, the settling ability of the sludge was good, and the sludge volume index (SVI) remained at about 90 mL/g.

Figure 3 shows the effluent nitrogen species and effluent TN concentrations during this period. The influent NH4+-N concentration was always fixed at about 120 mg L−1 in this period.
From the startup of reactor to day 24, the effluent NH 4 + -N concentration was gradually reduced with the time as it decreased from 35.4 to 10.8 mg L −1 . Low nitrifying efficiency at the initial stage can be related to the low population of nitrifiers. The low COD/N wastewater used in stage (a) is favorable for the growth of autotrophic nitrifiers, because organics for the growth of heterotrophic microorganisms are limited. Therefore, nitrifiers can accumulate in the reactor, and nitrification effect can be raised gradually. From day 24 and onward, although the COD loading was increased, abundant nitrifiers were cultured, so that the efficiency of nitrification can be retained at a high level. The effluent NH 4 + -N concentration was 10.4 mg L −1 on average and the NH 4 + -N removal efficiency was over 90%. For all the 76day operation, the nitrite concentration in the effluent was always below 1.0 mg L −1 , indicating that no obvious nitrite accumulation occurred in the reactor. During the first 24 days, nitrate concentration in the effluent was 39.4 mg L −1 on average. However, from day 24 to 76, the nitrate concentration in effluent decreased rapidly with the increasing of influent COD concentration, and eventually the effluent nitrate reached a steady concentration of about 17.3 mg L −1 . Figure 4 shows the calculated removal efficiency of NH 4 + -N and TN removal as a function of the acclimation time. It can be seen that the NH 4 + -N removal efficiency was increasing with the time at initial days and reached a steady level from day 24 onward. Nevertheless, the TN removal efficiency exhibited a different trend, and two obvious stages could be partitioned in association with the different influent COD concentrations. TN removal efficiency increased gradually at a low influent COD concentration but increased rapidly with the increasing influent COD concentration. Approximately 76.3% of TN was removed during the steady-state running period. It can be inferred that The Scientific World Journal 5 if carbon source for denitrification is sufficient, nitrification is the critical factor that limited the nitrogen removal efficiency as evidenced from the fact that both NH 4 + -N and TN removal efficiency increased gradually with the accumulation of nitrifiers in stage (a). However, the insufficient carbon source confined the denitrification process and resulted in lower TN removal efficiency and relatively higher nitrate concentration in the effluent. Once the COD/N was increased to 7, obvious increasing of TN removal efficiency was achieved because of the relative balance of nitrification and denitrification process. It should be noted that in the present condition the nitrogen removal contribution of microorganisms assimilation is about 7.6%, and it may lead to an overestimation for the effect of denitrification. The TN removal efficiency in this work was similar with the results reported in the literature. Li et al. [4] investigated the performance of different singlestage continuous aerated submerged membrane bioreactors (MBR) for nitrogen removal and achieved the removal of 94.2% ammonia nitrogen and 64.5% TN. Meng et al. [14] reported that 78% TN removal efficiency was obtained in an airlift internal circulation membrane bioreactor. When the TN removal rate per unit volume was considered, it could reach about 140 gN m −3 d −1 in HALR. Fu et al. [8] has achieved TN removal rate of 119.2 ± 22.1 gN m −3 d −1 in a modified anoxic/oxic-membrane bioreactor (A/O-MBR) with an HRT of 1.5 d. 
Farizoglu et al. [19] have acquired 99% TN removal efficiency at a removal loading rate of 17∼ 436 gN m −3 day −1 in a jet loop membrane bioreactor. From these comparisons, it shows that the HALR with a relatively simple and compact structure can also achieve comparable TN removal rates to other reactors in the literature. Effect of COD/N on Reactor Performance. Denitrification is an anaerobic or anoxic biological process which is accomplished by heterotrophic microorganisms, and thus it is strongly dependent on the availability of organic carbon that serves as an electron donor of the process. For the proposed HALR, both aerobic carbon degradation microorganisms and anoxic denitrification microorganisms coexist in the reactor; accordingly, they compete for the limited available carbon source. This is the reason responsible for the relationship between the influent COD concentration and the effect of nitrogen removal. To disclose this relationship, four experiments with different COD/N ratios (COD/N = 4, 7, 10, 15) were carried out for an 80-day operation period. During these experiments, the DO was maintained at 0.8 ± 0.05 mg L −1 , HRT was 16 h, MLSS was about 5000 mg L −1 , and SRT was about 25 d. It can be seen from Figure 5 that the effluent COD concentration increased with the increasing of COD/N from 4 to 7. However, there was an insignificant change of the effluent COD concentration when the COD/N is changed from 7 to 10. The low value of effluent COD for COD/N = 4 is due to the lack of carbon source for denitrification, while the high value of effluent COD for COD/N = 15 is because that the influent was excessive. The NH 4 + -N and NO 2 − -N concentrations in the effluent were almost constant and were found to be about 10 mg L −1 and below 1 mg L −1 , respectively. So, despite the variation of COD/N, the NH 4 + -N removal efficiency can remain at a stable level. The effluent NO 3 − -N concentration was reduced with the increase of COD/N but had a small change when COD/N was in the range of 7∼15. The TN removal efficiency increased with the increasing of COD/N when COD/N is controlled below 10 but did not vary for further increase of COD/N. These results showed that when COD/N was over 10, the COD was sufficient for denitrification despite the competing of COD for aerobic microorganisms, and the TN removal efficiency was mainly determined by the nitrification effect of autotrophic microorganisms. The results of this experiment match well with the previous reports [14] in which COD/N = 10.04 was considered to be the optimal value for TN removal. [20]. Nakano et al. found that DO concentration enabling the highest SND performance was between 0.5 and 0.75 mg L −1 in a single reactor [21]. High DO concentration is favorable for nitrifiers but disadvantageous for anoxic denitrification process; therefore, in order to achieve the SND process in a single reactor, the DO concentration should be controlled in a properly middle level. For pure cultures of ammonium and nitrite oxidizers, the critical DO concentration below which nitrification does not occur is around 0.2 mg L −1 ; at the same time, denitrification can be ignored when the DO concentration is greater than 1.0 mg L −1 [22]. Taking into consideration DO concentration difference in riser and downcomer of HALR, it is reasonable to obtain optimum DO range between 0.5 and 0.8 mg L −1 for SND. SND Mechanism Analysis. 
In the available literature [4,21,23], two hypotheses are comprehensively accepted for explaining the mechanism of SND process: (1) macroscopic environment hypothesis that reveals the SND occurrence due to macroscale of different spatial DO concentrations in the reactor; (2) microscopic environment hypothesis that reveals the SND occurrence owing to the micro-scale via stratification of activated sludge or biofilm. To understand that both mechanisms may cocontribute to the removal of TN in the HALR, a batch experiment was conducted to demonstrate the effect of nitrogen removal when concentration gradient of DO in spatial distribution was excluded. In this batch test, three 1000 mL beakers were used as parallel reactors and they were all placed in water bath at 30 ∘ C. Each beaker was filled with 400 mL sludge which was taken out from the HALR and 400 mL synthetic wastewater with COD/N = 7. Thus, the initial concentration of COD and N in batch test are 420 and 60 mg L −1 , respectively. A gas sprayer connected with air compressor was submerged at the bottom of beaker, and DO concentration was controlled at 0.8 ± 0.05 mg L −1 . The initial MLSS concentration was 2476 ± 32 mg L −1 , and pH was adjusted to 8.0. The batch test lasted for 8 h, and samples were taken out and analyzed immediately every hour for NH 4 + -N, NO 2 − -N, and NO 3 − -N. The variations of ammonium, nitrite, and nitrate are shown in Figure 7. After 8 h operation, the NH 4 + -N concentration decreased from 60 to 4.2 mg L −1 , the NO 2 − -N concentration was kept at a low value (<0.6 mg L −1 ) throughout the operating period, and the NO 3 − -N concentration increased from zero to 37.7 mg L −1 . As the mass increase of the sludge between initial and after batch test was less than 2%, nitrogen removal by microorganism assimilation was neglected. It can be calculated that nearly 30% of the initial TN was removed by denitrification process. Because the beaker can be considered as a completely stirred tank reactor (CSTR), DO concentration in spatial distribution is homogeneous and the SND mechanism via macro-scale DO gradient is excluded. Therefore, for the batch test system, the possible reasons for SND can be explained by the fact that the micro-scale environment is effective for both nitrification and denitrification. According to the results ahead, the total SND nitrogen removal efficiency is 76.3% in HALR when COD/N is 7, subtracting the SND contribution of micro-scale, then the SND contribution of macro-scale is over 46.3%. To sum up, the SND mechanism in HALR includes both macroscopic and microscopic environment hypotheses. Conclusion An HALR reactor which integrated biological reaction in conventional internal loop airlift reactor and sludge separation in inclined plate clarifier was developed for the purpose The Scientific World Journal 7 of simultaneous carbon and nitrogen removal. Operated with synthetic wastewater, the HALR was successfully started up and reached a steady status after 76 days, during which both COD and NH 4 + -N removal efficiency were over 90% and TN removal efficiency was 76.3% on average. The TN removal efficiency increases when COD/N is increased from 4 to 10. However, it exhibits an insignificant variation with further increase in COD/N. DO was demonstrated as another critical factor influencing the SND nitrogen removal performance. DO concentration in the range of 0.5 to 0.8 mg L −1 was preferable for nitrogen removal. 
The batch test demonstrates that when the spatial DO concentration gradient was excluded, only about 30% TN removal efficiency could be achieved, which indicates that both the macroscopic and the microscopic environment mechanisms govern the SND process in the proposed HALR. The HALR can accomplish the SND process in a single reactor together with in situ automatic separation of sludge. At the same time, it is simple in structure, energy saving, and highly efficient in COD and nitrogen removal. Therefore, it may serve as a promising reactor in the field of wastewater treatment.
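As a consistency check on the figures reported above, both the steady-state TN removal and the macro/micro split deduced from the batch test can be reproduced from the concentrations quoted in the text. The short sketch below uses those values directly; nitrogen incorporated by assimilation and the residual nitrite are treated only approximately, so the results are indicative.

```python
# Steady-state balance (influent NH4+-N ~ 120 mg N/L; effluent values from the text)
tn_in = 120.0
nh4_out, no3_out, no2_out = 10.4, 17.3, 1.0       # mg N/L (NO2- stayed below ~1 mg/L)
tn_out = nh4_out + no3_out + no2_out
steady = (tn_in - tn_out) / tn_in * 100
print(f"steady-state TN removal  ~ {steady:.1f} %")        # close to the reported 76.3 %

# Batch test (completely mixed, no spatial DO gradient): micro-scale SND only
tn0 = 60.0
nh4, no3, no2 = 4.2, 37.7, 0.6                    # mg N/L after 8 h
micro = (tn0 - (nh4 + no3 + no2)) / tn0 * 100
print(f"batch (micro-scale) SND  ~ {micro:.1f} %")          # close to the reported ~30 %

# The remainder of the overall SND is attributed to the macro-scale DO gradient
print(f"macro-scale contribution ~ {76.3 - micro:.1f} %")   # consistent with 'over 46.3 %'
```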
5,595.8
2013-07-15T00:00:00.000
[ "Engineering", "Environmental Science" ]
Empirical equations for viscosity and specific heat capacity determination of paraffin PCM and fatty acid PCM Phase change materials (PCM) used in thermal energy storage (TES) systems have been presented, over recent years, as one of the most effective options in energy storage. Paraffin and fatty acids are some of the most used PCM in TES systems, as they have high phase change enthalpy and in addition they do not present subcooling nor hysteresis and have proper cycling stability. The simulations and design of TES systems require the knowledge of the thermophysical properties of PCM. Thermal conductivity, viscosity, specific heat capacity (Cp) can be experimentally determined, but these are material and time consuming tasks. To avoid or to reduce them, and to have reliable data without the need of experimentation, thermal properties can be calculated by empirical equations. In this study, five different equations are given to calculate the viscosity and specific heat capacity of fatty acid PCM and paraffin PCM. Two of these equations concern, respectively, the empirical calculation of the viscosity and liquid Cp of the whole paraffin PCM family, while the other three equations presented are for the corresponding calculation of viscosity, solid Cp, liquid Cp of the whole fatty acid family of PCM. Therefore, this study summarize the work performed to obtain the main empirical equations to measure the above mentioned properties for whole fatty acid PCM family and whole paraffin PCM family. Moreover, empirical equations have been obtained to calculate these properties for other materials of these PCM groups and these empirical equations can be extrapolated for PCM with higher or lower phase change temperatures within a lower relative error 4%. Introduction Thermal energy storage (TES) systems use has been widely increased over recent years as a response to the energy efficiency improvement claimed by the governments [1,2]. TES systems have been applied in many fields to reach this energy efficiency enhancement: cold storage [3], domestic hot water [4], building comfort, solar power plants [5], etc. Phase change materials (PCM) are crucial to deploy the technology because the system requirements are easy to reach and the thermal performance of PCM are well known and well-controlled [6,7]. However, the properties of PCM are needed to be measured before the system design. Sometimes, during the design step these properties are difficult to measure or the designers have not access to equipment to proceed the properties evaluations. This study presents the main calculation to achieve empirical equation to predict the specific heat and viscosity of two of the most common PCM groups: fatty acids [8] and paraffin [9]. These empirical equations are useful not only to design TES systems as mention before but also to be used to calculate a number when simulation or modelling requires these properties. In summary, the main objective of this study is to present the main empirical equations calculated for Cp and viscosity of paraffin-PCM and fatty-acid PCM. Viscosity analyses: A Brookfield RST Controlled Stress rheometer was used to measure the viscosity of the materials [8]. The experimental conditions are as follows: 1 min isothermal stages, increasing the temperature 1 ºC with every isotherm. 1 ml samples were used and the measurements were performed with the RCT-50-1 cone spindle under a constant rotation speed of 1100 rpm. 
Specific heat capacity analyses: A Mettler Toledo DSC 822e was used to perform the Cp measurements. The experiments were conducted under a constant N2 flow of 200 ml/min, using 40 µl aluminium crucibles and a sample mass of around 10 mg. The DSC areas method was used to calculate the Cp [10]. Empirical equations development and evaluation: The measured data have been numerically adjusted in order to find empirical equations to calculate the viscosity and the Cp at both solid and liquid states. The best fits were selected according to their R2; to complement this statistical analysis and select the best equation type, the relative errors between each equation and the measured data were also calculated. The selected empirical equations were validated with the last PCM of each group mentioned at the beginning of this section. Moreover, in order to fit the best empirical equation for each PCM group, a correction factor was calculated as a function of the melting temperature. Results The calculated empirical equations are listed in Figure 1.a), and they were used to produce the graphical results. The measured values of Cp and viscosity of the fatty acids under study are presented in Figure 1.b) [8]. The graphical results corroborate that the values calculated with the empirical equations fit the measured values. The same comparison between the calculated empirical equations and the Cp and viscosity measurements for paraffin PCM is presented in Figure 1.c) and Figure 1.d) [9]. The results show that empirical equations have been obtained for the properties under study, that they can be used to calculate these properties for other materials of these PCM groups, and that they can be extrapolated to PCM with higher or lower phase change temperatures within a relative error below 4%, with the measured values closely matching the calculated values. These equations represent an important advance for simulation and system design purposes in the thermal energy storage field. They are reliable tools that provide the viscosity, solid Cp, and liquid Cp of the materials in advance, without the need of experimental runs.
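To illustrate the fitting-and-evaluation procedure described above (the actual equations of the study are those listed in Figure 1), the sketch below fits an assumed exponential temperature dependence to hypothetical viscosity measurements and then reports the R² and relative errors used to rank candidate equations. The functional form, data values, and parameter names are placeholders, not the equations of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def visc_model(T, a, b):
    """Assumed empirical form: exponential decay of viscosity with temperature."""
    return a * np.exp(-b * T)

# Hypothetical measured data (temperature in deg C, viscosity in mPa s)
T = np.array([60, 65, 70, 75, 80, 85], dtype=float)
mu = np.array([9.1, 8.0, 7.1, 6.3, 5.7, 5.1])

popt, pcov = curve_fit(visc_model, T, mu, p0=(20.0, 0.01))
mu_fit = visc_model(T, *popt)

ss_res = np.sum((mu - mu_fit) ** 2)
ss_tot = np.sum((mu - mu.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rel_err = np.abs(mu - mu_fit) / mu * 100

print(f"fitted parameters: a = {popt[0]:.2f}, b = {popt[1]:.4f}")
print(f"R^2 = {r2:.4f}, max relative error = {rel_err.max():.1f} %")
```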
1,232
2017-10-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Nested Numeric/Geometric/Arithmetic Properties of shCherbak’s Prime Quantum 037 as a Base of (Biological) Coding/Computing Numerous arithmetical regularities of nucleon numbers of canonical amino acids for quite different systematizations of the genetic code, which are dominantly based on decimal number 037, indicate the hidden existence of a more universal ordering principle. Mathematical analysis of number 037 reveals that it is a unique decimal number from which an infinite set of self-similar numbers can be derived with the nested numerical, geometrical, and arithmetical properties, thus enabling the nested coding and computing in the (bio)systems by geometry and resonance. The omnipresent fractal structural and dynamical organization, as well as the intertwining of quantum and classical realm in the physical and biological systems could be just the consequence of such coding and computing. Thus, the crucial challenge biological organisms meet is the generation, transmission, reception, and storage of information with high fidelity due to continuous noise in the system, which according to the information theory requires involving the error-correcting codes.The simplest realization of error-correcting codes is achieved by a layered structure referred to as the nested codes, a special type of the concatenated error-correcting codes where the result of a previous encoding process is combined with new information and then encoded again, so that the deepest nested information is also the best protected one and does not demand very efficient individual codes, which in terms of biocodes means "...the older and more fundamental it is, the better it is protected" (Battail, 2007).Such biocodes nesting, i.e. their coexistence and overlapping, was first discovered by Trifonov (1980;1989) for the coexisting triplet code (a sequence of instructions for protein synthesis) and chromatin code (a sequence of instructions for nucleosome positioning).The logic of nesting or fractality reduces the problem to the first generative set/mapping -the fractal generator, which means, in terms of biological coding/computing, the reduction to the first biocode/biocomputation -genetic code and translation, since Woese (1965) shows in his early work that the translation process was highly developed at the bottom of the universal phylogenetic tree, even in comparison to the simpler process of transcription, while replication still did not exist at that level.Since the genetic code, as the first biocode, represents not only the origin of life, but also the link between physical and biological coding/computing, the understanding of mathematical logic of the genetic code is of special interest, and was suddenly made possible by shCherbak's revealing arithmetic inside the universal genetic code (Shcherbak's, 1994). 
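Before turning to shCherbak's findings, the elementary property of 037 that the whole analysis rests on, namely that every three-digit multiple of 37 remains a multiple of 37 under cyclic permutation of its digits, together with the related three-digit-block divisibility rule discussed in the next section, can be verified exhaustively. The sketch below is only an illustrative check, not part of the original derivation.

```python
def cyclic_perms(n, width=3):
    """All cyclic permutations of n written with `width` digits (leading zeros kept)."""
    s = f"{n:0{width}d}"
    return [int(s[i:] + s[:i]) for i in range(width)]

# Every 3-digit multiple of 037 stays a multiple of 37 under cyclic digit permutation
# (one cyclic shift multiplies the number by 10 modulo 999 = 27 * 37).
assert all(all(p % 37 == 0 for p in cyclic_perms(37 * k)) for k in range(1, 28))

def divisible_by_37(n):
    """Divisibility rule: sum the 3-digit blocks of n (from the right); n is divisible
    by 37 exactly when that block sum is, because n is congruent to the sum modulo 999."""
    total = 0
    while n:
        n, block = divmod(n, 1000)
        total += block
    return total % 37 == 0

assert divisible_by_37(37 * 123456789)
assert not divisible_by_37(37 * 123456789 + 1)
print("cyclic-permutation and 3-digit-block checks passed")
```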
ShCherbak's arithmetic inside the universal genetic code Almost two decades ago, shCherbak (1994) made astonishing discovery of the arithmetic regularities of nucleons inside the genetic code.The history of this discovery (shCherbak, 2003) began soon after the code decrypting, when it was recognized that the correlation between amino acid mass and codon distribution existed in a sense that the smaller amino acid size requires the greater number of codons for its translating and vice versa (Schutzenberger et al., 1969).This antisymmetrical correlation was confirmed by introducing an integer-valued parameter -a nucleon number (a sum of protons and neutrons in atomic nucleus) (Hasegawa and Miyata, 1980), which motivated very extensive shCherbak's researches of arithmetical regularities in the genetic code (Shcherbak, 1994;2003;2008).The initial shCherbak's key result is revealing the determination of symmetrical architecture of genetic code by decimal number 037 through arithmetical regularities of nucleon numbers for the free molecules of canonical amino acids and nucleotide bases, with remark that the number 037 is unique in decimal system in the sense that its three digit multiples remain multiples modulo 9 by cyclic permutations (Tab. 1) and that similar numbers also exist in some other numeral systems ([13] 4 , [25] 7 , [49] 13 ) (Shcherbak, 1994).The number 037 was called a Prime Quantum -PQ by shCherbak (Shcherbak, 1994;2003) or later just a Prime Number -PN (shCherbak, 2008). ShCherbak pointed out a variety of different nucleon arithmetic regularities, including those for the free form amino acids and peptide bonded amino acids (the standard block residues and the ionized and protonated side chains); for the compressed, life-size, and split representation of genetic code; for Rumer's and Gamow's division of genetic code, and many other regularities (shCherbak, 2003;2008).As an explanation for the found arithmetical regularities based on PQ 037, shCherbak (2003) suggested that "the divisibility by PQ as a validation criterion, if any, simplifies molecular machinery and facilitates the computational procedure of hypothetical organelles working as biocomputers", since to very simple divisibility rule of 37 for base 10 by the checking divisibility for the sum of three digit block of number, which "…requires only the three digit register".Generally, this divisibility "…criterion is valid for the is applied and the n-digit reading frame is used" (shCherbak, 2003). ShCherbak's results motivated other researchers to further reveal arithmetical regularities and their deeper mathematical and physical principles (Verkhovod 1994;Downes and Richardson, 2002;Rakočević, 1998;2004;Négadi, 2009;2011;Mišić, 2004;2010), but the purpose of such evident correlation between the genetic coding and the quantized nucleon packing of its constituents through the 037 nucleon packing quantum has remained unclear.So, the question arises -why 037? Self-similar numbers A good way of understanding the properties of a number is its generalization.Let us call Self-similar Number (S) every integer which has the property of decimal number 037 in an arbitrary numeral system, i.e. the analogue multiplicative table of number 037 (Tab. 
1) (Mišić, 2010).This property of S has been defined as a special case of cyclic equivariability or, more precisely, an equidistant cycling digit property2 .It can be proved that the definition of S is valid both for the condition of multipliers equidistance and the condition of digits equidistance.Definition 1A (Mišić, 2010) Self-similar number ( ) p q = S S of a given numeral system, radix ∈ N \ {1} q , is the smallest nontrivial p-digit number, ∈N \ {1} p , whose successive cyclic permutations are equal to its own equidistant multiples, except for the permutation which results in S. , with cyclic digit property whose digits are equidistant. The trivial forms of numbers with equidistant cycling digit property are represented by the repdigits, q p aa aa , where ∈ − {0,1,2, , 1} a q is q-nary digit. Both definitions indicate that general solution, i.e. the solution for each p, exists only for right-shift cyclic permutations, while in the case of left-shift it does not exist, i.e. the solution exists only in the case of doublets (biplets) when it is equalized with that of the right-shift.This general solution for S is determined by the following equation (Mišić, 2010): S (1) where − is a q-nary digit on the ith position. The solution could be expressed in a simple form (Mišić, 2010;cf. shCherbak, 2003): where ( ) p R q is pth repunit in the numeral system of radix q, given as 1 2 ( ) 11 11 1 Eq. ( 1) gives the necessary and sufficient condition for the existence of S in numeral system of arbitrary radix q, which is minimally satisfied for = −1 p q , and thus in each numeral system there exists at least one S. The graphic representation of Eq. (1) (Fig. 1) shows interdependence between the equidistant multiplying and equidistant digit distribution of numbers with the cycling digit property.In Fig. 1, it can be observed that each S has the analogue numbers for other radix q and multipletness p, so called S analogues ( A S ) (Mišić, 2010).Two groups of A S can be distinguished -those with the same number of digits (vertically arranged) and those with similar digits [horizontally arranged, Eq. ( 6)], which is leading to the definition of "vertical" and "horizontal" S analogues (Fig. 1). Definition 2 Self-similar numbers ( ) p q = S S for constant p and variable q are the vertical analogues of class p or p-plets. The successive vertical A S have the radix difference p and the digit difference 0,1,2,..., 2, 1 p p − − , respectively.Definition 3 Self-similar numbers ( ) p q = S S for variable q and p with the constant ( 1) q p − are the horizontal analogues of order ( 1) q p − .The successive horizontal A S , for instance of order 3, enable the next transformation: The fact that some S have vertical and horizontal analogues in the same numeral system, enables the defining of the third kind of analogues (Fig. 1). Definition 4 Self-similar numbers ( ) p q = S S for variable q and p so that = + 2 1 q p are the diagonal analogues. Figure 1.The regular digits distribution of S in dependence of numeral system radix q and digit multiplicity p.The colored arrows denote three types of S analogues.Vertical and horizontal analogues are shown for the case of number 037, while the diagonal analogues are unique and independent of the particular case (Mišić, 2010). The main property of S results from Eqs. 
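The examples above can be reproduced numerically. Assuming the closed form suggested by the text, S(q, p) = R_p(q)/p with R_p(q) the p-digit repunit in radix q (which indeed yields 037 for q = 10, p = 3, and the analogues [13]_4, [25]_7, [49]_13), the sketch below computes these candidates and verifies the defining property directly: the right-shift cyclic permutations of S coincide with its multiples by the equidistant factors 1 + m(q − 1). The closed form and the check are illustrative reconstructions, not the paper's code.

```python
def to_base(n, q, width):
    """Base-q digits of n, padded with leading zeros to the given width."""
    digits = []
    for _ in range(width):
        n, d = divmod(n, q)
        digits.append(d)
    return digits[::-1]

def from_digits(digits, q):
    n = 0
    for d in digits:
        n = n * q + d
    return n

def self_similar(q, p):
    """Candidate S(q, p) = R_p(q) / p; None when p does not divide the repunit."""
    rep = (q**p - 1) // (q - 1)
    return rep // p if rep % p == 0 else None

def check_cyclic_property(q, p):
    """Right-shift cyclic permutations of S equal S * (1 + m (q - 1)), m = 0..p-1."""
    S = self_similar(q, p)
    if S is None:
        return False
    digs = to_base(S, q, p)
    perms = {from_digits(digs[-i:] + digs[:-i], q) for i in range(p)}
    mults = {S * (1 + m * (q - 1)) for m in range(p)}
    return perms == mults

print(self_similar(10, 3))                                           # 37, the decimal prime quantum 037
print(self_similar(4, 3), self_similar(7, 3), self_similar(13, 3))   # 7, 19, 61 = [13]_4, [25]_7, [49]_13
print(self_similar(10, 9))                                           # 12345679, the decimal fundamental period
for q, p in [(10, 3), (4, 3), (7, 3), (13, 3), (10, 9)]:
    assert check_cyclic_property(q, p)
print("equidistant cyclic-multiple property verified")
```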
( 2) and (3), which shows that S for (prime) multiplicity p are related to (irreducible) cyclotomic polynomials and thus to the pth roots of unity in complex domain, (Mišić, 2010).This relationship of self-similar numbers with cyclotomic polynomials, which describe regular polygons, is in the correlation with their definition as the numbers which are equidistantly multiplied by regular cycling of their digits. Cyclotomic polynomial, Eq. ( 7), can be regarded as a complementary form of generalized Golden polynomial whose largest root on the open interval (1, 2) is the generalized Golden Mean (Miles, 1960), wherefrom the relation of S to generalized Golden Means follows (cf.Tab. 6) (Mišić, 2010). In the case of the basic form of Golden Mean, , and its golden polynomials, complementary cyclotomic polynomials are obtained, 2 3 ( ) 1, q q q Φ = + + (11) whose roots are , give the vertices of regular hexagon (this complementarity for more general case is given in Tab. 6).This is in consistence with the fact that triplets S represent the centered hexagonal numbers (Mišić, 2004;2010). Fractal properties of decimal varieties of number 037 In the previous Section it has been shown that S has analogues in other numeral systems, so it is questionable whether it has its varieties in the same numeral system.The extension of S for the given numeral system can be done by modifying repunit in Eq. (2). Since S is fundamentally related to equidistantness (Defs.1A and 1B), then the equidistant extension of repunit, Eq. ( 3), with preserving divisibility in Eq. ( 2), can be done by the two operations -equidistant insertion ( ) , that are respectively described by ( ) According to Eqs. ( 2), ( 13) and ( 14), it is possible to define two types of S varieties Definition 5 The nth vertical variety of S, S S , for the given numeral system q and digit multiplicity p is , ( ) Definition 6 The nth horizontal variety of S, S S , for the given numeral system q and digit multiplicity p is From Eqs. ( 13) and ( 14) it follows that V S can be extended only in the direction of increasing values of q and p ( → n q q and → × p p n), while A S can be extended in both directions ( − ← → + q p q q p and − ← → + 1 1 p p p ), which is the main difference in their notation (Tab.2). Table 2.The notations of ( ) p q S modifications. q -base radix, p -number of digits, m -order of analogue, n -order of variety. The V S are defined so that the first S variety, for = 1 n , reduces to S and Eqs. ( 15) and ( 16) to Eq. ( 2).For ≥ 2 n , Eq. ( 15) reduces to which means that the initial extension of pplet S to p×n-plet in same radix q is equivalent to the scaling of p-plet S in radix n q (for instance, S S ), and which is actually vertical A S for radix n q and thus this type of varieties is also named vertical (Tab.3A). In contrast, the extension of p-plet S according Eq. ( 16) results in has not p-plet S counterpart for the scaled radix n q , but leads to p×n-plet in the same radix q, which is comparable to horizontal A S (variable p-plets, ↔ S ) and hence the name horizontal V S , → S .The relation between vertical and horizontal V S for the number 037 is given in Tab.3A. 
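The scaling just defined can be made explicit for the decimal triplet. Its nth vertical variety is S_3(10^n) = ((10^n)² + 10^n + 1)/3, giving 37, 3367, 333667, 33336667, ..., and each of these, like 37 itself, is a centered hexagonal number (cf. Tab. 3B). The sketch below checks both statements; it is an illustrative verification, not part of the original proof.

```python
import math

def variety(n):
    """nth vertical variety of 037: S_3(10^n) = ((10^n)^2 + 10^n + 1) / 3."""
    q = 10 ** n
    return (q * q + q + 1) // 3

def is_centered_hexagonal(m):
    """m is centered hexagonal iff m = 3k(k - 1) + 1 for some integer k >= 1,
    i.e. iff 12m - 3 is a perfect square whose root s satisfies 6 | (s + 3)."""
    s = math.isqrt(12 * m - 3)
    return s * s == 12 * m - 3 and (s + 3) % 6 == 0

for n in range(1, 7):
    v = variety(n)
    assert is_centered_hexagonal(v)
    k = (3 + math.isqrt(12 * v - 3)) // 6
    print(f"n = {n}: {v} = 3*{k}*{k - 1} + 1")
```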
S show very interesting property of numerical scaling, we will focus on them, especially on the triplets S which are the centered hexagonal numbers, in order to examine their potential geometrical and arithmetical scaling.It can be proved that all ↓ S for the triplets ( 3 ( ), for n q n ↓ ∀ ∈N S ) also represent the centered hexagonal numbers, which are given specifically for the vertical decimal varieties of number 037 (Tab.3B). The erasing of digits in V S (Tab. 3)which result from inserted and concatenated positions by Eqs. ( 13) and ( 14) (normal formatted digits in V S and in the indexes) reduces expressions to the original ones (the first row in Tab. 3).T n -nth trigonal (triangular) number, c H n -nth centered hexagonal number. Each ↓ S reflects the same kind of vertical and horizontal analogy as the original S (Defs. 2 and 3; Fig. 1), since ↓ S is reduced to S for powered radix, i.e. n q .Consequently, the vertical successive analogues of ↓ S have the radix difference p and the digit differences, for instance in the case − , respectively, while the horizontal successive analogues of ↓ S , for instance of order 3, enable the following analogue transformation to Eq. ( 6): Further scaled or nested properties of ↓ S are manifested in their multiplicative table (Tabs.4 and 5), because according to Eqs. ( 17) and (18) all these numbers satisfy and thus are related to the p×nth roots of unity, , and represent the numbers with cyclic digit property.Concretely, for S are both centered hexagonal numbers (Tab.3), it is interesting to examine whether their multiples also have some geometrical meaning.For the first three multiples of 3 (10) it is shown that 3 (10) S are related to polygonal numbers (Fig. 2A-C) and according to nested principle it is also valid for S , which is consequently valid for 037 and its varieties.13) and ( 15)].Grey digits are the result of the inserted positions introduced by Eq. 13, and their erasing reduces all numbers to the original Tab. 1 (for bold and normal formatted numbers, respectively).Normal numbers are obtained by the right-shifting of leftmost digit and, together with the first congruence class, they consist of a set of most regular (almost perfectly equidistant) multipliers whose successive differences are 10 (grey shaded fields).700033366 733400066 766766766 800133466 833500166 866866866 900233566 933600266 966966966 The explanation of Tab. 5 is the same as in Tab. 4, except the fact that the multiplicand is 000333667 and the scaling is done by 3 [n=3 in Eqs. ( 13) and ( 15)], as well that the most equidistant multipliers successively differ by 100 (grey shaded fields).Nested properties of decimal varieties of 037 can be also deduced from the fact that 037 almost perfectly divides all its varieties, for instance: 3367=37×91, 333667=37×9018+1, 33336667=37×900991, 3333366667=37×90090991, 333333666667=37×9009018018+1, and so on (cf.Tab.6B). The indicated arithmetical regularities of ↓ S (Tabs. 4and 5) are actually the consequence of a deeper regularity in the numeral system.Namely, it follows from Eqs. ( 2) and ( 4) that the biggest S for the given numeral system is for 1 p q = − , when a number is obtained in the form (the second raw in Fig. 1): 1 ( ) 0123 ( 3)( 1) The importance of 1 ( ) q q − S is in the fact that it represents the period of q-nary numeration of natural numbers, so called fundamental period of q-nary numeral system (Fig. 
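The claim of Tab. 3B, that the vertical triplet varieties of 037 remain centered hexagonal numbers, is easy to check numerically. The helpers below are hypothetical, and the reading of the vertical variety as S₃ in the scaled radix 10ⁿ, i.e. R₃(10ⁿ)/3 = 37, 3367, 333667, ..., is an assumption consistent with 3367 = 10101/3 and 333667 = 1001001/3:

```python
from math import isqrt

def repunit(q, p):
    return (q**p - 1) // (q - 1)

def centered_hexagonal_index(x):
    """Return n if x = cH_n = 3*n*(n-1) + 1 is a centered hexagonal number, else None."""
    d = 12 * x - 3
    r = isqrt(d)
    if r * r != d or (3 + r) % 6:
        return None
    return (3 + r) // 6

# Vertical triplet varieties of 037, read as S_3 in the scaled radix 10**n:
for n in range(1, 5):
    s = repunit(10**n, 3) // 3            # 37, 3367, 333667, 33336667
    print(s, "= cH index", centered_hexagonal_index(s))
```

The printed indices are 4, 34, 334, 3334, matching the "...3334-th centered hexagonal numbers" mentioned in the caption of Fig. 3.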
4) (Mišić, 2004).Fundamental period 1 ( ) q q − S and its length − 1 q are related to the basic arithmetical periodicity (modularity) in q-nary system, and thus to the divisibility rule which follows from modular arithmetic.Decimal numeration of natural numbers results in the periodic number with period 012345679, while its triple value results in the periodic number with period 037 (Mišić, 2004). S for p prime and − 1 p q represent the elementary periods of q-nary numeral system where p is their period length. is in its resulting from the product of ( ) p q S and its vertical variety, so that for = − ( 1) n q p and from Eqs. ( 17), (18), and (23) it follows q p q p q p p p p q q q q q S S S S S (24) For diagonal A S (Def.4), when − = 2 1 q p , Eqs. ( 23) and ( 24) are reduced to S . ( 28) From Fig. 3 and Eq. ( 27), it also follows that the fundamental period of the decimal system is the product of two polygonal numbers, and thus it can be considered as a composite polygonal number (Fig. 5). Definition 7 Composite polygonal number is a positive integer that can be entirely factorized into two or more other polygonal numbers.A particular property of composite polygonal numbers is that they have multivariate geometrical form, as in the case of composite number 259, which is 7 th multiple of 037 (Fig. 5). The last particular property of S to be mentioned in this paper is their relation to the generalized Golden Mean, Eqs. ( 8), also valid for ↓ S according to Eq. ( 17), thus transforming Eqs. ( 8)-( 12) as it is shown in Tab. 6.These two types of complementary polynomials that differ only in the signs, give two geometrically complementary solutions.The first solution relates to the Golden Mean and the corresponding numbers 89, 109 with their scaled values, but also to the Fibonacci numbers (Tab.6A).The second solution relates to the nth roots of unity and cyclotomic values 111, 91 with their scaled values, but also to 037 and its decimal varieties (Tab.6B), and thus to the centered polygonal numbers (Fig. 3).Table 6.Comparation between Golden Mean polynomials and cyclotomic polynomials for q = 10. S S S and S S S and which means that 037 analogue precursors are the analogue successors of the previous diagonal analogue 03 5 , Eq. ( 30), and vice versa for next diagonal analogue 048D 17 , Eq. (31) (Fig. 1).Thus, Eqs. ( 30) and ( 31) enable general concatenation of diagonal analogues, which are also the only S with a fundamental period of q-nary system in the form of the product of ( ) p q S and its scaled value in the powered radix, Eq. ( 26).The uniqueness of 037 and thus the decimal system follows from Eq. ( 29) which has the general form for and shows that only 10 n -nary systems are completely determined by the centered hexagonal numbers, and thus correlated with the hexagonal lattice or equilateral triangular lattice.This lattice belongs to the Bravais lattice, an infinite regular array of points in which each lattice point has exactly the same environment in 2 R (and in 3 R ). The condition for this regular point arrangement is ∈ 2cos( 2) N π Z, which implies that = 1,2,3,4,6 N , for which N is said to be crystallographic number and lattice Nfolded Bravais lattice.Using the cyclotomic ring of order N in the plane, i.e. by the Zmodule, it can be obtained: 2), ( 3) and ( 7), and thus really correlated with hexagonal lattice, Eqs. 
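One way to see the periodicities behind Fig. 4, together with the composite polygonal number 259 of Fig. 5, is sketched below; the reading of "decimal numeration of natural numbers" as the series Σ_{n≥1} n·10⁻ⁿ = 10/81 is my interpretation, not a statement of the source:

```python
from decimal import Decimal, getcontext
getcontext().prec = 30

print(Decimal(10) / Decimal(81))   # 0.123456790123456790...  period 012345679
print(Decimal(10) / Decimal(27))   # 0.370370370...            the tripled value, period of the digits of 037
print((10**9 - 1) // 81)           # 12345679 = S(10, 9), the fundamental period of the decimal system
print(7 * 37)                      # 259 = cH_2 * cH_4, the composite polygonal number of Fig. 5
```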
( 33).Similarly, it can be proved that the doublets, 2 ( ) n q ↓ S , are correlated with square lattice and with centered square numbers, which will be better explained in the next paper. All the analyses carried out in this Section indicate that, from the aspect of selfsimilarity, the number 037 and decimal system are unique integer and numeral systems in the whole realm of numbers and systems, and thus the number 037 has a central place in the whole set of A S (Fig. 1). Discussion The most general correspondence between the 037 based coding/computing and the genetic code and its evolution in Woese's sense (Woese, 2002;Vetsigian et al. 2006) can be derived from the existence of horizontal and vertical 037 analogues and its central place among them.According to Woese's dynamical theory of genetic code evolution, such process represents the first stage in cellular evolution at the root of the universal phylogenetic tree and it is the result of communal evolution of the early life by the horizontal gene transfer, so that the universal genetic code is actually a generic consequence of such process and the precondition for the later individual evolution by the vertical gene transfer. Similarly, the horizontal 037 analogues are geometrically very distinctive, while the vertical are self-similar.This mathematical logic is also comparable to Barbieri's mechanism of macroevolution (Barbieri, 1998;2008) based on two distinct processes, coding and copying, the first of which creates absolute novelties and involves a collective set of rules (for instance, translation), while the second creates relative novelties and operates on individual molecules (transcription).Because of the need for simultaneous action of both mechanisms, the existence of a universal mechanism which can act in both directions is demanded, and that is the characteristic of 037 based coding/computing by its diagonal analogues. The reflection of 037 based nested coding/computing must be embodied in other biocodes, like the genomic code.It means that the genomic code must be generally established on the sequence length of 1000 nucleotide as the basic computing frame of number 037 (Mišić, 2010), consistent with the detrended fluctuation analysis that "…clearly supports the difference between coding and noncoding sequences, showing that the coding sequences are less correlated than noncoding sequences for the length scales less than 1000, which is close to characteristic patch size in the coding regions" (Havlin et al., 1995).Moreover, the frequencies of the 64 codons in the whole human genome scale are a self-similar fractal expansion of the universal genetic code and strongly linked to the Golden Mean, indicating that the universal genetic code table predetermines global codon proportions and populations, thus governing both micro and macro behavior of the genome (Perez, 2010), which is also the reflection of 037 based nested coding/computing, since we have shown the complementarity of the number 037 with the Golden Mean (Tab.6).Rakočević (1998) pointed out that the universal genetic code table is in itself determined by Golden Mean. The next correspondence with 037 based nested coding/computing will be examined on the nucleic level, since shCherbak (1994) , where 125 = 3 5 is 5 th cubic number), the same pattern which appears in Eq. ( 42).Although the nucleon sums of canonical DNA base pairs and RNA base U are the multiples of 037, they can be also expressed in the form of the composite polygonal numbers (Fig. 
5 and 2C) and S arithmetic, (5) ( ). In spite of the "imperfect" divisibility of nucleon number for C G ≡ pair, it can be a perfect computational feature due to the modular arithmetic, since which is the universal periodical pattern (GCU) n in mRNA and appears to be a fossil of a very ancient organization of codons (Trifonov and Bettecken, 1997), and the reason for that can be the fact that the repetitive sequence of this triplet also enables counting. The last correspondence is on the level of shCherbak's arithmetic inside the genetic code.Starting from the first shCherbak's result of the arithmetical regularities of the genetic code compressed representation for division according the amino acids degeneracy (Shcherbak, 1994) [also Fig. 9 in (shCherbak, 2008)], the relation of nucleon numbers for blocks+chains=whole molecules can be expressed in the form of figurate numbers and/or S arithmetic for the four-codon amino acids as and where 4 n Py is the nth square pyramid [037 is also the number of points in a square lattice covered by a disc centered at (0,0) in the form of an octagon (Sloane and Teo, 1984)]. According to the Gamow's division of genetic code [Fig. 7 in: (shCherbak, 2008)], the sum of the side chains for the one half of the set is 3 4 packing quantum which describes a collective mass of particles and they are also in correlation with the regular geometrical arrangement and space measurement); 5) the involving of periodic, crystal-like lattice dipole structure with long range order (the doublet and triplet S represent the packing quantums for square and hexagonal lattice).Penrose and Hameroff (2003) proposed cytoskeletal microtubules as a biological structure particularly suitable for quantum computation and for whose coherence sustaining, among others, an important role belongs to the coherently ordered water trough the dynamical coupling to the protein surface.It is important that microtubules are the cylinders whose walls are hexagonal lattices of subunit proteins known as tubulin, while the ordered water next to hydrophilic surfaces such as the tubulin, according to research by Pollack and his collaborators, behaves like a liquid crystalline (Zeng et al., 2006) with the ice-like structure in the form of hexagonal layers whose oxygens are not linked by proton bonds like in the ice, but the layers are stacked by interaction of opposite charges (Pollack, 2012).Since mathematical properties of number 037 are also fundamentally related to hexagonal lattice, then biological coding/computing might be actually fundamentally based on hexagonal symmetry and packing.Therefore, water might be the perfect biological coding/computing medium, as the DNA sequence reconstitution from the treated water indicated (Montagnier et al., 2010), and could have had a crucial primordial role (Pollack et al., 2009) both in the selection of life building block, such as canonical nucleic and amino acids, and in principally predeterminate evolution of genetic code with small degree of freedom. Generally, the presented mathematical properties of number 037 and its realization in the genetic code and to a lesser presented extent in genomic code, indicate that the biological coding/computing is essentially the process both geometrical in nature and determined by the self-similar symmetry, giving the base for the biological large-scale coherence systems and biological quantumclassical intertwining. 
Conclusions ShCherbak's arithmetic inside the genetic code has a firm mathematical foundation in the sense that it is related to the number 037 -a unique decimal number from which an infinite set of self-similar numbers can be derived, with the nested numerical, geometrical, and arithmetical properties.Their correlation with self-similar symmetry, but also with the cyclotomic polynomials and thus the crystallographic lattices, can explain the numerous consistent arithmetical regularities of nucleon numbers of canonical amino acids for quite different systematizations of the genetic code.Biological coding/computing based on the self-similar numbers enables the realization of the nested organic codes, not only as the simplest error-correcting codes, but also as the biological systems with the holistic fractal structural and dynamical organization and thus the large-scale coherence systems, which is one of the main properties of biological organisms.Since such coding/computing is based both on the geometry and optimal space quantization, and on resonance and long-range interactions, the biological organisms reflect the principles of coding/computing in the physical world and deeply interact with it, which also justifies the fact that they are understood as information determined systems.The suggested coding/computing is also correlated with liquid crystalline water, emphasizing its crucial role in life origin and evolution.Mathematical possibility of the infinite nested coding/computing with selfsimilar numbers enables a limitless physical domain transition, which can potentially explain biological quantum-classical intertwining and general quantum-classical duality. Acknowledgment Author thanks to Professors Tidjani Négadi, Aleksandar Tomić, and Miloš Milovanović for their valuable comments.This research has been partially funded by the Ministry of Science and Technological Development of the Republic of Serbia, through Projects TR-32040 and TR-35023. Dedication This article I dedicate to my scientific "teacher", Professor Miloje M. Rakočević, who bridged my scholarly knowledge to the original scientific work. much more obvious if the extension of digit notation for higher radix digits is based on decimal numbers, i.e . (20)].The partial multiplicative table of number 003367, given in Tab. 4, represents its most regular multiplier distribution whose successive differences for (a)-rows are 1, and for (b)-rows are 10.The second and third (a)-rows in Tab. 4, for numbers made up of different digits, are obtained by cyclic permutation of two digit blocks of the first (a)-row according to the same rule in Tab. 1, and generally, (a)-rows (bold numbers) represent original multiplicative table of 037 scaled by 2 [ = 2 n in Eqs.(13) and (15)].The (b)-rows, obtained by right-shift cyclic permutation of the leftmost digit of congruent numbers in (a)-rows, together with the first congruence class form the set of most regular (almost perfectly equidistant) multipliers whose differences are 10 (grey shaded fields in Tab.4).Similarly, the partial multiplicative table of number 000333667, given in Tab. 5, represents its most regular multiplier distribution whose successive differences for (a)-rows are 1, for (b)-rows are 10, and for (c)-rows are 100.The second and third (a)rows in Tab. 5, for the numbers made up of different digits, are obtained by cyclic permutation of three digit blocks of the first (a)-row according to the same rule in Tab. 
1, and generally (a)-rows (bold numbers) represent the original multiplicative table of 037 scaled by 3 [ = 3 n in Eqs.(13) and (15)].Erasing every third digit in the numbers of (b)-rows in Tab. 5, reduces them to (b)-rows in Tab. 4 (for instance, a multiple ), while erasing the grey digits in the numbers of any row results in Tab. 1, indicating that in Tab. 5 are nested both Tab. 1 and Tab. Figure 2 . Figure 2. Polygonal numbers related to the first three multiples of number 037.A) The first multiple of 037 exactly corresponds to the 4 th centered hexagonal number, c H 4 = 37; B) The second multiple of 037, with a unit difference, corresponds to the 4 th star number, S 4 = 73; C) The third multiple of 037 exactly corresponds to the 2 nd composite triangular number, T 2 × c H 4 = 111; D) The third multiple of 037, with a unit difference, corresponds to the sum of the 4 th centered hexagonal and star number, S 4 + c H 4 = 110. Figure 3 . Figure 3. Nested polygonal numbers related to the first three multiples of number 037 decimal vertical varieties.The first multiples of 037 varieties exactly successively correspond to the …3334th centered hexagonal numbers, while the second multiples are bigger for 1 than successive …3334 th star numbers.The third multiples of 037 varieties again exactly correspond to the 2 nd composite triangular numbers. Figure 4 . Figure 4. Decimal numeration of natural numbers results in the periodic number with period 012345679, while its triple value results in the periodic number with period 037(Mišić, 2004). Figure 5 . Figure 5. Noncommutative multiplication of the composite polygonal numbers, shown in number 259, which is the product of two centered hexagonal numbers and also triplet S, gives two different geometric forms [cf.Eqs.(34), (35), and (37)]. lattice.Comparing this with S reveals that the triplets, 3 ( ) the centered hexagonal numbers (Figs. 2 and 3), are also the reduced cyclotomic values for 3 p N = = , Eqs. ( be geometrically interpreted as in Fig.2D.For the bases which do not individually correspond to nucleon number 037divisibility, i.e. bases T, A, and G, their nucleon number differences satisfy the squares of the first three Pythagorean numbers ( ( the nucleotide sequence into the nucleon binary string, so that the pair C G ≡ would have the meaning of the unit element and counting, while T=A would have the meaning of the neutral element for additive operation.For the triplet, the similar modular unit mass has the combination of U, C, and G bases Table 4 . Partial multiplicative table of number 003367.the multipliers of 003367, while big numbers are the proper multiples of 003367.Bold numbers result from the original multiplicative table of 037 (Tab. 1) scaled by 2 [n=2 in Eqs. ( Table 5 . Partial multiplicative table of number 000333667.
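To close the discussion of Tabs. 1, 4 and 5, the block-cyclic structure they describe can be checked directly. The function name below is hypothetical and the block-cycling rule is my reading of the tables: the p digit blocks of S are right-cycled and every resulting number is again a multiple of S, with equidistant multipliers:

```python
def block_cycle_multipliers(s, block_base, p):
    """Split s into p blocks of digits (base `block_base`), right-cycle the blocks,
    and return the multipliers s'/s of all block permutations (each must be an integer)."""
    blocks = []
    n = s
    for _ in range(p):
        blocks.append(n % block_base)
        n //= block_base
    blocks = blocks[::-1]
    mults = []
    for k in range(p):
        perm = blocks[-k:] + blocks[:-k] if k else blocks
        val = 0
        for b in perm:
            val = val * block_base + b
        assert val % s == 0
        mults.append(val // s)
    return sorted(mults)

print(block_cycle_multipliers(37, 10, 3))         # [1, 10, 19]      -- Tab. 1
print(block_cycle_multipliers(3367, 100, 3))      # [1, 100, 199]    -- Tab. 4, scaling n = 2
print(block_cycle_multipliers(333667, 1000, 3))   # [1, 1000, 1999]  -- Tab. 5, scaling n = 3
```

In each case the multipliers are equidistant with step (block base) − 1, mirroring for the scaled tables the step q − 1 = 9 of the original multiplicative table of 037.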
Analysis of crystallographic orientation influence on thermal fatigue with delay of the single-crystal corset sample by means of thermo-elasto-visco-plastic finite-element modeling The influence of a delay time at the maximum temperature on the number of cycles before the macrocrack initiation for two thermal loading programs was investigated for single-crystal nickel-based superalloy VZhM4. An analytic approximation of a delay time influence was proposed. Comparison of the computational results and analytic formula on the basis of constitutive equations with the experimental data was performed for various single-crystal nickel-based superalloys and showed a good accuracy. The influence of crystallographic orientation of the corset sample on the thermal fatigue durability with delay times was investigated for various thermal loading programs and single-crystal nickel-based superalloys. Introduction Single-crystal nickel based superalloys [1,2] are promising used for production of gas turbine engines (GTE) [3]. These materials have a pronounced anisotropy and temperature dependence of properties. Cracking in the turbine blades is caused often by thermal fatigue [4,5]. For the investigation of thermal fatigue durability under a wide range of temperatures with and without delay times the experiments are carried out on different types of samples, including corset (plane) specimen [4] on the installation developed in NPO CKTI [6] (see Fig. 1). Fixed in axial direction by means of two bolts with a massive foundation the corset sample (see Fig. 2) is heated periodically by passing electric current through it. The fixing of sample under heating leads to the high stress level and inelastic strain appearance. The local strain and stress concentration is observed in the central (working) part of sample. The FE simulation is required for the computation of inhomogeneous stress and inelastic strain fields. The aim of the research is to study systematically the effect of delay at maximum temperature on the thermal fatigue durability on the base of the deformation criterion [7][8][9][10][11] for single crystal superalloys using the results of finite element (FE) simulation of full-scale experiments and results of analytical formulae and to study systematically the effect of crystallographic orientation on the thermal fatigue durability. The results of simulation and their verification are obtained for single-crystal nickel-based superalloy VZhM4. Methods Modeling of inelastic deformation in the corset samples has been performed with taking into account of the temperature dependence of all material properties, anisotropy of mechanical properties of single crystal sample, inhomogeneous temperature field, mechanical contacts between bolt and the specimen, between specimen and foundation, temperature expansion in the specimen. The two FE formulations for the thermomechanical problem have been considered: • with taking into account equipment; • without taking into account equipment (simplified formulation [12] for the sample only). The validity of the simplified formulation is based on the comparison with the results of full-scale formulation (with taking into account equipment), as well as on the comparison with the displacements of two markers measured in experiments. The problem was solved in a three-dimensional, quasi-static formulation. As boundary conditions the symmetry conditions were set: zero displacements on the y-axis on the xz plane and zero displacements on the x-axis on the yz plane. 
On the lower side of the equipment zero displacements along the x and z axes were set. Tightening force was applied on the bolt cap. The temperature field distributions were set from the experimental data at maximum and minimum temperature with linear interpolation in time [13]. The results of finite element heat conduction simulations [13,14] consistent with experimental temperature field distributions. The mechanical properties for alloy VZHM4 were taken from the paper [15] are presented in Table 1. The mechanical properties of bolts are taken for pearlitic steel [16]. Used material properties consistent with considered in [17,18]. In simplified formulation (see Fig. 3) we consider only the sample without equipment, in which zero displacements on the symmetry planes xz and yz were set, the outer face of the sample parallel to the symmetry plane xz was fixed in the direction of the axis x. To exclude solid body motions, a number of points on this face were also fixed in the direction of the y and z axes. The full effective length for superalloy VZhM4 for several temperature modes was 42 mm [13]. In the FE simulations the full length of the specimen for all alloys was taken to be 40 mm. Simulation of inelastic cyclic deformation of corset samples were performed with using of the FE program PANTOCRATOR [19], which allows to apply the micromechanical (physical) models of plasticity and creep for single crystals [20][21][22]. The micromechanical plasticity model accounting 12 octahedral slip systems with lateral and nonlinear kinematic hardening [20] was used in the FE computation for single crystal alloy. FE computations were carried out for a part of a corset sample (simplified FE model with half-effective length of sample equal 20 mm, see Fig. 3b). The temperature boundary conditions were set from the experimental data at maximum and minimum temperature with linear interpolation in time. y x z The influence of the delay at maximum temperature and the influence of crystallographic orientation on the number of cycles to the formation of macrocrack is analyzed in the range from 1 min to 1 hour for the cyclic loading regimes (see, for example, Fig. 9b) with: • maximum temperature of 1050 °C and a temperature range of 350 °C; • maximum temperature of 1050 °C and a temperature range of 550 °C; The heating times in the cycle were 24s and 7s, the cooling time was 15 s for VZhM4. The mechanical properties for the alloy VZhM4 were taken from the paper [15]. The problem was solved in a quasi-static 3-dimensional formulation. The boundary conditions were zero displacements in the direction of the x-axis on two side faces of the sample with the normal along the x-axis. To exclude solid-state motions, a number of points on these faces in the direction of the y and z axes were also fixed ( fig. 5). Damage calculation and estimation of the number of cycles before the formation of macrocracks were made on the basis of deformation four-member criterion [7][8][9][10][11]: where the first term takes into account the range of plastic strain within the cycle, the second term is the range of creep strain within the cycle, the third term is accumulated plastic strain Analytic approximation is offer to enter for describing of delay time influence on thermal fatigue strength. We consider the principle of deformation additivity in case of uniaxial loading: where Ɛ is the full initial strain, Ɛ = is the elastic strain, Ɛ is the plastic strain, Ɛ is the creep strain and Ɛ is the temperature strain. 
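Since the derivation that follows rests on exactly this additive split, a small numerical sketch may help fix ideas: during a delay at the maximum temperature the clamped sample keeps an essentially fixed total strain, so elastic strain converts into creep strain while the stress relaxes according to Norton's law. All numerical values below are hypothetical placeholders (not VZhM4 data), hardening is neglected (E is used where the tangent modulus E_T would appear), and the exponent n = 8 follows the value quoted in the text:

```python
# Hypothetical illustration values -- NOT material data for VZhM4:
E = 1.0e5        # MPa, elastic modulus (the tangent modulus E_T would replace E if hardening were kept)
A = 1.0e-26      # Norton creep coefficient (units for sigma in MPa, time in s)
n_exp = 8.0      # Norton exponent, n = 8 as used in the simulations
sigma0 = 300.0   # MPa, stress at the start of the delay at maximum temperature

def sigma_closed(t):
    """Closed-form relaxation at fixed total strain: d(sigma)/dt = -E * A * sigma**n."""
    return (sigma0 ** (1.0 - n_exp) + (n_exp - 1.0) * E * A * t) ** (1.0 / (1.0 - n_exp))

# Explicit-Euler check of the same ODE, accumulating the creep strain of the hold
dt, t_hold = 0.1, 3600.0          # a one-hour delay
sigma, eps_creep = sigma0, 0.0
for _ in range(int(t_hold / dt)):
    rate = A * sigma ** n_exp     # Norton creep rate
    sigma -= E * rate * dt        # relaxation: elastic strain converts into creep strain
    eps_creep += rate * dt

print(sigma_closed(t_hold), sigma)        # closed form vs numerical stress after the hold
print(eps_creep, (sigma0 - sigma) / E)    # creep strain accumulated during the delay, two ways
```

The power-law expression printed by sigma_closed is the same relaxation law obtained by the integration carried out in the next step; the creep strain accumulated during the hold is what the simplified deformation criterion converts into a number of cycles to macrocrack initiation, producing the dependence of durability on delay time.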
Differentiation (2), using Ɛ ̇ = ̇, where H is the hardening modulus [23], Norton law Ɛ ̇ = A , taking into account E+H= is the tangent modulus [24] and dividing the equation by we put: −̇ = -A (3) Splitting variables, integrating from 0 to time t and using Ɛ ̇ = A we put: Ɛ ̇ = A ( 0 1− + ( − 1) ( − 0 )) 1− (4) Using variables changing τ = 0 1− + ( − 1) ( − 0 ) and integrating from 0 to time t we obtain: that leads to: Using simplified deformation criterion with taking into account creep deformation terms: where Ɛ is the ultimate strain of creep under uniaxial tension, N is the number of cycles of macrocrack initiation we obtain: In the simulations we use = 8. Results and discussion The comparison of the results of FE simulations and experiments concerning the effect of the delay time at the maximum temperature on the thermal fatigue durability for single-crystal superalloys VZhM4 and is given in Fig. 7. Comparison of results of experiment and analytical approximation concerning the effect of the delay time at the maximum temperature on the thermal fatigue durability for singlecrystal superalloy VZhM4 is given in Fig. 8. Note that the additive experimental verification is required for the near to horizontal branches of curves in fig. 7 and 8 corresponding to remarkable delays. Influence of crystallographic orientation (CGO) on thermal fatigue strength for superalloys VZhM4 for two temperature modes is presented in fig. 9. The thermal fatigue durability of samples from superalloy VZhM4 with CGO <001> exceeds the thermal fatigue durabilities of CGO <011> and <111> ( fig. 9) for all considered loading programs. Further improvement of the accuracy of thermal fatigue durability calculations with delays can be achieved by considering more complex creep models [25,26] and taking into account the rafting process [27] at high temperatures. Conclusions The results of the computations and the analytical approximations of delay-time influence on thermal fatigue durability show a good agreement with the experiment, which suggests that the finite-element and analytical computations in combination with application of deformation criterion (7) can be used to predict the thermal-fatigue strength of various singlecrystal superalloy samples with different delays. Researching of CGO influence has showed that thermal fatigue durability of specimens with crystallographic orientation <001> is the highest among all considered variants and specimens with crystallographic orientation <111> is the weakest among all variants of orientations.
Fine representation of Hessian of convex functions and Ricci tensor on RCD spaces It is known that on $\mathrm{RCD}$ spaces one can define a distributional Ricci tensor ${\bf Ric}$. Here we give a fine description of this object by showing that it admits the polar decomposition $${\bf Ric}=\omega\,|{\bf Ric}|$$ for a suitable non-negative measure $|{\bf Ric}|$ and unitary tensor field $\omega$. The regularity of both the mass measure and of the polar vector are also described. The representation provided here allows to answer some open problems about the structure of the Ricci tensor in such singular setting. Our discussion also covers the case of Hessians of convex functions and, under suitable assumptions on the base space, of the Sectional curvature operator. Introduction A classical statement in modern analysis asserts that a positive distribution is a Radon measure.This fact extends to tensor-valued distributions so that, for instance, the distributional Hessian of a convex function on R d , that for trivial reasons is a symmetric non-negative matrix-valued distribution, can be represented by a matrix-valued measure.The proof for the tensor-valued case follows from the scalar-valued case simply by looking at the coordinates of the tensor.To put it differently, the fact that on R d we can find an orthonormal base of the tangent bundle made of smooth vectors allows to regard a tensor-valued distribution as a collection of scalar-valued ones and thus to transfer results valid in the latter case into the former one. The fact that positive functionals defined on a sufficiently large class of functions are represented by measures can be extended far beyond the Euclidean setting, up to at least locally compact spaces: this is the content of the Riesz-Daniell-Stone representation theorems.In this paper we are concerned with the tensor-valued case when the underlying space is an RCD(K, N ) spaces.These class of spaces, introduced in [18] after [31], [33,34], [3] (see the surveys [1], [22] and references therein) are the non-smooth counterpart of Riemannian manifolds with Ricci curvature ≥ K and dimension ≤ N .One of the key features of these spaces, and in fact the essence of the proposal in [18], is that calculus on them is built upon the notions of "Sobolev functions" and "integration by parts".As such, it is perhaps not surprising that distribution-like tensors appear frequently in the field.Let us mention three different instances when this occurs, where the relevant tensor is non-negative (or at least bounded from below): i) The Hessian of a convex function.As observed in [28,36], to a regular enough function f on an RCD space one can associate a suitable "distributional Hessian" that acts on sufficiently smooth vector fields: reformulating a bit the definition in [28], the Hessian of f is the map and it turns out, see [28,Theorem 7.1] that under suitable regularity assumptions on f we have f is κ-convex ⇔ Hess(f )(X, X) ≥ κ for every X, thus matching the Euclidean distributional characterization of convexity.ii) The Ricci curvature of an RCD(K, ∞) space.As discussed in [20], one can use the Bochner identity to define what the Ricci tensor is in this low regularity setting, by putting and it turns out that, see [20], in a suitable sense we have the space (X, d, m) is RCD(κ, ∞) ⇔ Ric(X, X) ≥ κ|X| 2 m for every X. 
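For the reader's orientation, the two defining identities alluded to in items i) and ii), which did not survive extraction above, can be written schematically as follows; this is a reconstruction following the conventions of [20] and [28], not a verbatim quote, and in particular the sign convention for the Hodge Laplacian $\Delta_H$ is assumed. For sufficiently regular $f,g_1,g_2,h$,
$$2\int_{\rm X} h\,{\rm Hess}(f)(\nabla g_1,\nabla g_2)\,{\rm d}\mathfrak m=-\int_{\rm X}\Big(\langle\nabla f,\nabla g_1\rangle\,{\rm div}(h\nabla g_2)+\langle\nabla f,\nabla g_2\rangle\,{\rm div}(h\nabla g_1)+h\,\big\langle\nabla f,\nabla\langle\nabla g_1,\nabla g_2\rangle\big\rangle\Big)\,{\rm d}\mathfrak m,$$
while for test vector fields $X,Y$ the Bochner-type definition of the Ricci tensor reads
$${\bf Ric}(X,Y):={\bf\Delta}\tfrac{\langle X,Y\rangle}{2}+\Big(\tfrac12\langle X^\flat,\Delta_H Y^\flat\rangle+\tfrac12\langle Y^\flat,\Delta_H X^\flat\rangle-\langle\nabla X,\nabla Y\rangle_{\rm HS}\Big)\mathfrak m.$$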
iii) The Sectional curvature of an RCD(K, ∞) space.As discussed in [21], one can give a meaning to the full Riemann curvature tensor on a generic RCD space.In general, one cannot expect any sort of regularity on it, as the lower bound on the Ricci, encoded in the RCD assumption, cannot give any information on the full Riemann tensor.Still, this opens up the possibility of saying when is that the "sectional curvature of an RCD space is bounded from below".The geometric significance of this statement is still unknown. In each of these cases, a better understanding of the relevant tensor is desirable and a first step in this direction is to comprehend whether the given bound from below forces it to be a measure-like object.To fix the ideas, let us discuss the case of the Ricci curvature: what one would like to know is whether the operator Ric described above can be represented via a sort of polar decomposition as Ric = ω |Ric|, (0.1) where |Ric| is a non-negative measure and ω is a tensor of norm 1 |Ric|-a.e., meaning that the identity Ric(X, Y ) = ω • (X ⊗ Y ) |Ric| holds as measures for any couple of sufficiently smooth vector fields X, Y .The main result of this manuscript is that, yes, a representation like (0.1) holds for the three tensors discussed above. Few important remarks are in order (we shall discuss the case of the Ricci curvature, but similar comments are in place for the Hessian and the sectional curvature): -A writing like that in the right hand side of (0.1) requires the tensor field ω to be |Ric|-well-defined.In this respect notice that on one side on RCD spaces tensor fields can be well-defined up to Cap-null sets, where Cap is the 2-capacity (in some sense, thanks to the fact that one can speak about Sobolev vector fields -see [16]). On the other hand, the mass measure |Ric| is absolutely continuous with respect to Cap (because the distributional definition of Ricci tensor is continuous on the space of Sobolev vector fields).The combination of these two facts makes it possible the writing in (0.1).This perfect matching between the regularity achievable by ω and the sets that can actually be charged by |Ric| is far from being a coincidence.-The construction of the polar decomposition as in (0.1) follows the same rough idea described at the beginning of the introduction: we would like to take a pointwise orthonormal base X 1 , . . 
., X n of sufficiently regular vector field and then study the real valued functionals ϕ → ´X ϕ dRic (X i , X j ).Clearly, even in a smooth Riemannian manifold one in general cannot find such a global orthonormal base, but a first problem we encounter here is that such bases only exists on suitable Borel sets A k ⊆ X (whose interior might in general be empty).This causes severe technical complications in handling the necessary localization arguments, see for instance the proof of Theorem 2.9.-Related to the above there is the fact that the mass measure |Ric| turns out to be a σ-finite Borel measure that in general is not Radon.More precisely, on the sets A k ⊆ X on which we have a pointwise orthonormal base, the restriction of |Ric| is finite (whence σ-finiteness).However, in general it might very well be that there is some point x ∈ X such that every neighbourhood of x encounters infinitely many of the A k 's.This happens even in very simple examples such that the tip of a cone, as it is known, see [15] and reference therein, that at the tip of a cone every sufficiently regular vector field must vanish.-Despite the above, for any couple of sufficiently regular vector fields X, Y we have w -Since |Ric| is not Radon, in constructing the representation (0.1), and more generally in understanding these distributional objects we have discussed, we cannot rely on the theory of local vector measures that we recently developed in [10].-The representation (0.1) marks a clear step forward in the understanding of the Ricci tensor on RCD spaces, as what was previously manageable only via integration by parts -and thus required regularity of the vector fields involved -now is realized as a 0th-order object and thus has a more pointwise meaning.For instance, it allows to quickly solve a problem that was left open in [20].The problem was as follows: suppose that X i , X, Y , i = 1, . . ., n, are smooth vector fields, that f i ∈ C b (X) and that i f i X i = X.Can we conclude that i f i Ric(X i , Y ) = Ric(X, Y )?One certainly expects the answer to be affirmative, but if the only definition of Ric involves integration by parts -as it was the case in [20] -then it is not clear how to conclude, given that in general f i X i is not regular enough to justify the necessary computations.On the other hand, the representation (0.1) immediately allows to positively answer the question. 1. Preliminaries 1.1.RCD spaces.In this note we are going to consider RCD spaces, which we now briefly introduce.An RCD(K, N ) space is an infinitesimally Hilbertian ( [18]) metric measure space (X, d, m) satisfying a lower Ricci curvature bound and an upper dimension bound (meaningful if N < ∞) in a synthetic sense according to [33,34,31], see [1,37,22] and references therein.We assume the reader to be familiar with this material.Whenever we write RCD(K, N ), we implicitly assume that N < ∞, unless otherwise stated. Also, we assume that the reader is familiar with the calculus developed on this kind of nonsmooth structures ( [18,20], see also [19,23]): in particular, we assume familiarity with Sobolev spaces (and heat flow), with the notions of (co)tangent module and its tensor and exterior products (see also Section 1.2.1 and 1.2.2), and with the notions of divergence, Laplacian, Hessian and covariant derivative, together with their properties. 
We give now our working definition for the space of test functions and test vector fields.Following [20,32] (with the additional request of a L ∞ bound on the Laplacian), we define the vector space of test functions on an RCD(K, ∞) space as and the vector space of test vector fields as To be precise, the original definition of TestV(X) given by the second named author was slightly different.However, when using test vector fields to define regular subsets of vector fields such as H 1,2 C (T X) and H 1,2 H (T X), the two definitions produce the same subspaces, see for example the proofs of [9,Lemma 3.3 and Lemma 3.4].The advantage of working with this slightly more general class lies in the fact that whereas the drawback is that for v ∈ TestV(X), in general we do not have div(v) ∈ L ∞ (m).Nevertheless, we are still going to need the classical definition of test vector fields (used, in particular in the references [20,21] for what concerns Ricci and Riemann tensors): we call such space V, i.e. The following calculus lemma will serve as a key tool in proving, in a certain sense, a strong locality property of some measures. Let then Proof.By [16, Lemma 2.5], ϕ n ∈ H 1,2 (X) for every n with In particular, (1.2) Notice that integrating by parts and using standard approximation arguments, taking into account (1.2) (which also gives the membership of the right hand sides to the relevant spaces) we have Now, by dominated convergence, ϕ n X → 0 in L 2 (T X) so that by the closure of the operators div and d (see [20,Theorem 3.5.2]), it holds that ϕ n X → 0 in W 1,2 H (T X).Remark 1.3.Inspecting the proof of Lemma 1.2, we see that if (X, d, m) is an RCD(K, ∞) space and X ∈ H 1,2 H (T X), then div X = 0 m-a.e. on {X = 0} and similarly dX = 0 m-a.e. on {X = 0}.1.2.Cap-modules.In this subsection we recall the basic theory of Cap-modules for RCD spaces.We assume familiarity with the definition of capacitary modules, quasi-continuous functions and vector fields and related material in [16].A summary of the material we will use can be found in [12,Section 1.3].For the reader's convenience, we write the results that we will need most frequently. First, we recall that exploiting Sobolev functions, we define the 2-capacity (to which we shall simply refer as capacity) of any set A ⊆ X as An important object will be the one of fine tangent module, as follows (QCR stands for "quasi continuous representative"). Uniqueness is intended up to unique isomorphism, this is to say that if another couple (L 0 Cap (T X) ′ , ∇′ ) satisfies the same properties, then there exists a unique module isomorphism ) is a Hilbert module that we call capacitary tangent module. Notice that we can, and will, extend the map QCR ( [16]) from H 1,2 (X) to S 2 (X) ∩ L ∞ (m) by a locality argument.Also, we often omit to write the map QCR.We define Test V(X) : We define also the vector subspace of quasi-continuous vector fields, QC(T X), as the closure of Test V(X) in L 0 Cap (T X).Recall now that as m ≪ Cap, we have a natural projection map ) denotes the Cap (resp.m) equivalence class of f .It turns out that Pr, restricted to the set of quasi-continuous functions, is injective ([16, Proposition 1.18]). We have the following projection map Pr, given by [16, Proposition 2.9 and Proposition 2.13], that plays the role of Pr on vector fields. Moreover, for every v ∈ L 0 Cap (T X), and Pr, when restricted to the set of quasi-continuous vector fields, is injective. 
Notice that Pr(Test V(X)) = TestV(X).When there is be no ambiguity, we omit to write the map Pr. Theorem 1.6 ([16, Theorem 2.14 and Proposition 2.13]).Let (X, d, m) be an RCD(K, ∞) space.Then there exists a unique map We will often omit to write the Q CR operator for simplicity of notation (but it will be clear from the context when we need the fine representative).This should cause no ambiguity thanks to the fact that (1.4)This can be proved easily by locality and using the fact that the continuity of the map QCR implies that QCR(g) Q CR(v) as above is quasi-continuous and the injectivity of the map Pr restricted the set of quasi-continuous vector fields yields the conclusion. The following theorem, that is [12, Section 1.3], will be crucial in the construction of modules tailored to particular measures (see [9,Theorem 3.10] for an explicit proof of this result). Theorem 1.7.Let (X, d, m) be a metric measure space and let µ be a Borel measure finite on balls such that µ ≪ Cap.Let also M be a L 0 (Cap)-normed L 0 (Cap)-module.Define the natural (continuous) projection We define an equivalence relation ∼ µ on M as v ∼ µ w if and only if |v − w| = 0 µ-a.e. Define the quotient module M 0 µ := M/∼ µ with the natural (continuous) projection πµ : M → M 0 µ .Then M 0 µ is a L 0 (µ)-normed L 0 (µ)-module, with the pointwise norm and product induced by the ones of M: more precisely, for every v ∈ M and g ∈ L 0 (Cap), (1.5) that is a L p (µ)-normed L ∞ (µ)-module.Moreover, if M is a Hilbert module, also M 0 µ and M 2 µ are Hilbert modules. Similarly as for Q CR, we often omit to write the πµ operator (and also the π µ operator) for simplicity of notation (but it will be clear from the context when we need the fine representative).Again, this should cause no ambiguity thanks to (1.4) and (1.5). The following lemma, that is [12,Lemma 2.7], provides us with the density of test vector fields in quotient tangent modules. Lemma 1.8.Let (X, d, m) be an RCD(K, ∞) space and let µ be a finite Borel measure such that µ ≪ Cap. In what follows, with a little abuse, we often write, for v ∈ L 0 Cap (T X), v ∈ D(div) if and only if Pr(v) ∈ D(div) and, if this is the case, div v = div( Pr(v)).Similar notation will be used for other operators acting on subspaces of L 0 (T X). Tensor product. In this subsection we study the tensor product of normed modules.We focus on Cap-modules, as in these spaces we are going to find the main objects of this note.Fix then n ∈ N, n ≥ 1.We assume that (X, d, m) is an RCD(K, ∞) space, even though this is clearly not always needed.Just for the sake of notation, we set Let now C ⊆ L 2 (T X) be a subspace.We define C ⊗n as the vector subspace of the Hilbert L 2 (m)-normed L ∞ (m)-module L 2 (T ⊗n X) that consists of finite sums of decomposable tensors of the type v 1 ⊗ • • • ⊗ v n where v 1 , . . 
., v n ∈ C, endowed with the structure of module (included the pointwise norm) induced by the one of L 2 (T ⊗n X).Notice that we can equivalently define C ⊗n as follows.First, we consider the multilinear map that factorizes to a well defined linear map and see that C ⊗n coincides with the image of this map.Notice that, unless we are in pathological cases, the map we have just defined is not injective.This is equivalent to the fact that not every map defined on C ⊗ alg R • • • ⊗ alg R C induces a map defined on C ⊗n , in general.Let now M be an Hilbert L 0 (Cap)-normed L 0 (Cap)-module.We define the L 0 (Cap)normed L 0 (Cap)-module M ⊗n repeating the construction done to define the tensor product of L 0 (m)-normed L 0 (m)-modules in [23, Subsection 3.2.2](originally of [20]), that is endowing the algebraic tensor product M with the pointwise Hilbert-Schmidt norm and then taking the completion with respect to the induced distance. If µ is a Borel measure finite on balls and such that µ ≪ Cap, we set where the right hand side is given by Theorem 1.7. Remark 1.9.Let µ be a Borel measure, finite on balls, such that µ ≪ Cap.Let also M be an Hilbert L 0 (Cap)-normed L 0 (Cap)-module.Then, using the notation of Theorem 1.7, we have a canonical isomorphism This isomorphism is obtained using the map induced by the well defined multilinear map and noticing that such map turns out to be an isometry with dense image between complete spaces.Therefore we also have the canonical inclusion In particular, with the obvious interpretation for L 0 Cap (T ⊗n X), where Now we consider TestV(X) ⊗n .Notice that TestV(X) ⊗n is a module over the ring S 2 (X) ∩ L ∞ (m) in the algebraic sense and, by Lemma 1.11 below, By the following lemma (with µ = m), TestV(X) ⊗n is dense in L 2 (T ⊗n X), in particular, TestV(X) ⊗n generates in the sense of modules L 2 (T ⊗2 X).The following lemma is proved with an approximation argument as in [23,Lemma 3.2.21],and is a generalization of Lemma 1.8. Lemma 1.10.Let (X, d, m) be an RCD(K, ∞) space and let µ be a finite Borel measure such that µ ≪ Cap.Then TestV(X) ⊗n is dense in . This is to say that |w| ∈ L p (µ) and we can find a sequence of tensors In this case, by dominated convergence, we see that w k → w in L p µ (T ⊗n X).As a consequence of this discussion, an orthonormalization procedure and a truncation argument, we see that we can reduce ourselves to the case w = w 1 ⊗ • • • ⊗ w n where w i ∈ L ∞ µ (T ⊗n X) for every i.Then we can conclude iterating Lemma 1.8. m) such that for every i = 1, . . ., m and j = 1, . . ., n it holds Following a standard argument as e.g. in the proof of [16,Lemma 2.5] it is enough to show that |∇|v| 2 | ≤ g v |v| m-a.e. for some g v ∈ L 2 (m).For Z ∈ L 0 (T X) we define ∇ Z v as in the discussion right above [20,Proposition 3.4.6],that is the unique vector field in L 0 (T X) such that for every Y ∈ L 0 (T X) Clearly, Now we have finished, as, by the arbitrariness of Z ∈ L 0 (T X), the equality above implies that We consider now the multilinear map and we notice that (the left hand side is well defined thanks to Lemma 1.11 -but as there is a squared norm here, this is indeed trivial) As usual, we often omit to write the maps Q CR and QCR. 
1.2.2.Exterior power.Much like the previous section dealt with tensor product, in this section we deal with exterior power (see [20]).As the arguments are mostly identical, we are going just to sketch the key ideas and we will assume familiarity with the related theory.We assume again that (X, d, m) is a RCD(K, ∞) space.Similarly to what done before, we set L 2 (T ∧n X) := L 2 (T X) ∧n and define, for C ⊆ L 2 (T X), C ∧n as before. Let now M be an Hilbert L 0 (Cap)-normed L 0 (Cap)-module.We define the L 0 (Cap)normed L 0 (Cap)-module M ∧n as the quotient of M ∧n with respect to the closure of the subspace generated by elements of the form v 1 ⊗ • • • ⊗ v n , where v 1 , . . ., v n ∈ M are such that v i = v j for at least two different indices i and j.This definition is the trivial adaptation of [20] to our context and it is possible to prove that M ∧n is indeed an L 0 (Cap)-normed L 0 (Cap)-module and that its scalar product is characterized by Similarly to what done before, if µ is a Borel measure finite on balls such that µ ≪ Cap, we set Remark 1.12.Now we adapt Remark 1.9.Let µ be a Borel measure, finite on balls, such that µ ≪ Cap.Let also M be an Hilbert L 0 (Cap)-normed L 0 (Cap)-module.Then, using the notation of Theorem 1.7, we have a canonical isomorphism This isomorphism is obtained using the map induced by the well defined multilinear map and noticing that such map turns out to be an isometry with dense image between complete spaces.Therefore we also have the canonical inclusion In particular, then, with the obvious interpretation for L 0 Cap (T ∧n X), where L p µ (T ∧n X) Now we consider TestV(X) ∧n , that is a module over the ring S 2 (X)∩L ∞ (m) in the algebraic sense and, by Lemma 1.14 below, As a consequence of of Lemma 1.10, we have the following result. The following result corresponds to Lemma 1.11. Proof.The proof is very similar to the one of Lemma 1.11, we simply sketch the key computation for the sake of completeness.Take as in the proof of Lemma 1.11.We will prove that and thus the proof will be concluded as in Lemma 1.11.We take Z ∈ L 0 (T X) and we compute (here S n denotes the symmetric group)  so that we have proved the claim. As before, we consider the multilinear map defined by (1.6) and we notice that (the left hand side is well defined thanks to Lemma 1.14 -but as there is a squared norm here, this is indeed trivial) Cap-a.e. (1.8) so that the map in (1.6) induces a map Q CR : As usual, we often omit to write the maps Q CR and QCR. Main Part 2.1.Decomposition of the tangent module.The following theorem provides us with a dimensional decomposition of the Cap-tangent module, along with an orthonormal basis made of "smooth" vector fields of the Cap-tangent module on every element of the induced partition.This will be the first step towards the construction of the most relevant objects of this note.Notice that one should not expect the relevant dimension to be unique: in a smooth manifold of dimension n with boundary, the Cap-tangent module sees the boundary, thus it has dimension n in the interior of the manifold and dimension n − 1 at the boundary.It is unclear if the situation on RCD spaces can be more complicated than that. Theorem 2.1 ([11, Theorem 3]).Let (X, d, m) an RCD(K, N ) space of essential dimension n.Then there exists a partition of X made of countably many bounded Borel sets {A k } k∈N such that for every k there exist n(k where, in particular, 2.2.Hessian. Convexity. 
In the following definition we restrict ourselves to the case of RCD spaces.This restriction is clearly unnecessary for items (1) and ( 2), however, we preferred this formulation for the sake of simplicity, taken into account that all the results of this note are in the framework of RCD spaces.Definition 2.2.Let (X, d, m) be an RCD(K, ∞) space and let f : X → (−∞, +∞].Let also κ ∈ R. We say that (1) f is weakly κ geodesically convex if for every x 0 , x 1 ∈ X, there exists a constant speed geodesic γ : [0, 1] → X joining x 0 to x 1 satisfying (2) f is strongly κ geodesically convex if for every x 0 , x 1 ∈ X, for every constant speed geodesic γ : [0, 1] → X joining x 0 to x 1 , (2.1) holds. X) and the space is locally compact, then item (3) of the definition above implies that (2.2) holds for every h ∈ S 2 (X) ∩ L ∞ (m) and g ∈ TestF(X).This follows from an approximation argument, taking into account also the existence of good cut-off functions as in [5, Lemma 6.7] together with the algebra property of test functions. Some implications among the various items of the previous definition have already been extensively studied in the literature, see e.g.[28,27,30,24,17,35] for similar statements.Notice that (2) ⇒ (1) is trivially satisfied in geodesic spaces.The implication (1) ⇒ (3), recalled in Proposition 2.5 below, is particularly important in the sequel, as it motivates Theorem 2.9, one of the main results of this note.For the proof of Proposition 2.5 we are going to follow the lines of the proof of [28,Theorem 7.1].As we are going to work with weaker regularity assumptions, we give the details anyway.Indeed, the fact that we do not assume f ∈ D(Hess) forces us to proceed through a delicate approximation argument.Finally, under additional regularity assumptions, e.g.(X, d, m) is a locally compact RCD(K, ∞) space and f ∈ TestF(X), it turns out that items (1), ( 2) and ( 3) are all equivalent (see [28] and [35]).The equivalence of these notions of convexity is expected to hold even under weaker assumptions on f (see, in this direction, Proposition 2.5) but this investigation (in particular (3) ⇒ (1)) is beyond the scope of this note. Remark 2.4.The implication (3) ⇒ (1) seems anything but trivial, if one does not assume that f ∈ D(Hess).Indeed, one could hope to follow [28] and start by proving that (X, d, e −f m) is RCD(K + κ, ∞) whenever (X, d, m) is RCD(K, ∞) and Hessf ≥ κ.In this context, the natural way to verify the RCD(K + κ, ∞) condition is via the Eulerian point of view, i.e. via the weak Bochner inequality.However, in order to so, we would want to exploit an approximation argument, to plug in the weak Bochner inequality for (X, d, m) and the fact that Hessf ≥ κ and such approximation argument seems to require that the heat flow on (X, d, e −f m) maps regular enough functions to Lipschitz functions, and we were not able to prove this fact (that we remark is linked with the RCD condition). Proposition 2.5.Let (X, d, m) be an RCD(K, ∞) space and let f ∈ H 1,2 loc (X) be a continuous and weakly κ geodesically convex function, for some κ ∈ R. Assume moreover that f is bounded from below and locally bounded from above, in the sense that f is bounded from above on every bounded subset of X.Then Hessf ≥ κ. Proof.As remarked above, we follow the proof of [28].Define m := e −f m.When we want to stress that an object is relative to the space (X, d, m), we add the symbol ˜. Step 2. 
Notice that, as f is locally bounded, [2, Lemma 4.11] implies that φ ∈ H 1,2 loc (X) if and only if φ ∈ H1,2 loc (X) and, if this is the case Also, polarizing, we obtain that the • product between gradients is independent of the space, so that we will drop the ˜on gradients.Moreover, if φ ∈ H 1,2 (X), then φ ∈ H1,2 (X) and, if φ ∈ TestF bs (X), then φ ∈ D( ∆) and ∆φ = ∆φ − ∇f • ∇φ. (2.3) By the equivalence result in [4] (see also [6,17]), we know that if k, g ∈ D( ∆) with ∆k ∈ L ∞ (m), ∆g ∈ H1,2 (X) and k ∈ L ∞ (m), k ≥ 0, By an approximation argument based on the mollified heat flow (for the space (X, d, m)) on g, we can use what we just proved to show that if g ∈ TestF bs (X) and k is as above, Then, with an approximation argument based on the mollified heat flow on k, we have that if g ∈ TestF bs (X) and k ∈ H 1,2 (X) ∩ L ∞ (m), it holds that We choose then k = he f to obtain (h ∈ TestF bs (X)), recalling (2.3), Step 3. Let now α > 0. We repeat the same computation of Step 2 but starting from the RCD(α 2 K, ∞) space (X, α −1 d, m) and the κ convex function α −2 f (of course, with respect to α −1 d) to obtain (all the differential operators are with respect to (X, d, m)) Dividing this inequality by α 2 and letting α ց 0 yields the claim. Measure valued Hessian. In this section we state and prove the first main result of this note, namely Theorem 2.9.More precisely, we show that convex functions have, in a certain sense, a measure valued Hessian.In the Euclidean space, this is an immediate consequence of Riesz's Theorem for positive functionals and it implies that gradients of convex functions are vector fields of bounded variation.Hence we have that Hessian measures are absolutely continuous with respect to Cap, and this is the case even on RCD spaces.This absolute continuity allows us to build the measure valued Hessian on RCD spaces as product of a Captensor field and a σ-finite measure that is absolutely continuous with respect to Cap.We remark that, as the decomposition of the Cap-tangent module given by Theorem 2.1 induces a decomposition of the space in Borel sets (not open ones), we are not able to prove that the total variation of the Hessian measure is a Radon measure. Before dealing with the main theorem of this section, we define when a H 1,2 (X) function has a measure valued Hessian (cf.[20, Definition 3.3.1])and study a couple of basic calculus properties of this newly defined notion.Definition 2.6.Let (X, d, m) be an RCD(K, ∞) space and f ∈ H 1,2 loc (X).We write f ∈ D(Hess), if there exists a σ-finite measure |Hessf | that satisfies |Hessf | ≪ Cap and a symmetric tensor field (2.4) Proposition 2.7.Let (X, d, m) be an RCD(K, ∞) space and let f ∈ D(Hess).Then the decomposition of Hessf is unique, in the sense that, adopting the same notation of Definition 2.6, the measure |Hessf | is unique and the tensor field ν f is unique, up to |Hessf |-a.e.equality. Using the by now classical calculus tools on RCD spaces, the following proposition easily follows, starting from (2.4). Now we show that convex functions (recall Definition 2.2) have measure valued Hessian.As a notation, here and below, a − := −a ∨ 0 for every a ∈ R. Theorem 2.9.Let (X, d, m) be an RCD(K, N ) space and f ∈ H 1,2 loc (X) satisfying Hessf ≥ κ, for some κ ∈ R.Then, f ∈ D(Hess), say Hessf = ν f |Hessf |.Moreover, we have that More precisely, we have the explicit bound, for every Proof.We divide the proof in several steps. 
Step 1.We define for h ∈ S 2 (X) ∩ L ∞ (m) with bounded support and and we write for simplicity G f (h, X) := G f (h, X, X).We shall frequently use the fact that for given f, h, Y as above H (T X)-norm on sets of vector fields with uniformly bounded L ∞ -norm, (2.7) and similarly for Y .Notice that G f (h, X, Y ) equals the right hand side of (2.4) for X, Y ∈ H 1,2 H (T X) ∩ L ∞ (T X) and h ∈ S 2 (X) ∩ L ∞ (m), as a consequence of a simple approximation argument on f .Indeed, as h has bounded support, a locality argument shows that it is not restrictive to assume also f ∈ H 1,2 (X), then we can approximate f in the H 1,2 (X) topology with functions in TestF(X) and see the equality of the two quantities.This shows also that, as Hessf ≥ κ, Step 2. By (2.8), the Riesz-Daniell-Stone Theorem yields that for every g ∈ TestF(X) there exists a unique Radon measure µ ∇g such that for every h ∈ LIP bs (X). (2.11) We show now that µ ∇g ≪ Cap.Being µ ∇g a Radon measure, it is enough to show that if K is a compact set such that Cap(K) = 0, then µ ∇g (K) = 0.As K is compact, we can find a sequence [7,Lemma 5.4] or [9,Lemma 2.3]).Also, it is easy to see that we can assume with no loss of generality also that {u n } n ⊆ LIP bs (X) have uniformly bounded support.Therefore, using (2.11), dominated convergence and (2.10), → 0, having used also that m(K)=0 in the last step.In particular, µ ∇g (K) = µ ∇g (K)−κ ´K |∇g| 2 dm and thus the above proves that µ ∇g ≪ Cap, as claimed. As µ ∇g ≪ Cap, using dominated convergence, we can show that where we implicitly take the quasi continuous representative of h.We define by polarization the signed Radon measure, absolutely continuous with respect to Cap, so that, by the properties of (X, Notice that the map (g 1 , g 2 ) → µ ∇g 1 ,∇g 2 is symmetric and R-bilinear by its very definition. Step 3. We show that for every (2.12) By dominated convergence and [9, Lemma 3.2], we see that it is enough to assume X ∈ TestV(X), say By the properties of the map (X, Y ) → G f (h, X, Y ) (in particular, recall (2.9)) we see that (2.12) will follow from It suffices then to show then that, as measures, m i,j=1 By dominated convergence and localizing, we further reduce to the case in which f i = c i ∈ R for every i = 1, . . ., m, this is to say, as measures, that follows by (2.11) with m i=1 c i g i in place of g.Step 4. Building upon (2.12) and arguing as in Step 2, we can define the Radon measure µ for every h ∈ LIP bs (X). More precisely, we first define µ X,X for X ∈ H 1,2 H (T X) ∩ L ∞ (T X) and then we define µ X,Y for By the properties of (X, Y ) → G f (h, X, Y ) (in particular, recall (2.9)) it follows that, if for some m and l = 1, 2, then we have that In particular, this definition is coherent with the one given in Step 2. Recall that, by (2.12), if Step 5. We use now Theorem 2.1 to take a partition {A k } k∈N and, for any k, an orthonormal basis of L 0 Cap (T X) on . Fix for the moment k and define, for i, j = 1, . . ., n(k), Notice that this is a good definition as Pr(v k i ) ∈ H 1,2 H (T X) for every i = 1, . . ., n(k) and that the measures above are finite signed measures absolutely continuous with respect to Cap. We define Finally, and that (2.4) holds.By polarization, there is no loss of generality in assuming that X = Y and we can also assume, by linearity, that h is non negative.Notice indeed that the right hand side of (2.4) is equal to G f (h, X, Y ) (recall Step 1) and hence is symmetric in X, Y . 
Consider the Borel partition {A k } k as in Theorem 2.1.Assume for the moment that for every k, for every h ∈ LIP bs (X), (notice that the restriction of |Hessf | to A k is the finite measure Hess k f , so the right hand side is well defined).Then, it holds that that yields local integrability.We can then compute, by dominated convergence and (2.14) (recall that h has bounded support), We show then (2.14).Fix k and recall the notation of Step 6.Notice that, considering the left hand side of (2.14), we have, by the very definition of where satisfies then X = X Cap-a.e. on A k .As we have reduced ourselves to check that µ X,X A k = µ X, X A k , taking into account also the bilinearity of the map Let {ϕ n } n ⊆ H 1,2 (X) be as in Lemma 1.2 for the vector field X ∈ H 1,2 H (T X) ∩ L ∞ (T X).We compute, if h ∈ LIP bs (X), by (2.9), By the very definition, ϕ n (x) ց χ {|X|=0} Cap-a.e. so that, by dominated convergence, On the other hand, as Step 7. We prove (2.5).By a locality argument, we reduce ourselves to the case v ∈ L ∞ Cap (T X).By density and dominated convergence, it is enough to show (2.5 that proves the claim. Step 8. We prove the last claim.Again, we assume with no loss of generality that X = Y .It is enough to show that if moreover f ∈ H 1,2 (X), then X ⊗ X • ν f ∈ L 1 (|Hessf |), then the rest will follow from dominated convergence.The integrability follows from (2.15) if we show that µ X,X is a finite signed measure.Inequality (2.13) implies that the measure µ X,X − κ|X| 2 m is non negative, but now, using an immediate approximation argument and monotone convergence together with (2.4), we see that it is also finite.As |X| 2 m is a finite measure, we see that µ X,X is a finite signed measure and that (2.6) holds. Ricci tensor. As done for the Hessian, we give a fine meaning the the Ricci tensor defined in [20].Namely, we represent the Ricci tensor as a product of a Cap-tensor field and a σ-finite measure that is absolutely continuous with respect to Cap.As in the proof of Theorem 2.9, we are going to use a version of Riesz representation Theorem for positive functional, this time leveraging on the bound from below for the Ricci tensor ensured, in a synthetic way, by the definition of the RCD condition. We recall now the distributional definition of the objects that we are going to need and, to this aim, we recall also that the definition of V is in (1.1). Step 1. Uniqueness follows as in the proof of Proposition 2.7, by a localized version of Lemma 1.10. Step 2. We remark that for every X, Y ∈ H 1,2 H (T X), it holds |Ric(X, Y )| ≪ Cap, as a an immediate consequence of (2.16) together with a density and continuity argument (notice that it is enough to show that Ric(X, Y )(K) = 0 whenever K is a compact set with Cap(K) = 0). Step 3. We proceed now as in Step 5 of the proof of Theorem 2.9.In particular, we use Theorem 2.1 to take a partition of X, {A k } k .We fix for the moment k and take, (following Theorem 2.1), an orthonormal basis of L We define Clearly, |Ric| is a σ-finite measure, |ω| = 1 |Ric|-a.e. and ω is symmetric. Step 4. 
Let X, Y ∈ H 1,2 H (T X).We verify that X ⊗ Y • ω ∈ L 1 (|Ric|) and that (2.18) holds.This will be similar to Step 6 of the proof of Theorem 2.9 and we keep the same notation.By polarization, there is no loss of generality in assuming that X = Y and it is enough to show that for every k, as measures, (notice that the left hand side of (2.21) is a finite signed measure).Assume for the moment that also X ∈ L ∞ (T X).Notice that (2.21) also yields integrability (still in the case X ∈ L ∞ (T X)), as it shows that (2.22) Also, by (2.22), we see that the additional assumption H (T X) topology.For example, we can define H (T X) topology thanks to the calculus rules of Lemma 1.1 and the computation Then, by the continuity of Ric and (2.22), the sequence {X n ⊗ X n • ω} n ⊆ L 1 (|Ric|) is a Cauchy sequence, whose limit coincides then with X ⊗ X • ω so that this implies the general case. We prove now (2.21), under the additional assumption X ∈ L ∞ (T X).We first remark that it holds f Ric(X, . This is a consequence of [20, Proposition 3.6.9]together with an approximation argument, see Lemma 1.1 (here we use that X ∈ L ∞ (T X)).Therefore, with the same computations of Step 6 of the proof of Theorem 2.9, we see that H (T X) by Lemma 1.1).We know that ˆX hϕ n dRic(X, Y ) = ˆX hdRic(ϕ n X, Y ).By Lemma 1.2 and the continuity of the map Ric, the right hand side of the equation above converges to 0, whereas the left hand side converges to (as in Step 6 of the proof of Theorem 2.9) ˆX hχ {|X|=0} dRic(X, Y ).This is to say that for every h ∈ LIP bs (X) ˆX hχ {|X|=0} dRic(X, Y ) = 0 which means that Ric(X, Y ) {|X| = 0} = 0, whence the claim. Step 5. Inequality (2.19) follows by an approximation argument as in Step 8 of the proof of Theorem 2.9, (2.18) and (2.17). At the very end of [20], it has been asked how one may enlarge the domain of definition of the map Ric, and, towards this extension, whether whenever X 1 , . . ., X n , Y ∈ H 1,2 H (T X) and f i ∈ C b (X).It seems that basic algebraic manipulations based on the formulas involving the Ricci curvature as in Theorem 2.10 do not imply this fact.However, exploiting Theorem 2.11, we immediately have an affirmative result to this question, at least in the finite dimensional case, and we record this result in the following proposition.More generally, Theorem 2.11 gives a natural way to enlarge the domain of definition of the map Ric.Proposition 2.12.Let (X, d, m) be an RCD(K, N ) space.Let X, Y, X 1 , . . ., X n ∈ H 1,2 H (T X) Notice also that an immediate consequence of Theorem 2.11 (the point is Step 4 of its proof, which relies on an approximation argument based on Lemma 1.2) is that for every X, Y ∈ H 1,2 H (T X), Ric(X, Y ) {|X| = 0} = 0, thus providing a different proof of the implication 3) ⇒ 1) of [25,Proposition 3.7] (see the comments at [25,Pag. 3 and Pag. 4]). Remark 2.13.Exploiting to the representation of Ric = ω|Ric| given by Theorem 2.11, we can easily give a meaning to the trace the "tensor" measure Ric.However we do not expect that the trace of this polar measure has a meaning to represent the scalar curvature, if one does not add artificial correction terms (cf. the characterization of the scalar curvature on smoothable Alexandrov spaces in [29]). 
First, already in the setting of a smooth weighted Riemannian manifold (M, d g , e −V Vol g ), Ric represents the modified Bakry-Émery N -Ricci curvature tensor, defined as (2.24) Nevertheless, even if we restrict ourselves to non-collapsed spaces ( [14]), which play the role of "unweighted" spaces, we can see that looking at the scalar curvature as trace of Ric is not yet meaningful: for example Ric vanishes on sets of 0 capacity, whereas the scalar curvature such behaviour is not expected (e.g. if we want to have an analogue of Gauss Theorem for a two dimensional cone we have to allow the scalar curvature to recognize the singularity at the tip of the cone). 2.4.Riemann tensor.As done for Hessian and Ricci tensor, now we provide a representation for the Riemann curvature tensor defined in [21] as the product of a Cap-tensor field and a σ-finite measure that is absolutely continuous with respect to Cap.In order to do so, we again employ Riesz's representation Theorem for positive functional, and hence we have to impose that the tensor representing the sectional curvature is bounded from below (hence, we will add the assumption of a bound on the distributional sectional curvature).Then, by standard algebra, we recover the full Riemann tensor out of the sectional curvatures. We follow [21] to define and R(X, Y, Z, W ) on an RCD(K, ∞) space (X, d, m).Even though we assume familiarity with [21], we recall briefly the (distributional) definitions.First, recall the definition of V in (1.1) .Then we have what follows. (1) Distributional covariant derivative.If X, Y ∈ L 2 (T X) with X ∈ D(div) and at least one of X, Y is in L ∞ (T X), then It is clear that the distributional covariant derivative and the distributional Lie bracket coincide with the covariant derivative ∇ X Y and the Lie bracket [X, Y ] := ∇ X Y −∇ Y X, whenever both the objects are defined.We are going to exploit this property throughout. Remark 2.14.We want to extend the definition of R(X, Y, Z, W ) to the case X, Y, Z, W ∈ TestV(X) do not necessarily belong to V. Clearly, as TestV(X) ) are still well defined.Also the third term ∇ [X,Y ] Z makes sense for this choice of vector fields.Notice that in [21], Following the same lines, we see that R(X, Y, Z, W ) makes sense for every X, Y, Z, W ∈ H 1,2 H (T X) ∩ L ∞ (T X) and then f ∈ S 2 (X) ∩ L ∞ (m).Remark 2.15.We do some trivial algebraic manipulations in order to deal with the quantity In the sequel, we will tacitly extend the definition of R(X, Y, Z, W ) to the case X, Y, Z, W ∈ H 1,2 H (T X) ∩ L ∞ (T X) and f ∈ S 2 (X) ∩ L ∞ (m), according to Remark 2.14 and Remark 2.15. For future reference, we recall here [21,Proposition 2.7].Notice that an immediate approximation argument (recall Remark 2.15) allows us to extend the claim to the slightly larger class of vector fields and functions that we are considering.Proposition 2.16 (Symmetries of the curvature).For any X, Y, Z, W ∈ H The following definition has been implicitly proposed in [21, Conjecture 1.1].Definition 2.17.Let (X, d, m) be an RCD(K, ∞) space.We say that (X, d, m) has sectional curvature bounded below by κ, for some κ ∈ R, if for every X, Y ∈ TestV(X) and f ∈ TestF Remark 2.18.It would be interesting to analyse the links between sectional curvature bounds in the sense of Definition 2.17 and in the sense of Alexandrov.This question is the content of [21, Conjecture 1.1]. 
Remark 2.19.It is well known that sectional curvatures (i.e.R(X, Y, Y, X)) are sufficient to identify a unique full Riemann curvature tensor R(X, Y, Z, W ). We write here an explicit expression, as we are going to need it in the sequel.For X, Y ∈ H The claim follows by algebraic manipulation, by Proposition 2.16.See e.g.[26,Lemma 4.3.3]for the expression. Theorem 2.20.Let (X, d, m) be an RCD(K, N ) space with sectional curvature bounded below by κ, for some κ ∈ R. Then there exists a unique σ-finite measure |Riem| that satisfies |Riem| ≪ Cap and a unique, up to |Riem|-a.e.equality, tensor field ν ∈ L 0 Cap (T ⊗4 X) with |ν| = 1 |Riem|-a.e.such that for every X, Y, Z, W ∈ H The tensor field ν has the following symmetries.Let I, J , K : L 0 Cap (T ⊗4 X) → L 0 Cap (T ⊗4 X) be characterized as follows Then, with respect to |Riem|-a.e.equality, Proof.We divide the proof in several steps. Step 1. Uniqueness follows as in the proof of Proposition 2.7, by a localized version of Lemma 1.10. Step thanks to an approximation argument that exploits the computations of Remark 2.15 and [9, Lemma 3.2].Therefore, Riesz--Daniell-Stone Theorem yields that for every X, Y ∈ H 1,2 H (T X)∩L ∞ (T X) there exists a unique Radon measure µ X,Y such that R(X, Y, Y, X)(f ) = ˆX f dµ X,Y for every f ∈ LIP bs (X). Clearly, for every X, Y ∈ H 1,2 H (T X) ∩ L ∞ (T X), µ X,Y ≥ κ|X ∧ Y | 2 dm and the assignment H 1,2 H (T X) ∩ L ∞ (T X) ∋ (X, Y ) → µ X,Y is symmetric, by the symmetries of R (Proposition 2.16).Also, we can prove, following Step 2 of the proof of Theorem 2.9 that µ X,Y ≪ Cap, so that (see Step 2 of the proof of Theorem 2.9 again and use an approximation argument based on the computations of Remark 2.15) for every f ∈ S 2 (X) ∩ L ∞ (m), where we implicitly take the quasi continuous representative of f .This expression, together with the positivity of µ X,Y − κ|X ∧ Y | 2 yields that µ X,Y is indeed a finite measure. Step 4. This is similar to Step 5 of the proof of Theorem 2.9, we use the same notation.Let then {A k } and {v k i } be as in Step 5 of the proof of Theorem 2.9, building upon Theorem 2.1.Fix for the moment k and define, for i, j, l, m = 1, . . ., n(k), Step 5. We claim that for every X, Y, Z, W ∈ H 1,2 H (T X) ∩ L ∞ (T X) and for every k.Recall that ´X f dµ X,Y,Z,W = R(X, Y, Z, W )(f ) for every f ∈ S 2 (X) ∩ L ∞ (m), so that the claim will imply (2.25) and also the fact that Fix k and take then X, Y, Z, W ∈ H 1,2 H (T X) ∩ L ∞ (T X).We write X := n(k) i=1 X i v k i , for X i := X • v k i and similarly for Y, W, Z. Notice X i , Y i , Z i , W i ∈ H 1,2 (X) ∩ L ∞ (m) and X, Ỹ , Z, W ∈ TestV(X).Notice that these newly defined functions and vector fields depend on k, but as we are working for a fixed k, we do not make this dependence explicit.We compute, on A k , i,j,l,m=1 i,j,l,m=1 where the next to last equality is due to (2.26).We verify now that µ X, Ỹ , Z, W A k = µ X,Y,Z,W A k , which will conclude the proof of the claim.This will be similar to Step 6 of the proof of Theorem 2.9.By multi-linearity and Proposition 2.16, it is enough to show that for every X, Y, Z, W ∈ H 1,2 H (T X) ∩ L ∞ (T X), then µ X,Y,Z,W {|X| = 0} = 0. We take {ϕ n } n be as in Lemma 1.2 for the vector field X ∈ H 1,2 H (T X) ∩ L ∞ (T X).We take also h ∈ LIP bs (X) and we compute (recall Lemma 1.1) ˆX hϕ n dµ X,Y,Z,W = ˆX hdµ ϕnX,Y,Z,W = R(ϕ n X, Y, Z, W )(h). 
By Lemma 1.2 and the expression for the map R in Remark 2.15, the right hand side of the equation above converges to 0, whereas the left hand side converges to ˆX hχ {|X|=0} dµ X,Y,Z,W . Step 6.By approximation (Lemma 1.10), it is enough to show the claim for X, Y ∈ TestV(X). Then the claim follows from (2.25) and the assumption on the bound from below for the sectional curvature. Remark 2.21.Notice that, thanks to its symmetries, the tensor field ν of Theorem 2.20 can be seen as an element of L 0 Cap (T X) ∧2 ⊗2 . Remark 2.22.Comparing the main results of Section 2.3 and Section 2.4, we may wonder whether Ric is linked to the trace of R. By what remarked in Remark 2.13, we see that this question makes sense only on non-collapsed spaces.However, the non-smooth structure of the space, in particular, the lack of a third order calculus and charts defined on open sets, prevent us to give an easy proof of this fact.
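A note on a display that did not survive extraction: Definition 2.2 above refers to an inequality (2.1) that is missing from this text. In the standard formulation of weak κ-geodesic convexity used in the cited literature, the missing display is the following inequality along the geodesic γ; this is stated here as an assumption about what (2.1) contains, not as a quotation from the paper.

```latex
% Assumed content of (2.1): weak \kappa-geodesic convexity of f along \gamma
f(\gamma_t) \;\le\; (1-t)\, f(\gamma_0) \;+\; t\, f(\gamma_1) \;-\; \frac{\kappa}{2}\, t(1-t)\, \mathsf{d}^2(\gamma_0,\gamma_1),
\qquad \text{for every } t \in [0,1].
```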
12,520.2
2023-10-11T00:00:00.000
[ "Mathematics" ]
Comparison of Nested PCR and Conventional Analysis of Plasmodium Parasites in Kano, Nigeria Plasmodium identification represents the crucial factor in malaria diagnosis and treatment across developing countries. Conventional microscopy and rapid diagnostic kits have been extensively applied towards human malaria diagnosis. Recombinant DNA techniques have also been applied to malaria diagnosis as well as to species-specific identification using the Plasmodium 18S rRNA gene. This study was undertaken amongst patients attending the Murtala Mohammed Specialist Hospital, Kano. Blood samples were collected from 350 malaria-suspected patients. Microscopic analysis via Giemsa staining revealed that 220 patients were positive for malaria. RDT analysis showed that 248 test samples were positive for Plasmodium infection. DNA extracted from the blood samples was analyzed by nested PCR to amplify the 18S ssrRNA Plasmodium gene with genus- and species-specific primers rPLU1/5, rPLU3/4, rVIV1/2, rFAL1/2, rMAL1/2 and rOVA1/2. The data showed that 58.64% of specimens testing positive by microscopy were false positives, and 60.62% of positives obtained using RDTs were false positives, in comparison to nPCR, which showed that only 91 out of 350 patients were infected with Plasmodium falciparum, representing 26% of tested specimens. Comparative analysis of nPCR against microscopy showed that the sensitivity and positive predictive values of the nPCR were 100 and 41.36%, respectively, while against RDTs they were 100 and 39.38%, respectively. nPCR was determined to be more sensitive and specific than either microscopy or RDTs. This study revealed that accurate diagnosis of malaria by nPCR is essential in malaria-prone regions of Nigeria and that nPCR should be applied routinely in laboratory studies. Introduction Malaria is a major disease, caused by parasites of the Plasmodium genus, which plagues mankind with over 200 million cases reported annually [1]. Reports indicate that the scourge of malaria claims between 438,000 and 655,000 lives per year, over 80% of the victims being children between 1 and 5 years of age, the majority of them across sub-Saharan Africa [1][2][3]. Data reveal that in Nigeria the high occurrence of malaria, particularly in the northern region of the country, is elicited mainly by one of the four species of plasmodia, Plasmodium falciparum [4]. Other known members of this protozoan parasite family include Plasmodium malariae, Plasmodium ovale and Plasmodium vivax, which collectively cause malaria in humans, with P. falciparum responsible for most of the morbidity and mortality [5]. The number of cases remains high despite anti-malarial medication having been commercially available for decades. Strategies employed to avert the spread of malaria, including increased use of indoor residual spraying (IRS), application of insecticide-treated nets (ITN), combination therapies based on artemisinin, phyto-medicines and nutraceutical approaches drawn from folklore, as well as the use of rapid diagnostic tests (RDT), have been embraced by affected regions but have not eliminated the fatality rates associated with the disease [3], [6]. So far, early detection of asymptomatic infections presents the best possible approach to reduce and prevent the transmission of malaria, which would otherwise go unnoticed [7].
Proper and rapid clinical identification of the species of Plasmodium is another crucial area in the fight against malaria. Studies show that in endemic regions some affected individuals exhibit mixed infections stemming from two or more Plasmodium species, and in some cases components of these mixed infections, P. malariae and P. ovale, are overlooked [8][9]. Even with the use of RDTs, clinicians in developing countries still experience difficulty in the detection of mixed infections, resulting in ineffective treatment modalities that give rise to mortality in severe cases [10]. The most common of the conventional methods used in the diagnosis of malaria is light microscopic examination of thick or thin blood smears stained with Romanowsky stain. This procedure is, however, labour intensive and requires experts in clinical parasitology, in addition to being time consuming. A quicker modern alternative involves the use of flow cytometric techniques or immuno-chromatographic approaches whereby detection is confirmed via antigen-antibody interaction [11][12][13]. In both approaches, clinicians are capable of detecting 50-100 parasites/µL of infected blood sample. The success of the latter approach is, however, dependent on low levels of parasitemia and is specific for only the Plasmodium falciparum species, negating the others [14]. In endemic regions where the pool of parasitemia is predominantly high and the possibility of mixed infections is high, these techniques can be ineffective for proper identification of the disease-causing agent, leading to false-negative results [15]. As such, highly endemic areas like Nigeria require a simple, species-sensitive method for the early detection of malaria. Recombinant DNA technologies have provided the ideal platform for the identification and distinction of Plasmodium species, which in turn enhances diagnosis of even low-density parasitemia that is not picked up by conventional light microscopy [16][17]. The paradigm shift towards molecular diagnostic approaches is driven by the ease of detecting parasites in a microlitre of blood irrespective of whether the sample contains a single species or multiple species, through the use of gene-specific primers for targeting [18][19][20][21]. Despite such reports, not much has been covered in terms of genomic determination of the Plasmodium species infecting patients who visit general and specialist hospitals in northern Nigeria. The aim of this study was therefore to assess the possibility of using gene-specific primers for Plasmodium towards malaria diagnosis from blood samples amongst patients in northern Nigeria. Study Area Ethical approval to conduct the study was obtained from the Kano State Ministry of Health Ethical Committee before sample collection from patients. Informed consent was also sought from willing patients and their guardians at the general outpatient department (GOPD) of Murtala Mohammed Specialist Hospital, Kano State, Nigeria. Blood Collection and Screening About 10 ml of venous blood was collected from each study subject by venipuncture into ethylenediaminetetraacetate (EDTA) tubes. Screening for the malaria parasite was done using a rapid diagnostic kit (SD BIOLINE Malaria Ag RDT) in accordance with the manufacturer's instructions. This kit is used for the detection of specific antigens of the four Plasmodium species.
5 µl of blood was drawn from each sample using a capillary pipette and added into the round sample well touching the sample pad of the test kit. About 4 drops of assay diluent were added into the assay diluent well and the kit was allowed to stand for 12-15 minutes. The development of a red band in both the control and test regions served as a positive test, and the absence of a red colour on the test band signified a negative result. Molecular Analysis Template DNA was obtained from blood samples using the QIAamp Blood Kit (Cat. No. 51106; Qiagen Inc., USA) according to the manufacturer's instructions. Species diagnosis was achieved through amplification of the 18S rRNA genes of Plasmodium species using the rPLU1 (5'-TCAAAGATTAAGCCATGCAAGTGA-3') and rPLU5 (5'-CCTGTTGTTGCCTTAAACTCC-3') primers for the first nested PCR [22]. In the first PCR, 2 µl of template DNA (corresponding to approximately 0.25 to 0.5 µl of blood) was added to a 20 µl PCR mixture that consisted of 0.4 µM of each universal Plasmodium genus-specific primer rPLU1 (forward primer) and rPLU5 (reverse primer), 200 µM of each deoxynucleoside triphosphate, 25 mM MgCl2, 1X PCR Gold Buffer II (50 mM KCl, 15 mM Tris-HCl, pH 8.0), and 0.25 U AmpliTaq Gold DNA polymerase. The PCR was run at 94°C for 4 min and then 35 cycles of 94°C for 30 s, 55°C for 1 min, and 72°C for 1 min, followed by a final extension at 72°C for 5 min. The first PCR product was diluted 20-fold in sterile water. 1 µL of this solution was used for the second nested PCR, performed at 94°C for 4 min and then 35 cycles of 94°C for 30 s, 58°C for 1 min and 72°C for 1 min, followed by a final extension at 72°C for 5 min with the Plasmodium genus-specific forward primer rPLU3 (5'-TTTTTATAAGGATAACTACGGAAAAGCTGT-3') and reverse primer rPLU4 (5'-TACCCGTCATAGCCATGTTAGGCCAATACC-3'). Results The age distribution, number of people examined per category and percentage positive of the participants revealed that the highest number of infected patients were between 21 and 30 years old (Table 1). The data also revealed that the highest incidence rate of malaria was amongst the youngest participants: 47.4% of the 19 patients aged 10 years and below were positive, while participants aged 11-20 years and 21-30 years each showed a positivity rate of 32.5%. The older age groups displayed the lowest incidences of infection per head count (Table 1). Analysis of the DNA amplicons using the Plasmodium genus primers revealed that 91 of the 350 tested specimens produced bands with a size of 240 base pairs, viewed against a 50 base pair ladder. A depiction of the first 19 specimens, some of which tested positive for Plasmodium while others were negative, is displayed (Figure 1). Analysis of the PCR products using Plasmodium falciparum gene-specific primers revealed that all of the 91 specimens that tested positive using the Plasmodium genus-specific primers formed bands of 206 base pairs, viewed against a 50 base pair ladder (Figure 2). The figure depicts the first 14 Plasmodium falciparum-positive specimens of the total 91 positive specimens analyzed. Analysis of the PCR products using Plasmodium malariae and P. ovale gene-specific primers (rMAL1/rMAL2 and rOVA1/rOVA2) did not produce any bands, and these species were therefore considered negative (Figure 3). In the present study, 62.86% (220) of the specimens were detected as Plasmodium-positive by microscopy (Table 2).
On the other hand, of all the 350 patients, 70.86% (248) were determined as Plasmodium-positive using RDTs (Table 3). However, using both genus- and species-specific primers via nested PCR (nPCR), only 91 (26%) of the total 350 patient samples tested positive for malaria caused by Plasmodium falciparum (Tables 2 and 3). Discussion The prevalence of Plasmodium infection in northern Nigeria is high and widely distributed across sexes and socio-economic strata in the country. Recent statistics from a neighbouring state, Kaduna, revealed that the prevalence of the malaria parasite amongst a similar number and distribution of participants stood at 35.7% [23]. In this study, the prevalence of the parasite was 26% (Table 1). A significant association was observed (p = 0.043) with respect to age, with most of the positive cases recorded among the under-age children and the elderly respondents that participated in this study. This could be attributed to the fact that the immune levels of the under-age categories are either low or underdeveloped [24]. This finding corroborates data obtained by Aduragbenro and colleagues [25], who reported that 25-30% of all mortalities caused by malaria in southern Nigeria were among children. The findings of this study suggest that the lower-risk groups (31-40, 41-50, 61 and above) are less likely to be exposed to stagnant bodies of water, the ideal environment for mosquito propagation, as could be the case for participants between ages 1-10 years who tested positive for malaria. Furthermore, the data suggest that acquired immunity may also be responsible for the gradually reducing incidence of infection (Table 1). The symptomatic manifestations, including chills, fever, sweating, nausea and general weakness, presented by the remaining 259 participants in this study were presumably due to some other infectious agent. Plasmodium analysis using the four species-specific primer pairs revealed that only the Plasmodium falciparum primers displayed amplification, indicating that this species alone was prevalent in the region, representing 26% (91 out of 350) of the study population (Figures 1-3). Such figures are within the reported statistics which revealed that approximately a third of the populace of north-central Nigeria is diagnosed with Plasmodium falciparum infection (36.6%), particularly among children [23]. Data obtained in this study suggest that microscopic examination of blood smears, previously considered a good standard for malaria detection, is inadequate for the diagnosis of malaria in this region (Table 2). The study showed that 58.64% of the specimens identified as positive for malaria via the microscopic technique were in fact negative, or false positives, when compared against the nested PCR. This finding reaffirms the conclusion of other researchers that microscopic detection of Plasmodium infection is restricted to certain limits of parasitemia, whereby if the latter falls below about 100 parasites/µL, accurate determination via microscopy becomes a problem [26]. This study also showed that while conventional microscopy displayed a specificity, or ability to correctly detect patients who are actually not infected with malaria, of 37.14%, nested PCR revealed a specificity of 74% (Table 2). The sensitivity and positive predictive values of nPCR when microscopy was used as the reference standard were 100% and 41.36%, respectively. In the same manner, the RDT kit utilized in this study revealed a 29.14% specificity value (Table 3).
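To make the agreement figures just quoted concrete, the sketch below recomputes sensitivity and positive predictive value from the raw counts reported in the text (350 samples, 220 microscopy-positive, 91 nPCR-positive). It assumes, as the text implies but does not tabulate, that every nPCR-positive sample was also positive by microscopy; the function names are our own, and the RDT comparison of Table 3 can be treated in the same way with its own counts.

```python
def sensitivity_and_ppv(tp, fp, fn):
    """Sensitivity = TP/(TP+FN); positive predictive value = TP/(TP+FP)."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

micro_pos = 220      # positives reported by Giemsa microscopy
npcr_pos = 91        # positives confirmed by nested PCR (taken as the reference)

# Assumption: all 91 nPCR positives are contained in the 220 microscopy positives.
tp = npcr_pos                 # positive by both methods
fp = micro_pos - npcr_pos     # microscopy-positive but nPCR-negative (the 129 false positives)
fn = 0                        # nPCR positives missed

sens, ppv = sensitivity_and_ppv(tp, fp, fn)
print(f"sensitivity = {sens:.2%}, PPV = {ppv:.2%}")   # 100.00% and 41.36%, as quoted above
```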
When RDTs were used as the reference standard, nPCR exhibited sensitivity and positive predictive values of 100% and 39.38%, respectively. This ability of PCR to detect low-grade parasitemia more accurately and with a higher degree of sensitivity than microscopy and RDTs supports the reports of other researchers [22], [27][28][29][30]. The high false-negative rate obtained using RDTs has been reported previously, with the suggestion that all RDT-negative results be confirmed by microscopic analysis [31]. A study showed that the critical success behind the identification of Plasmodium in the blood stems from the primers used, which amplify the dihydrofolate reductase (DHFR) and cytochrome oxidase III genes peculiar to this genus [21], [32]. Precise detection should thus be achieved via recombinant DNA techniques. The relatively low level of Plasmodium falciparum infection detected via PCR (26%) in comparison to that obtained by microscopy (62.86%) and RDTs (70.86%) may account for the dwindling potency of several over-the-counter antimalarial drugs in the country. It is conceivable that the misuse of anti-malarial drugs due to false-positive results obtained from conventional microscopic and RDT diagnosis may be the culprit behind the emergence of drug-resistant strains of Plasmodium. In the quest to combat Plasmodium infections leading to the development of malaria, highly sensitive and specific techniques like PCR should become the prerequisite method for diagnosis, particularly in endemic areas like northern Nigeria. Conclusion This study underlined that nested PCR is a first-rate technique and a must-have in all medical centres across Nigeria for the accurate determination of Plasmodium infection. The results also indicated that the misdiagnosis associated with microscopy (129 false positives), previously considered the gold standard, and with RDTs (157 false positives) does not occur with PCR-based diagnosis (91 positives). The misuse of anti-malarial drugs that triggered the advent of drug-resistant Plasmodium species in Nigeria can be attributed to the false positives recorded by microscopic and RDT analysis. Microscopic and RDT analysis of the 350 blood specimens suggested that 220 and 248 samples, respectively, were infected with the Plasmodium pathogen. The most vulnerable group for malaria infection in the study area were children between the ages of 1 and 10 years, with 47% of the tested patients within that age bracket positive. Regular screening via PCR is essential for the under-aged population for early detection and treatment of the species-specific Plasmodium pathogen in order to curb the rate of mortality across northern Nigeria. Discrepancies between PCR and microscopy results in the diagnosis of malaria could be due to the latter reporting artifacts as parasites, giving false-positive results, as well as to inadequate staining reagents and probably human error.
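As a compact reference, the two-round cycling conditions and genus-level primers described in the Methods can be collected into a small data structure such as the one below. This is only a restatement of the parameters given in the text, with dictionary keys of our own choosing; nothing here is a new or modified protocol.

```python
# Nested PCR conditions for the 18S rRNA Plasmodium genus assay, as described in the Methods.
NESTED_PCR = {
    "nest1": {
        "primers": {"rPLU1": "TCAAAGATTAAGCCATGCAAGTGA",
                    "rPLU5": "CCTGTTGTTGCCTTAAACTCC"},
        "initial_denaturation": ("94C", "4 min"),
        "cycles": 35,
        "per_cycle": [("94C", "30 s"), ("55C", "1 min"), ("72C", "1 min")],
        "final_extension": ("72C", "5 min"),
        "template": "2 uL genomic DNA in a 20 uL reaction",
    },
    "nest2": {
        "primers": {"rPLU3": "TTTTTATAAGGATAACTACGGAAAAGCTGT",
                    "rPLU4": "TACCCGTCATAGCCATGTTAGGCCAATACC"},
        "initial_denaturation": ("94C", "4 min"),
        "cycles": 35,
        "per_cycle": [("94C", "30 s"), ("58C", "1 min"), ("72C", "1 min")],
        "final_extension": ("72C", "5 min"),
        "template": "1 uL of the 20-fold diluted nest-1 product",
    },
}
```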
3,558.8
2017-09-28T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
A New Inequality for Frames in Hilbert Spaces A frame for a Hilbert space firstly emerged in the work on nonharmonic Fourier series owing to Duffin and Schaeffer [1], which has made great contributions to various fields because of its nice properties; the reader can examine the papers [2–12] for background and details of frames. Balan et al. in [13] showed us a surprising inequality when they further investigated the Parseval frame identity derived from their study on efficient algorithms for signal reconstruction, which was then extended to general frames and alternate dual frames by Găvruţa [14]. In this paper, we establish a new inequality for frames in Hilbert spaces, where a scalar and a bounded linear operator with respect to two Bessel sequences are involved, and it is shown that our result can lead to the corresponding results of Balan et al. and Găvruţa. The notations H, IdH, and J are reserved, respectively, for a complex Hilbert space, the identity operator on H, and an index set which is finite or countable. The algebra of all bounded linear operators on H is designated as B(H). One says that a family {fj}j∈J of vectors in H is a frame, if there are two positive constants C, D > 0 satisfying (1). Introduction A frame for a Hilbert space firstly emerged in the work on nonharmonic Fourier series owing to Duffin and Schaeffer [1], which has made great contributions to various fields because of its nice properties; the reader can examine the papers [2][3][4][5][6][7][8][9][10][11][12] for background and details of frames. Balan et al. in [13] showed us a surprising inequality when they further investigated the Parseval frame identity derived from their study on efficient algorithms for signal reconstruction, which was then extended to general frames and alternate dual frames by Găvruţa [14]. In this paper, we establish a new inequality for frames in Hilbert spaces, where a scalar and a bounded linear operator with respect to two Bessel sequences are involved, and it is shown that our result can lead to the corresponding results of Balan et al. and Găvruţa. The notations H, IdH, and J are reserved, respectively, for a complex Hilbert space, the identity operator on H, and an index set which is finite or countable. The algebra of all bounded linear operators on H is designated as B(H). One says that a family {fj}j∈J of vectors in H is a frame, if there are two positive constants C, D > 0 satisfying (1). The frame {fj}j∈J is said to be Parseval if C = D = 1. If {fj}j∈J satisfies the inequality to the right in (1), we say that {fj}j∈J is a Bessel sequence for H. For a given frame F = {fj}j∈J, the frame operator SF, a positive, self-adjoint, and invertible operator on H, is defined in the usual way; from this we see that the reconstruction formula holds, where the involved frame {f̃j = SF−1 fj}j∈J is said to be the canonical dual of {fj}j∈J. For any I ⊂ J, denote Ic = J \ I. A positive, bounded linear, and self-adjoint operator induced by I and the frame F = {fj}j∈J is given below. The Main Results We need the following simple result on operators to present our main result. Lemma 1. Suppose that two operators in B(H) sum to a third operator in B(H). Then for each λ ∈ [0, 1] the operator identity stated in the lemma holds. From this fact, and taking into account the assumed relation, we arrive at the relation stated in the lemma. We can immediately get the following result, obtained by Poria in [15], when the third operator in Lemma 1 is taken to be IdH. A similar discussion yields the companion estimate. We also have the corresponding identity. Thus the result follows from Theorem 3. Let {fj}j∈J be a Parseval frame for H; then SF = IdH.
Thus for any I ⊂ J, Similarly we have This together with Corollary 4 leads to a result as follows. Proof. Since { } ∈J is an alternate dual frame of { } ∈J , On the one hand we have On the other hand we have By Theorem 3 the conclusion follows. As a matter of fact, we can establish a more general inequality for alternate dual frames than that shown in Corollary 6. This completes the proof. Data Availability No data were used to support this study. Conflicts of Interest The author declares that he has no conflicts of interest.
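As a postscript for readers less familiar with frames, the small numerical sketch below illustrates the frame operator, the canonical dual and the Parseval case used above. The three unit vectors (the so-called Mercedes-Benz frame in R²) are our own illustrative choice, not an example from the paper; for them the frame operator is (3/2)·Id, so rescaling by sqrt(2/3) produces a Parseval frame.

```python
import numpy as np

# Mercedes-Benz frame in R^2: three unit vectors at 120 degrees (rows of F).
F = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])

# Frame operator S f = sum_j <f, f_j> f_j, i.e. S = F^T F.
S = F.T @ F
print(S)                              # (3/2) * identity: a tight frame

# Canonical dual frame and reconstruction f = sum_j <f, S^{-1} f_j> f_j.
S_inv = np.linalg.inv(S)
f = np.array([1.7, -0.3])
coeffs = F @ (S_inv @ f)              # frame coefficients against the canonical dual
print(np.allclose(coeffs @ F, f))     # True: exact reconstruction

# Rescaled by sqrt(2/3) the frame is Parseval: sum_j |<f, g_j>|^2 = ||f||^2.
G = np.sqrt(2 / 3) * F
print(np.isclose(np.sum((G @ f) ** 2), np.dot(f, f)))   # True
```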
1,051.4
2018-10-03T00:00:00.000
[ "Mathematics" ]
Steam reheater with helical tube bundle for wet steam turbine. Since the steam heat exchangers used in the steam cycle of Russian nuclear power stations were designed while knowledge about the separation and heat exchange processes was limited, deviations between their empirical and theoretical characteristics occur. This limitation also led to the application of heating pipes of simple straight shape rather than curved ones. This study explores a steam heat exchanger with helical heating pipes. It was shown that the model can work stably within a range of parameters simulating the working conditions of the moisture separator and steam reheater at the Leningrad nuclear power plant. The experiment included processing of pure water steam as well as a mixture of steam and nitrogen. A relationship between the empirical heat transfer coefficient and the steam mass flow rate was obtained. It was noted that the presence of incondensable gas does not significantly affect the heat transfer from coils processing high-pressure steam. Introduction Moisture separators and steam reheaters (MSR) are widely used at Russian nuclear power plants (NPP) with VVER and RBMK reactor types [1][2][3]. These installations process steam between the high-pressure stage and the low-pressure stage of the turbine. MSR improve the thermal parameters of the steam in all modes of the reactor and even in the mode with natural circulation of the coolant [4,5]. Application of such equipment pursues several goals, such as improving the efficiency of the steam cycle and avoiding degradation of turbine parts. Erosion may occur when steam leaving the high-pressure stage becomes saturated with moisture that may erode the blades of the following turbine stages [6][7][8]. Furthermore, water droplets continuously contacting the warm surface of the heating pipes induce cyclic stress, which results in pipe ruptures. Notably, pipes of horizontal helical shape are more prone to breaks than pipes of vertical, straight shape. This circumstance led to limited usage of MSR with helical coils and to the widespread adoption of models with straight heating elements. At present, different constructions of MSR are in use. Since the theories of steam separation and heat exchange were not well developed at the time when the MSR were engineered, various deviations between simulated and real characteristics of MSR can be observed. For instance: -an uneven profile of the steam leaving the separator, which results in moisture transfer to the reheater and consequently in non-attainment of the design temperature value [9]; -the appearance of overheated and overcooled zones within the heating pipe bundle because of uneven flow distribution and differing lengths of the steam supply pipes; this circumstance creates undesirable stresses and generates condensation areas that initiate vibration. Condensation from a two-phase flow is more complex than that from a single phase because the second phase hinders heat transfer between the media, as shown in [10]. Therefore, complex models such as the Hammouda model are used instead of ordinary heat exchange formulas for estimating the heat transfer coefficient of the pipe bundle [11]. Recent studies show satisfactory precision of this approach [12][13][14].
In addition to these deviations, other disadvantages were discovered: -high friction loss along the steam duct due to the arrangement of the moisture separator above the reheater (which is common for the VVER-440 project); -the appearance of incondensable gases (ICG) in the heating pipes, which diminishes the heat exchange between the heating steam and the pipe surface [15][16]; -the vertical arrangement of heating pipes in the MSR bundle (which is typical for the VVER-1000 project), which has a lower heat transfer coefficient than the horizontal arrangement. Furthermore, many studies have been conducted within the last decades aiming at heat transfer intensification [17][18][19]. In order to remedy the disadvantages of existing constructions, several modernizations were made. For instance, after redesigning the separating bundles of the MSR model "SPP-500-1" at the Leningrad NPP, located near Saint Petersburg, Russia, several goals were achieved. First, the uneven moisture distribution was eliminated and, second, the moisture concentration reached its calculated range [20][21]. Aiming to overcome the disadvantages inherent to MSR with a vertical arrangement of heating pipes, two experimental heat exchangers were designed. The first unit, which consisted of two full-scale helical tubes and a shell, demonstrated that horizontal coils can operate steadily. Research objectives Since the first unit showed satisfactory results, it was suggested to increase the number of heat exchanging pipes in order to simulate the construction of the MSR heating section. It was also suggested to expand the experimental objectives as follows: -to confirm the multicoil unit's ability to perform stably; -to estimate the heat transfer coefficient in the nominal mode and its deviation from the simulated coefficient; -to study how the presence of ICG in the steam (0.02% by mass) affects the heat transfer and the hydrodynamic stability of the unit (simulating working conditions occurring at an NPP with a single-loop reactor type). Materials and methods The multicoil experimental unit consists of 10 double-twisted helical tubes mounted inside a cylindrical shell. A 3D model of the unit, designed by the authors, is illustrated in Fig. 1. The unit has two steam loops: low pressure (LP) and high pressure (HP). From the inlet nozzle, the HP vapour is distributed between two vertical steam channels and then spread among the 10 helical pipes. The steam transfers its heat to the pipe walls and condenses partially. The steam and condensate mixture flows towards one of two vertical condensate channels; it then fills the condensate tank and drains after cooling. When the LP steam reaches the unit's inlet, it flows through the gap between a flat round sheet and the cylindrical shell. Then it goes around several helical coils and warms up. Finally, the reheated steam is supplied back. Each heating pipe is plain and circular in cross-section, with a diameter of Ø18 mm, a wall thickness of 2 mm and a length of 8.5 m. The tubes are twisted into flat spirals (Archimedean spirals) with a pitch of 20 mm. Their diameters are about Ø1.0 m (internal) and Ø1.7 m (external). The coils have a square layout with an inline pattern in the front section. The total surface area of the 20 spirals is F ≈ 46.2 m². The unit's cylindrical shell, with a diameter of 2100 mm, is covered by insulation made from clay bricks placed with small gaps between them and surrounded by an external metal cylindrical shell. All unit parts were made of the carbon steel alloy 'steel 20'. During the experiment, the unit's performance was tested within the range of parameters shown in Table 1 below.
The theoretical linear heat transfer coefficient Ktheor, W/m²K, can be obtained from the relationship: Results 1. The empirical heat transfer coefficient Kemp, W/m²K, was estimated while the steam mass flow rates of both the HP and the LP loops were varied over a wide range. The result of the estimation can be seen in Fig. 2. As can be noted from Fig. 2, the heat transfer coefficient rises steadily with increasing steam mass flow rate. Although the heat transfer coefficients of the 10-coil model and the 2-coil model follow a similar pattern, the gap between the coefficients of the two models is ~30%. The relationship between the experimentally estimated heat transfer coefficient and the mass flow rate for the 10-coil model can be expressed by the following equation: Analysis of equation (1) confirms that the heat transfer coefficient rises steadily with increasing steam mass flow rate and that the gap between the coefficients of the 10-coil and 2-coil models is ~30%. 2. An 18% difference between the theoretical and empirical values of the heat transfer coefficient is observed, as shown in Fig. 3. Fig. 3. The ratio of the empirical to the theoretical heat transfer coefficient versus the low-pressure steam mass flow rate: -experiment with pure vapour; -experiment with a mixture of vapour and nitrogen; approximation (10-coil model). Discussion The marked deviation between the heat transfer coefficients of the 10-coil model and the 2-coil model, which can be seen in Fig. 2, is caused by: -better precision in the manufacturing of the 10-coil model; -the steadier flow which the multicoil bundle assures in contrast to the 2-coil model. As can be inferred from Fig. 2, there is a positive relationship between the empirical heat transfer coefficient and the HP mass flow rate. The densest cluster of points on the left of the figure shows the boundaries of the nominal mode. The cluster centre indicates the coefficient value in the nominal mode, which is Kemp = 250 W/m²K when γWlp = 37.5 kg/m². The theoretical values of the linear heat transfer coefficient on the inner side are larger than those on the outer side. Thus, the overall heat transfer coefficient of the model mainly depends on the factors affecting condensation, such as the position of the coils in the model's space, the medium velocity, the inner and outer diameters of the pipe, the overall length of the coil and the number of loops, the inner surface roughness and others [24]. When the experiment was completed, the empirical heat transfer coefficients were benchmarked against the theoretically computed values. As can be seen from Fig. 3, the empirical values lag behind the theoretical ones because of: -the presence of baffles, which partly obscure the coil surface but are needed for fixing the coils; -the poor heat exchange of the coils located at the top and at the bottom of the bundle, due to imperfect steam distribution determined by the shell and plate geometry. The absence of an impact of incondensable gas on the heat transfer coefficient, which can be observed in Fig. 2 and Fig. 3, can be explained by the high pressure of the HP steam. This finding matches [25], although discharge of incondensable gases from the model is suggested in [16]. Conclusion As a result of the study it was found that: -the multicoil model can work stably within a wide range of parameters and modes corresponding to those of the moisture separator and reheater (MSR) installed at the Leningrad NPP; -a 3D model of the experimental unit was developed; -the empirical heat transfer coefficient lags behind the theoretical one.
In the nominal mode its value is 250 W/m²K; -incondensable gas does not significantly hinder the heat transfer from the coil surface.
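For readers who wish to reproduce a theoretical estimate of the kind the empirical values are compared against above, the sketch below evaluates the standard series-resistance formula for the overall heat transfer coefficient of a clean Ø18×2 mm tube. The film coefficients and wall conductivity are illustrative placeholders rather than values from the paper, so the printed number is only indicative and should not be read as the paper's Ktheor.

```python
import math

def overall_htc_outer(h_in, h_out, d_in, d_out, k_wall):
    """Overall heat transfer coefficient referred to the outer surface, W/(m^2*K).

    Series of three thermal resistances: inner film, conduction through the
    cylindrical wall, outer film (textbook formula, no fouling terms).
    """
    r_inner = d_out / (d_in * h_in)                          # inner convective film
    r_wall = d_out * math.log(d_out / d_in) / (2.0 * k_wall) # wall conduction
    r_outer = 1.0 / h_out                                    # outer convective film
    return 1.0 / (r_inner + r_wall + r_outer)

# Geometry from the paper: plain tube, outer diameter 18 mm, wall thickness 2 mm.
d_out, d_in = 0.018, 0.014
# Placeholder assumptions: condensing HP steam inside, LP steam heated outside,
# carbon-steel wall conductivity of about 50 W/(m*K).
h_in, h_out, k_wall = 6000.0, 300.0, 50.0

print(f"K_outer ~ {overall_htc_outer(h_in, h_out, d_in, d_out, k_wall):.0f} W/(m^2*K)")
```

With these placeholder film coefficients the result lands in the same range as the reported nominal Kemp of about 250 W/m²K, but that rough agreement should not be taken as a validation of the experiment.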
2,305.8
2020-05-01T00:00:00.000
[ "Engineering", "Physics" ]
An Original Geometric Programming Problem Algorithm to Solve Two Coefficients Sensitivity Analysis Problem statement: It has been noticed that Dinkel and Kochenberger developed a sensitivity procedure for Posynomial Geometric Programming Problems based on making small changes in one coefficient. Approach: This study presented an original algorithm for finding the ranging analysis while studying the effect of perturbations in the original coefficients without resolving the problem; the proposed procedure handles changes in two coefficients simultaneously. We also developed an incremental strategy in order to make suitable comparisons. Results: Comparisons were made between the results gained from the sensitivity analysis approach and the incremental analysis approach. Conclusion: For the standard Geometric Programming Problem, we obtained an original algorithm, for the first time, by changing two coefficients simultaneously in the objective function. INTRODUCTION This study deals with sensitivity analysis in the case of less-than-type inequalities. Techniques designed to study the effects of small changes in the input variables on the optimal solution of an optimization problem, without having to solve the entire problem again and again, are known in the literature as sensitivity analysis techniques [1]. Dinkel and Kochenberger studied the effect of changing coefficients separately on the optimal solution [2,4]. MATERIALS AND METHODS The mathematical formulation of the sensitivity analysis for posynomials (polynomials with positive coefficients) is discussed in the research of Dinkel et al. [3] as follows: Theorem 1: Suppose that the primal geometric program has d > 0 and rank(aij) = m. If the solution to the dual geometric program has δ* > 0 and if the Jacobian matrix J(δ) with components is: The major restriction of this result, from an applications point of view, is that there are no inactive primal constraints at the optimal solution (δ*i > 0 for all i). Thus it is assumed that the problem has been reformulated, if necessary, to meet this restriction. For differential changes dci that maintain the positivity conditions on all dual variables, the new dual solution is estimated as: where dδi is given by (3). Once the dual solution is known, the estimate of the new primal solution is computed as: and no is the number of terms in the primal objective function. Theorem 2: Suppose the primal GPP has d > 0 and let b(j), j = 1, …, m, be linearly independent. If the submatrix with components bi(j), i = 1, …, no and j = 1, …, d, has rank d, then J(δ), given by (1), is nonsingular for each δ > 0. Since we are interested in other than differential changes, we will interpret dν/ν and dci/ci as rates of change [3]. That is: An original GPP algorithm: Before we make some observations on the new original procedure, let us consider the outline of this algorithm: Step 1: Put δi + dδi = 0 as equations in the two variables ∆1 and ∆2, where i = 1, 2, …, n and n is the number of dual variables. Step 2: Calculate the cofactors of ∆1 and ∆2 in the equations obtained in Step 1; we note that the sign of the cofactor of ∆1 is opposite to the sign of the cofactor of ∆2 for each i = 1, 2, …, n.
Step 3: Categorize these equations into two groups: • The first group contains the positive cofactors of ∆1 and the negative cofactors of ∆2. • The second group contains the negative cofactors of ∆1 and the positive cofactors of ∆2. Step 4: From the first group, calculate the lower bound of ∆1 and the upper bound of ∆2, while the upper bound of ∆1 and the lower bound of ∆2 are calculated from the second group. Step 5: Since our search concerns the range of any two coefficients of the objective function changed simultaneously, any small change in the lower bound of ∆1 will affect the upper bound of ∆2; similarly, the upper bound of ∆1 and the lower bound of ∆2 will be affected. This connection allows us to construct the cross-shape in Fig. 1. Step 7: Determine the pieces of those lines between the intersection points and study all points on these pieces to answer the key question: at which point on the pieces of the first and second groups do we find max ∆1 with min ∆2 simultaneously? Step 8: After finding those points, apply the following rule: • The upper bound on ∆1 is the minimum of ∆1 > 0 over those i for which (14) < 0 and (13) is satisfied. If ∆1 < 0, evaluating (13) for those i for which (14) > 0, the lower bound on ∆1 is the maximum such ∆1 [1], taking into account observations (a), (b) and (c) in Note 2. Step 9: End. Some theoretical observations: Note 1: • If we attempt to change the upper bounds of ∆1 and ∆2 simultaneously, or the lower bounds of ∆1 and ∆2, this will shift the cross-shape right or left, respectively. • Because we consider the change in two coefficients, this yields a two-dimensional space in which ∆1 is the horizontal axis and ∆2 is the vertical axis. The equations δi + dδi = 0 are straight lines in the (∆1, ∆2) plane. Note 2: (a) We suggest that the lower bound on ∆1 should not exceed the negative value of c1, in order to maintain the posynomial nature; likewise for ∆2. (b) We apply the same steps to the bounds of ∆2, replacing (A) by (B). (c) When changing c1 and c2 simultaneously we must note that the changes have opposite signs (the cases ∆1 > 0 → ∆2 > 0 and ∆1 < 0 → ∆2 < 0 are outside our ranges since they contradict the conditions of the problem). Note 3: • The above algorithm is originally designed by us; as numerical evidence we present the results in Tables 1-4, which were verified using our programs written in Matlab. • If we try to change three coefficients simultaneously, this requires studying a three-dimensional space; this is not the subject of the present research but is a good field for future study. Here the degree of difficulty is d = 9 − 6 − 1 = 2. The dual objective function is: We have: Let: Evaluating (A) and (B) for i = 1, 2, …, 9 we get Table 1. Substituting these values in (13) and solving the following nine optimization problems: The solutions of these problems can be tabulated as in Table 2.
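Since the displayed formulas (3), (13) and (14) did not survive extraction, the Python sketch below only mirrors the logic of Steps 1-4 and 8: each dual variable must stay positive after the perturbation, the rows are split into the two groups according to the sign of the cofactor of ∆1, and the extreme admissible values are read off. All coefficients here are made-up placeholders, not the data of the worked example, and the full cross-shape analysis of Steps 5-7 is not reproduced.

```python
import numpy as np

# Schematic constraints delta_i + a_i*D1 + b_i*D2 >= 0, where a_i, b_i stand in for the
# cofactors of Delta_1 and Delta_2 (Step 2: they carry opposite signs row by row).
delta = np.array([0.40, 0.25, 0.20, 0.15])   # optimal dual solution, all > 0
a = np.array([0.8, -0.5, 0.6, -0.9])          # cofactors of Delta_1 (placeholders)
b = np.array([-0.6, 0.7, -0.5, 1.1])          # cofactors of Delta_2, opposite signs

group1 = a > 0    # Step 3: positive cofactor of Delta_1, negative cofactor of Delta_2
group2 = a < 0    # Step 3: negative cofactor of Delta_1, positive cofactor of Delta_2

# Step 4 (with the other increment held at zero): intercepts of the lines with the axes.
lower_D1 = np.max(-delta[group1] / a[group1])   # most restrictive negative value
upper_D1 = np.min(-delta[group2] / a[group2])   # most restrictive positive value
lower_D2 = np.max(-delta[group2] / b[group2])
upper_D2 = np.min(-delta[group1] / b[group1])

# Note 2(a): a coefficient may not decrease by more than its own value.
c1, c2 = 1.0, 2.0                               # placeholder objective coefficients
lower_D1, lower_D2 = max(lower_D1, -c1), max(lower_D2, -c2)

print(f"Delta_1 in [{lower_D1:.3f}, {upper_D1:.3f}], Delta_2 in [{lower_D2:.3f}, {upper_D2:.3f}]")
```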
1,612.8
2009-06-30T00:00:00.000
[ "Mathematics" ]
Observability and controllability analysis of blood flow network In this paper, we consider the initial-boundary value problem of a binary bifurcation model of the human arterial system. Firstly, we obtain a new pressure coupling condition at the junction based on the mass and energy conservation law. Then, we prove that the linearization system is interior well-posed and $L^2$ well-posed by using the semigroup theory of bounded linear operators. Further, by a complete spectral analysis for the system operator, we prove the completeness and Riesz basis property of the (generalized) eigenvectors of the system operator. Finally, we present some results on the boundary exact controllability and the boundary exact observability for the system. 1. Introduction. In recent years, many doctors and researchers have devoted themselves to the prevention and treatment of cardiovascular diseases. For the blood flow in a single vessel shown as Figure 1(a), many models were used to obtain the physical property of the blood vessel and the interaction between blood flow and vessel in the previous literatures (e.g., see [1,3]). And some mathematical issues and the well posedness analysis of these multiscale models were presented (see, [2,4,5]). In fact, the arterial circulation is often regarded as a network of compliant vessels. Wave propagation in the arterial tree is very important to the understanding of arterial circulation, especially at bifurcation (see, [6,8,10,11]). Medical experiments and technology (see, e.g. [7,12]) also were devoted to the study and found that geometry played a key role in determining the nature of haemodynamic patterns at the human carotid arterial bifurcation. In this paper, we consider a new problem. The mathematical models previously used can be regarded as an initial and boundary value problem of partial differential equations. The solution including its numerical solution makes sense only when the initial data and boundary value are given. In the practical situation, we have no initial state. In most cases, we have some measure-datum. Therefore, how to find some property of the blood flow network from measurement, or the observability of the system briefly, is an interesting study. In the present paper, we will pay our attention to the observability and controllability of a simple vascular bifurcation system shown as Figure 1(b). CHUN ZONG AND GEN QI XU A(x,t) X (a) As usual, we describe every vessel by the following one-dimensional quasilinear system: that describes conservation of mass and balance of axial momentum in terms of the cross-section area A(x, t), internal pressure over the cross-section P (x, t), average velocity u(x, t) and the flow rate Q(x, t) = A(x, t)u(x, t), where η stands for the kinetic energy coefficient, x denotes the axial direction, ρ is the blood density and K is related to the viscosity assumed in the vessel (see, e.g., [4,5,6]). To simplify the model, we introduce the following constitutive law for the pressure to system (1) (see, e.g., [4,13,14]): where A 0 , E, h and σ are the cross-section constant at rest (u = 0), the Young modulus, the thickness and Poisson coefficient of the vessel wall, respectively. This expression entails that the pressure is a linear function of the vessel radius (see, e.g., [13,15]). Using the pressure law (2), we get And we choose P and Q as variables. 
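Since the displayed constitutive law (2) and the associated wave-speed expression did not survive extraction, the sketch below uses the algebraic form commonly adopted with this one-dimensional model, P = P_ext + β(√A − √A0) with β = √π E h /((1 − σ²) A0); this is an assumption about what (2) states, not a quotation, and the numerical parameters are illustrative rather than taken from the paper.

```python
import math

def beta_coefficient(E, h, sigma, A0):
    """Elasticity coefficient of the tube law P = P_ext + beta*(sqrt(A) - sqrt(A0))."""
    return math.sqrt(math.pi) * E * h / ((1.0 - sigma**2) * A0)

def pulse_wave_speed(E, h, sigma, A0, rho):
    """Linearized wave speed c0 = sqrt(beta*sqrt(A0)/(2*rho)) around (A0, u = 0)."""
    return math.sqrt(beta_coefficient(E, h, sigma, A0) * math.sqrt(A0) / (2.0 * rho))

# Illustrative parameters for a large artery (placeholders, not values from the paper):
E = 4.0e5                     # Young modulus, Pa
h = 1.0e-3                    # wall thickness, m
sigma = 0.5                   # Poisson coefficient
rho = 1050.0                  # blood density, kg/m^3
A0 = math.pi * 0.01 ** 2      # reference cross-section for a 1 cm radius vessel, m^2

print(f"c0 ~ {pulse_wave_speed(E, h, sigma, A0, rho):.2f} m/s")   # a few metres per second
```

With this convention the expression reduces to the classical Moens-Korteweg speed c0² = E h / (2 ρ R0 (1 − σ²)), which indeed gives values of a few metres per second for large arteries.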
By linearizing the system (1) around the constant state (A, u) = (A_0, 0), we obtain a linearized model of the single vessel as follows. As pointed out in [6], the linearized model is reasonable, as it can reproduce most of the essential features of the blood flow; in particular, it is used to model the whole systemic tree. For the binary vascular bifurcation (see Figure 1(b)) under consideration, let P_i and Q_i be the pressure and flow rate, where i = 1 denotes the first branch and i = 2, 3 the binary branches. Without loss of generality, we can assume that P_i and Q_i are real measurable functions on the interval [0, 1] and that the direction of increasing parameter coincides with that of the blood flow for i ∈ {1, 2, 3}. For the connective condition at the bifurcation, the mass balance condition implies that we can choose a constant α ∈ (0, 1) as the mass partition coefficient (see, e.g., [6,9]). Then the flow rates satisfy the conditions Q_2(0, t) = αQ_1(1, t), Q_3(0, t) = (1 − α)Q_1(1, t). Thus the model under consideration can be described as follows. In [3,14], the energy functional (5) is defined, which provides energy estimates for the linearized system, just as the nonlinear entropy function guarantees the stability of the solution to the nonlinear system. Taking the derivative of (5) with respect to t and using the equations (4), we get an identity in which the term P_1(0, t)Q_1(0, t) is the input energy, the terms P_2(1, t)Q_2(1, t) and P_3(1, t)Q_3(1, t) are the output energy, and the remaining term is the energy dissipation rate due to the blood viscosity. Supposing that there is no energy loss at the bifurcation, we find the pressure relation (7). It is worth pointing out that the result (7) is different from that used before (see, e.g., [6,8,10,11]). Based on the flow rate distribution and the pressure coupling continuity condition, we can model the arteries at the bifurcation. Furthermore, supplementing the initial and boundary conditions, we get a complete description of a one-dimensional distributed parameter model of the binary vascular network, where u_1(t) is the input flow rate and u_2(t), u_3(t) are the output pressures, which are regarded as the control variables. Moreover, we have observation data on the pressure input P_1(0, t) and the flow rate outputs Q_2(1, t) and Q_3(1, t), and we set these as the outputs. So far we have obtained a complete control-observation system described by (8) and (9). The rest of this paper is organized as follows. In Section 2, we formulate the system (8) in a Hilbert space H and then prove the interior well-posedness and L^2 well-posedness of the network system by using the semigroup theory of bounded linear operators. In Section 3, we carry out a complete spectral analysis for the system operator. We prove that the spectrum consists of isolated eigenvalues and is a union of finitely many separated sets located in a strip parallel to the imaginary axis. In particular, we obtain some results on the multiplicity and separability of the eigenvalues. In Section 4, we present the completeness and Riesz basis property of the (generalized) eigenvectors of the system operator A. Finally, in Section 5, we discuss the exact controllability and exact observability of the system and give the main result in Theorem 5.3. 2. Well-posedness. In this section, we study the well-posedness of the linearized blood flow network (8). At first we reformulate system (8) in an appropriate Hilbert state space. For the sake of simplicity, we introduce some notations used later.
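Before setting up the state space, it may help to record how the no-energy-loss assumption, combined with the flow-rate split, leads to a pressure coupling of the form (7); the following algebra is a reconstruction up to notation, not the authors' exact derivation. Balancing the energy flux at the bifurcation gives
\[
P_{1}(1,t)\,Q_{1}(1,t) \;=\; P_{2}(0,t)\,Q_{2}(0,t) + P_{3}(0,t)\,Q_{3}(0,t)
\;=\; \bigl[\alpha P_{2}(0,t) + (1-\alpha)P_{3}(0,t)\bigr]\,Q_{1}(1,t),
\]
so that
\[
P_{1}(1,t) \;=\; \alpha\,P_{2}(0,t) + (1-\alpha)\,P_{3}(0,t),
\]
which indeed differs from the classical pressure continuity condition \(P_{1}(1,t) = P_{2}(0,t) = P_{3}(0,t)\) used in earlier works.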
Set 3 × 3 diagonal matrices and equip the state space with an inner product; obviously, H is a Hilbert space. Define the system operator A in H by (11). It is easy to check that A is a closed and densely defined linear operator in H. A simple calculation gives the dual operator A* of A. We take the control function space as L^2_loc(R^+, U) = L^2_loc(R^+, C^3). For any T > 0 and for U(t) = (u_1(t), u_2(t), u_3(t)) ∈ L^2_loc(R^+, U), its local norm is defined accordingly. Furthermore, we define the control input operator B : U = C^3 → H_{−1}, where H_{−1} = (D(A), ||R(λ, A)·||) with λ a resolvent point of the system operator A, and δ(x), δ(x − 1) are Dirac delta functions. With the help of the notations above, we can rewrite (8) as an abstract evolution equation in H, where U(t) = (u_1(t), u_2(t), u_3(t)), X(t) = (P(x, t), Q(x, t))^T. We take the observation space as . For each T > 0 and for Y(t) = (y_1(t), y_2(t), y_3(t)) ∈ L^2_loc(R^+, Y), the local norm is defined accordingly. The observation operator C : D(A) → C^3 is defined as above; clearly, C has an extension to the continuous function space. Thus we obtain the abstract form (15) of the control-observation system (8) and (9) in the Hilbert space H. 2.1. Interior well-posedness of system (15). The interior well-posedness of a system means that, when U(t) ≡ 0, the Cauchy problem has a unique solution for each initial X_0 ∈ D(A) and the solution depends continuously on the initial data. In other words, A generates a strongly continuous semigroup of bounded linear operators (briefly, a C_0 semigroup). In this subsection, we mainly discuss the interior well-posedness of the system (13) or (15). Firstly, we have the following lemmas. Lemma 2.1. Let H and A be defined by (10) and (11), respectively. Then A is dissipative in H. Lemma 2.2. Let H and A be defined by (10) and (11), respectively. Then 0 ∈ ρ(A). Furthermore, A^{−1} is compact on H, and hence the spectrum of A consists of isolated eigenvalues of finite multiplicity, i.e., σ(A) = σ_p(A). Proof. For any fixed (u(x), v(x))^T ∈ H, we consider the solvability of the equations (16). Solving the differential equations above, we find the formal solutions; inserting them into the boundary conditions yields a linear system. A direct calculation shows that (I − C)^{−1} = I + C and (I − C^T)^{−1} = I + C^T. Thus, inserting these into the formal solutions, we obtain a unique solution pair to (16). The closed operator theorem asserts that 0 ∈ ρ(A). By the expression (17), (f, g) = A^{−1}(u, v) is given by an integral operator with continuous kernel, and so A^{−1} is compact on H. Thus, the spectrum of A consists of isolated eigenvalues of finite multiplicity (see [16]) and σ(A) = σ_p(A). The proof is then complete. Lemma 2.1 and Lemma 2.2, together with the Lumer-Phillips Theorem in [17], assert the following result. Theorem 2.3. Let H and A be defined by (10) and (11), respectively. Then A generates a C_0 semigroup of contractions on H, and hence the system (15) is interior well-posed. 2.2. L^2 well-posedness of system (15). L^2 well-posedness of a system means that if the system has local L^2 input, then its output (observation) is also locally L^2. Sometimes it is also called the exterior well-posedness of the system. In this subsection, we shall discuss the L^2 well-posedness of the system (15). We begin by introducing the following lemma, which gives a sufficient and necessary condition for L^2 well-posedness of a system (see [18, Section 4, pp. 281-284]). Lemma 2.4.
Let X, U and Y be Hilbert spaces, and let L^2_loc(R^+, U) and L^2_loc(R^+, Y) be the abstract function spaces consisting of all locally square integrable functions. Suppose that A generates a C_0 semigroup T(t), t ≥ 0. Let B be a linear operator from U to X_{−1} and let C be a linear operator from D(A) to Y. Then the corresponding system is L^2 well-posed if and only if, for any given t > 0, there exists a constant M_t > 0, depending on t, such that the corresponding input-output estimate holds. L^2 well-posedness implies that the control operator B and the observation operator C are admissible for the C_0 semigroup T = (T(t))_{t>0}. That B is an admissible control operator for T means that, for all t > 0 and all u ∈ L^2_loc(R^+; U), the mild solution to (13) is given by x(t) = T(t)x_0 + ∫_0^t T(t − s)Bu(s) ds; that C is an admissible observation operator for T means that, for all x_0 ∈ D(A), the mapping x_0 → CT(t)x_0 can be extended to a bounded mapping from X to L^2_loc(R^+; Y) (see [19, Chapter 4, pp. 128-131]). Using the multiplier method, we can prove the L^2 well-posedness of the control-observation system (15). Since the proof of Theorem 2.5 involves long and complicated calculations, we postpone the details to Appendix A. 3. Spectral analysis of A. In this section, we shall discuss the spectral properties of the system operator and its eigenvectors. Firstly, we observe that D(A) = D(A*) (see (11) and (12)), and for (f, g) ∈ D(A) we can define two operators A_0 and A_1 in H. Obviously, the following result is true. Theorem 3.1. Let A_0 and A_1 be defined by (18) and (19), respectively. Then A_0 is a skew-adjoint operator and A_1 is a bounded linear operator on H; in particular, A = A_0 + A_1. Since A_0 is a skew-adjoint operator, for any λ ∉ iR we have ||R(λ, A_0)|| ≤ 1/|Re λ|. So when |Re λ| > ||A_1||, we have λ ∈ ρ(A). From Lemma 2.1 we know that σ(A) ⊂ {λ ∈ C : Re λ ≤ 0}. Therefore, we have the following result. 3.1. Eigenvalues of A. In this subsection, we shall determine the distribution of the spectrum σ(A). In Lemma 2.2 we have proved that σ(A) = σ_p(A); therefore, we only need to discuss the eigenvalue problem of A. Let λ ∈ C be an eigenvalue of A and (P, Q)^T the corresponding eigenvector. In what follows, we calculate det(D(λ)). Since e^{xB} is a diagonal matrix, we can write D(λ) as a product of two matrices. Obviously, det(D(λ)) = 0 if and only if det(d(λ)) = 0. From this we obtain an explicit expression for det(d(λ)). Clearly, the spectrum of A is determined by det(d(λ)) = 0. When λ → +∞, it holds that e^{−B(λ)} → 0, and hence the asymptotic form follows. 3.2. Eigenvectors. In this subsection, we shall discuss the eigenvectors of A, which is equivalent to determining the vectors P(0) and Q(0) from the equations (22). If λ ∈ C satisfies det d(λ) = 0, then (26) has nonzero solutions. In what follows, we classify four cases to determine the solution, according to whether or not λ makes d_6(λ) = 0 and d_7(λ) = 0, respectively. 3.2.1. Case 1: d_6(λ) = 0 and d_7(λ) = 0. In this case, according to (26) we have the corresponding expressions. Substituting them into the first equation in (26) leads to an identity, which means that P_1(0) is an arbitrary constant. Thus the nonzero solution of (26) and the vector (Q(0), P(0))^T are determined. Since there is only one arbitrary parameter ξ, the eigenspace of A corresponding to λ is one-dimensional (see Eq. (20)). If A_10 = A_20, a complicated calculation shows that the function equations have a solution only if all parameters satisfy the following equality; otherwise, the function equations have no solution.
From the above we get that m and n satisfy a compatibility condition; if this condition is fulfilled, then the corresponding expression holds. According to the discussions of Case 2 and Case 3, we can assume d_4(λ) = 0, d_5(λ) = 0. From (26) we obtain the corresponding solution. The above discussions show that the following result is true. Theorem 3.4. Let H and A be defined by (10) and (11), respectively. Then for each λ ∈ σ(A), the corresponding eigenspace N(λI − A) is one-dimensional. Therefore, the eigenvector (P(x, λ), Q(x, λ))^T can be rewritten accordingly. 3.3. Multiplicity and separability of eigenvalues. In this subsection, we shall discuss the algebraic multiplicity and separability of the eigenvalues of A. In the sequel, we simply denote G(λ) = det(d(λ)). At first, we have the following result. Theorem 3.5. Let H and A be defined by (10) and (11), respectively. Then for any λ ∈ ρ(A) and (f, g) ∈ H, we have the resolvent expression, where E(λ)(f, g) is an exponential-type H-valued entire function with respect to λ. This result follows by direct verification (we omit the details). As a direct consequence of Theorem 3.5, we have the following corollary. Corollary 1. Let H and A be defined by (10) and (11), respectively. Then for λ ∈ σ(A) = σ_p(A), the multiplicity of λ is equal to the multiplicity of the corresponding zero of G(λ). In what follows, we shall discuss the multiplicity and separability of the eigenvalues of A based on Corollary 1. We classify three cases according to whether the three vessels are the same or not. Theorem 3.6. Let H and A be defined by (10) and (11), respectively. If the three vessels of system (15) are the same, then all eigenvalues of A are simple and separable. 3.3.2. Two sub-branches are the same. We consider the case that A_20 = A_30 and β_2 = β_3. Thus b_2(λ) = b_3(λ), and the zeros of G(λ) are given accordingly. Clearly, if e^{2b_2(λ)} = −1, the zeros of G(λ) are separable and simple (at most finitely many are not simple). We consider the second case. We have an asymptotic expression for sufficiently large |λ|. When λ ∈ C satisfies |Re λ| ≤ r, cosh(ℓ_i λ + δ_i) and sinh(ℓ_i λ + δ_i) are bounded functions, so the asymptotic zeros of G(λ) are obtained. To show the separability of the eigenvalues, we consider the function equations; solving the equations yields the form of λ ∈ C, and comparing the imaginary parts of λ leads to two conditions. Therefore, the function equations have a solution λ only when the above two equations are satisfied simultaneously; otherwise, the function equations have no solution. Thus, the zeros are simple and separable. Theorem 3.7. Let H and A be defined by (10) and (11), respectively. If the two sub-branch vessels of system (15) are the same but (29) does not hold, then all eigenvalues of A are separable and at most finitely many eigenvalues are not simple. Obviously, the zero set of Γ(λ) is denoted σ_Γ, and λ ∈ C\σ_Γ makes F(λ) = 0 if and only if G(λ) = 0. We rewrite F(λ) accordingly. For any η ∈ C, we consider the function equations. Obviously, for some η ∈ C, λ ∈ C\σ_Γ is a solution of the above equations if and only if F(λ) = 0, and hence λ makes G(λ) = 0. We assume that η ∈ C makes the function equations have at least one solution. For such an η, let λ be a corresponding solution; then the stated relations hold. Theorem 3.8. Let H and A be defined by (10) and (11), respectively, and let the three vessels of system (15) be different. If every λ ∈ C with G(λ) = 0 fails to satisfy (31), then all eigenvalues of A are simple and separable. 4. Completeness and Riesz basis property of (generalized) eigenvectors of A.
In this section, we shall discuss the completeness and Riesz basis property of the (generalized) eigenvectors of A. Firstly, we establish the completeness of the (generalized) eigenvectors of A. Lemma 4.1. Let H be a Hilbert space. Suppose that A_0 is a skew-adjoint operator in H with compact resolvent, that there exist two positive constants M and β such that its resolvent satisfies ||R(λ, A_0)|| ≤ M e^{β|λ|} for all λ satisfying dist(λ, σ(A_0)) ≥ δ, and that A_1 is a bounded linear operator on H. Then the root vectors of the operator A = A_0 + A_1 are complete in H. Proof. Denote the spectral subspace of A by Sp(A), where E(λ_k, A) is the Riesz projector corresponding to λ_k. We shall prove Sp(A) = H. Let u_0 ∈ H be such that u_0 ⊥ Sp(A). Then R*(λ, A)u_0 is an H-valued entire function on C. For any u ∈ H, we define a scalar function F(λ) on C. Based on the above properties, the Phragmén-Lindelöf Theorem (see [23]) asserts that F(λ) is a bounded function on the complex plane C, and then Liouville's Theorem says that F(λ) ≡ 0. So R*(λ, A)u_0 ≡ 0, which implies u_0 = 0. Hence Sp(A) = H. This completes the proof. As a direct consequence of Lemma 4.1, we have the following result. In what follows, we shall discuss the Riesz basis property of the eigenvectors and generalized eigenvectors of A. At first we introduce the concept of generalized divided differences of exponentials. If µ_1 = µ_2, the first-order generalized divided difference is defined by the corresponding limit; in general, the k-th order generalized divided difference is defined recursively. The following result relates the subspace Riesz basis of the eigenvectors to the family of exponentials (see, e.g., [26,27,29]): E(λ_k, A) is the Riesz projector associated with λ_k; (3) there is a constant α such that the stated bound holds. Remark 2. In Theorem 4.5, we do not use the exact multiplicity of the eigenvalues of A. Set σ(A) = {λ_n, n ∈ Z^+}. Under certain conditions, the eigenvalues of A are simple and separable. In this case, the exponentials {e^{λ_n t}, n ∈ Z^+} form a Riesz basis sequence in L^2(0, T) for some sufficiently large T, and there is a sequence of eigenvectors of A that is a Riesz basis for H. These facts will be used in the next section. 5. The exact observability and the exact controllability. In this section, we shall discuss the exact observability and the exact controllability of the system (15) in finite time. We first recall the concepts of exact observability and exact controllability. Obviously, the mild solution of (33) with initial state x_0 makes sense. The admissibility of the operator C for T(t) means that the operator Ψ can be extended to a bounded linear operator from H to L^2_loc(R^+, Y). The exact observability of the system means that the operator Ψ_τ is injective and has closed range in L^2((0, τ), Y). Hence we have an observability estimate, which means that we can recover the initial state x_0 of the system from the observation Y(t), t ∈ (0, τ), and hence the whole state X(t). Therefore, exact observability in finite time is an important property of the system. The system (35) is said to be exactly controllable in finite time τ if for any x_0, x_1 ∈ H there exists a control function u(t) ∈ L^2_loc(R^+, U) such that the solution steered from x_0 reaches x_1 at time τ. Note that the admissibility of the operator B for T(t) means that, for each u(·) ∈ L^2_loc(R^+, U), the operator Φ is a bounded linear operator from L^2_loc(R^+, U) to H. Thus the mild solution of (35) is well defined. The exact controllability of the system means that, for any given final state x_1, we can find a control u(t) such that the mild solution X(t, u) of the system arrives at x_1 at the finite time τ.
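For reference, the standard formulation of the mild solution and of exact controllability, which the discussion above presumably follows, reads
\[
X(t) \;=\; T(t)x_{0} + \int_{0}^{t} T(t-s)\,B\,u(s)\,ds ,
\]
and exact controllability in time \(\tau\) requires that \(X(\tau) = x_{1}\) for some control \(u \in L^{2}((0,\tau),\mathcal U)\); this is a hedged sketch of the usual definition rather than the authors' exact display.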
So exact controllability is also an important property of the system. Remark 3. (see [19, Chapter 11, p. 365]) Usually it is difficult to verify (36). Note that (36) implies that the range of the operator Φ_τ is the whole space H. By duality theory, we can transfer the controllability into the observability of the dual system. Then the system (35) is exactly controllable in finite time τ if and only if its dual system is exactly observable in finite time τ, or equivalently, there exists a positive constant m_τ such that the corresponding observability inequality for the dual system holds. 5.1. The exact observability of system (15). In this subsection, we shall discuss the exact observability of system (15) in finite time. Without loss of generality, we assume that all eigenvalues of A are separable and simple (the argument also applies when at most finitely many eigenvalues are not simple). As pointed out in Remark 2, for sufficiently large τ, the family of exponentials {e^{λ_n t}, n ∈ Z} forms a Riesz basis sequence in L^2(0, τ), and there exists a sequence of eigenvectors of A, say {Φ_n(x, λ_n), n ∈ Z}, which forms a Riesz basis for H. The Riesz basis sequence property of {e^{λ_n t}, n ∈ Z} in L^2(0, τ) for sufficiently large τ implies that there exist two positive constants m_τ and M_τ such that the corresponding two-sided estimate holds. The Riesz basis property of the eigenvectors {Φ_n(x, λ_n), n ∈ Z} of A means that, for each F = (f, g) ∈ H, F = Σ_{n∈Z} a_n Φ_n(x) with a_n = (F, Ψ_n(λ_n)), where the sequence {Ψ_n(x, λ_n), n ∈ Z} is bi-orthogonal to {Φ_n(x, λ_n), n ∈ Z}, and there exist constants m_1 and m_2 such that the corresponding norm equivalence holds. Now let A be defined by (11) and let T(t) be the semigroup generated by A. Let C be the observation operator defined in Section 2. Then T(t) has the analytic expression T(t)F = Σ_{n∈Z} a_n e^{λ_n t} Φ_n(x) for all F ∈ H. In particular, for F ∈ D(A), the observation Y(t) satisfies Y(t) = CT(t)F = Σ_{n∈Z} a_n e^{λ_n t} CΦ_n(x). 5.2. The exact controllability in finite time. In this subsection, we shall discuss the exact controllability of system (15) in finite time. Thanks to duality, we only need to discuss the exact observability of the dual system (37). We complete the proof in the following three steps. Step 1. Calculate the conjugate operator B*. Firstly we notice that λ_1 and λ_2 are resolvent points of the operators A and A*, respectively; then B* : H*_{−1} → U* = C^3 is given by the corresponding expression. Step 2. Calculate the eigenvectors of A*. Recalling the results on the simplicity and separability of the eigenvalues of the system operator A in Theorem 3.6, Theorem 3.7 and Theorem 3.8, we obtain the following result. Theorem 5.3. The system (15) is exactly controllable and exactly observable in finite time in the following three cases: (1) the three vessels are the same; (2) the two sub-branch vessels are the same but (29) does not hold; (3) the three vessels are different and no λ ∈ C with G(λ) = 0 satisfies (31). 6. Conclusion. In the previous sections, we have carried out a complete system analysis for the linearized blood flow network described by (8) and (9). We have proved that the system is L^2 well-posed, exactly observable and exactly controllable in finite time. Note that this system is the simplest model of a blood flow network, and we can extend this method to more complicated ones. The key point in the proof of observability is to show the Riesz basis sequence property of the family of exponentials generated by the eigenvalues of the corresponding system. The main difficulty we encountered is the proof of separability of the eigenvalues of the system operator.
In the future, we shall study the observability and the controllability of more complicated blood flow networks. Appendix A: The proof of Theorem 2.5.
6,532.4
2014-09-01T00:00:00.000
[ "Engineering", "Medicine", "Mathematics" ]
Use of Pearson’s Chi-Square for Testing Equality of Percentile Profiles across Multiple Populations In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within population distributions by testing the hypothesis that their corresponding respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as Chi-square with degrees of freedom dependent upon the number of percentiles tested and the constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions. Introduction Student's t-test and analysis of variance are frequently employed to test the hypothesis that two or more distributions have common means. However, many random variables may have skewed distributions that are not readily transformed to symmetry, rendering the distributional assumptions that underlie the use of these methods inappropriate. Nonparametric procedures such as the Wilcoxon, Kolmogorov-Smirnov and median tests are desirable alternatives to test for differences in distributions [1]. Unfortunately, many nonparametric tests are "global" tests of equivalence, that is, tests of whether the distributions are identical over the entire domain. For example, if the t-test is used to compare two distributions that are symmetrical and similarly shaped except for possible shifts in location, it may outperform the median test. However, if the sample sizes are small and the distributions are highly skewed, the median test may be preferred. These tests are not designed to pin-point where the distributions are unequal or to simultaneously test for differences in more than one distribution parameter. Similarly, under appropriate assumptions the t-test and variance test are especially powerful for detecting differences in location and scale, respectively, but may be considered too narrow in scope as they each test only one parameter. In practice, it is not uncommon to compare two distributions with varying degrees of skewness, location shifts, and possibly even mixtures of distributions. In these circumstances it may be of greater interest to compare the distributions in terms of their percentiles rather than their means or an overall test of equivalence. For example, it may be more informative to compare two or more distributions by testing the hypothesis that their profiles of judiciously selected percentiles are equal, where a percentile profile is defined as a set of one or more percentiles. The procedure investigated here, first described by [2], can be thought of as a generalization of the median test. Instead of testing the equality of only one percentile, the 50th, the method is extended to simultaneously test multiple percentiles. In this way, it is possible to test whether two or more sets (profiles) of desired percentiles are jointly identical across multiple populations.
As an application of Pearson's chi-square test, this approach has excellent large sample properties. We give an example of the procedure to compare several populations with respect to their percentile profiles using body mass index (BMI) data from the National Health and Nutrition Examination Survey (NHANES). We begin in Section 2 with a general formulation of hypotheses for comparing percentile profiles coupled with a testing strategy that is a novel generalization of that employed in the median test. Empirical power simulation results are shown in Section 3 to illustrate the test's large sample properties under selected conditions with irregularly shaped distributions. An illustrative example applied to the NHANES data is presented in Section 4. Some concluding remarks on the test and planned future work are given in Section 5. Formulation of Hypothesis and Test Procedure Let Y denote a continuous random variable of interest and let Q_1, Q_2, ···, Q_p denote a set of p percentiles (quantiles) that in some sense characterize the distribution of the random variable across its range. Further, let y_1, y_2, ···, y_n represent a random sample of observations and let q_1, q_2, ···, q_p represent, respectively, the usual sample estimates of Q_1, Q_2, ···, Q_p. Suppose random samples are available from each of K populations with percentiles Q_h = Q_h1, Q_h2, ···, Q_hp, h = 1, 2, ···, K, where there is interest in testing the hypothesis that the percentile profiles are identical across the K populations; that is, interest is in testing: The following approach is an extension of the median test: 1. Combine the K samples and obtain the usual estimates of the population percentiles for the corresponding combined populations. 2. Let y denote an arbitrary observation in the combined sample. Use the combined sample percentile estimates to define p + 1 categories or bins denoted bin_1, bin_2, ···, bin_{p+1}, where bin_1 = min(y) ≤ all y ≤ q_1, bin_2 = q_1 < all y ≤ q_2, ···, bin_{p+1} = q_p < all y ≤ max(y), where min and max, respectively, represent the minimum and maximum observations in the sample. Power Simulations The asymptotic properties of the proposed percentile test were investigated. For example, Figure 1 shows the convergence of the distribution of the test statistic to a true chi-squared distribution with nine degrees of freedom when comparing the percentile profiles (1st, 5th, 10th, 25th, 50th, 75th, 90th, 95th, and 99th) of two populations with sample sizes n and m from the Gamma (shape = 2, scale = 3) distribution for simulated samples of various sizes (n = m = 25, 50, 100, 200). There is good agreement between the empirical and true chi-squared distributions at around sample size 100, becoming nearly indistinguishable by samples of size 200. Simulations show that increasing the number of percentiles increases the sample size required for convergence to the true chi-squared distribution. The example in Figure 1 could be considered a relatively extreme case in that it simultaneously tests nine percentiles, many of which are at the extreme tails of the distribution. Profiles with fewer percentiles (and closer to the center of the distribution) converge satisfactorily with smaller sample sizes. Power simulations were conducted for various scenarios where the data were generated from the following families of distributions: (1) gamma, (2) mixtures of gammas, and (3) uniform. Symmetric distributions were not considered as better procedures exist for their comparison.
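As an illustration of the procedure described above, the following is a minimal Python sketch under stated assumptions: the data are simulated rather than real, scipy.stats.chi2_contingency supplies the Pearson statistic (so the degrees of freedom are (K − 1)p by default, which may differ from the constrained version discussed in the paper), and the function name percentile_profile_test is introduced here for illustration only.

    import numpy as np
    from scipy.stats import chi2_contingency

    def percentile_profile_test(samples, profile):
        """samples: list of K 1-D arrays; profile: percentiles in (0, 1)."""
        combined = np.concatenate(samples)
        cutoffs = np.quantile(combined, profile)      # combined-sample estimates q_1..q_p
        p = len(profile)
        # bin 0: y <= q_1, bin j: q_j < y <= q_{j+1}, bin p: y > q_p
        table = np.array([np.bincount(np.searchsorted(cutoffs, s, side='left'),
                                      minlength=p + 1) for s in samples])
        stat, p_value, dof, _ = chi2_contingency(table)
        return stat, p_value, dof, table

    rng = np.random.default_rng(1)
    groups = [rng.gamma(shape=2.0, scale=3.0, size=500),
              rng.gamma(shape=2.2, scale=3.2, size=500)]
    stat, p_val, dof, table = percentile_profile_test(groups, [0.25, 0.50, 0.75])
    print(f"chi-square = {stat:.2f}, df = {dof}, p = {p_val:.4f}")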
The motivation for this procedure is to compare the profiles of asymmetric distributions that are common in biostatistics applications; gamma distributions are a natural choice to simulate skewed distributions due to their flexibility in simulating data from irregularly shaped distributions with wide-ranging shift options. The properties of the test when using a single percentile were also investigated. The percentile test is applicable to a wide variety of distributions, but is especially useful when comparing skewed and/or multimodal data and detecting differences in ranges between uniform distributions. All power estimates are based on 100,000 replicate samples and all procedures were programmed and carried out with R 3.1.2. Gamma Distributions Results of simulation studies to examine properties of the percentile test for comparing gamma distributions are presented in Table 1 and Table 2. The empirical alpha estimates under H_0 can be found in Table 1 for testing random samples from equivalent gamma distributions, while the power estimates under H_1 for testing random samples from two unequal gamma distributions are in Table 2. For estimating power, data were generated for the two populations from gamma distributions differing in both scale and shape parameters (see Table 2 for details). Several percentile profiles were tested for each scenario: P_1, P_3, P_5, P_7, and P_9, referring to profiles of one, three, five, seven, and nine percentiles, respectively. The empirical 95th percentile of the 100,000 replicate samples (the empirical alpha) converges to the true 95th percentile of the respective chi-squared distribution for each profile tested as the sample size increases. Not surprisingly, the empirical alpha does not match the true chi-squared 95th percentile for small sample sizes but is very close with sample sizes as small as 100 for some profiles. The test for the P_1 profile, which is equivalent to the median test, approaches 0.05 from above, while profiles with more than one percentile approach 0.05 from below, with longer profiles converging more slowly and starting closer to zero. This is due to the increase in the number of bins in the contingency table and hence the degrees of freedom in the chi-square test; and, as will be shown later, smaller expected values in cells result in smaller chi-square values while holding row profiles equal. Table 2 shows the differences in the empirical power estimates for various percentile profiles, P_1 through P_9. For the first example (Gamma (shape = 2.2, scale = 3.2)), the basic median test, P_1, is the most powerful for all sample sizes. Because the test's (and the chi-squared test's) power is a function of the true difference of percentiles between the distributions and the sample size, the median test performs the best (see Figure 2, comparing distributions with a constant difference in percentiles). When testing profile P_9, for instance, the count in the final bin is just 5 for each group with balanced samples of size 500, which makes a limited contribution to the overall chi-squared with so many degrees of freedom. This is also the case in the second example (Gamma (shape = 2.4, scale = 3.4)) in Table 2. The median test again is the most powerful. However, one must keep in mind that these particular percentile profiles were more or less arbitrarily chosen and used throughout the paper for consistency. While these choices seem appropriate for symmetric distributions, other choices may be preferred for gamma as well as for other asymmetric distributions.
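As a complement to Table 2, the following sketch estimates empirical power in the same spirit, reusing the hypothetical percentile_profile_test() from the sketch above; the replicate count is reduced from the paper's 100,000, so the resulting numbers are only indicative and will not reproduce the tabulated values.

    import numpy as np
    # assumes percentile_profile_test() from the earlier sketch is in scope

    def empirical_power(n, profile, reps=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(reps):
            x = rng.gamma(shape=2.0, scale=3.0, size=n)   # reference population
            y = rng.gamma(shape=2.2, scale=3.2, size=n)   # shifted population
            _, p_value, _, _ = percentile_profile_test([x, y], profile)
            rejections += (p_value < alpha)
        return rejections / reps

    for prof, name in [([0.5], "P1"), ([0.25, 0.5, 0.75], "P3")]:
        print(name, empirical_power(n=200, profile=prof))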
Thus, in practical applications, the analyst would likely select percentiles that are appropriate for the specific data at hand. For the first example, if a profile of (0.5, 0.75, 0.9, 0.95) is tested, the power estimate increases to 1.000 for a sample size of 200, compared to 0.503 for P_1. Mixture of Gamma Distributions The convergence to 0.05 is nearly identical to the single gamma case for each combination of sample size and percentile profile tested. This result shows that the test statistic converges to chi-squared for a wide range of distributions (results were consistent for simulations with normal and uniform distributions, although they are not shown). The empirical power estimates for comparing mixtures of gammas are presented in Table 3. Similar to the previously described simulation study, the distributions used for generating data for the two populations differed in both shape and scale parameters. In this case, these parameters differed between the populations in both of the gammas making up the mixture distributions. As expected, the power increases as sample size increases, with the power greater than 0.9 at around sample sizes of 500. In these mixtures of gammas examples, the median test (P_1) generally performs the worst of all profiles, unlike in the example using the single gamma. For sample sizes greater than or equal to 100, P_1 is the least powerful of all the profiles tested, and it is only better than P_9 for sample size 50. For sample size 25, P_1 is the most powerful due to the properties of the chi-square test, i.e. insufficient observations in the contingency table for profiles with more than one percentile. Uniform Distributions Simulation-based empirical power estimates for comparing uniform distributions are shown in Table 4, and a comparison with estimated power from other procedures is presented in Table 5. Simulations (not shown) confirmed the asymptotic behavior of testing uniform distributions under H_0. The empirical alpha for each sample size/profile combination was equivalent (within 0.01) for uniform distributions as for gammas and mixtures of gammas. We considered two scenarios: (1) a shift in the range of the distribution from uniform (0, 1) to (0.1, 1.1) and (2) a reduction in the range from uniform (0, 1) to (0.1, 0.9). Table 4 shows the results of testing the percentile profiles between sample data from the uniform (0, 1) and these two modified uniform distributions. The percentile profiles P_1, P_3, P_5, P_7, and P_9 are the same as those previously used for testing the gamma distributions. The percentile test (P) estimates in Table 5 use percentiles specifically chosen for testing uniform distributions. The percentiles chosen in the simulations in Table 5 are based on the properties of the uniform distribution. When comparing uniform distributions, the differences can be detected at the extreme percentiles, near 0 and 1; the middle part of either distribution is unnecessary. For instance, if the lower boundaries of the uniform distributions are unequal, a percentile near 0 will detect the difference (there is always a percentile, as a function of the sample size, that will create a perfect separation of observations into the first bin). Similarly, if the upper boundaries are unequal, there always exists a percentile near 1 that perfectly separates the observations. This results in a large chi-squared statistic and the rejection of the null hypothesis.
If sample sizes are balanced for each group, a good choice of percentiles for comparing uniform distributions is (1/n + δ, 1 − (1/n + δ)), where n is the sample size of one group and δ is a small added value (add a 1 in the furthest decimal place, i.e. 0.03 would become 0.031 and 0.005 would become 0.0051, etc.). In the example, for samples of size 100, the optimal percentiles would be (0.011, 0.989). We will refer to the sample size dependent percentiles for uniform distributions as uniform optimal percentiles (UOPs). The percentile test is extremely powerful in detecting differences in the range of uniform distributions for both scenarios: (1) a shift (with equal range) and (2) a change in range but with equal average value. The profiles P_1, P_3, P_5, P_7, and P_9 displayed a range of performance, with P_7 having the highest power for both scenarios. Although the P_7 profile performs well, the power is greatly improved when UOPs are used. When these sample size dependent percentiles are used, power greater than 0.8 is achieved with samples of less than 50 in testing scenarios (1) and (2). The performance is substantially better than both the Wilcoxon and Kolmogorov-Smirnov tests, particularly under scenario (2), a change in range but with equal expected value. For example, the power of the percentile test with UOPs at a sample size of 50 is 0.868, compared to 0.051 and 0.072 for the Wilcoxon and Kolmogorov-Smirnov tests, respectively. Testing Profiles of 100 Percentiles Additional simulations were conducted to examine the behavior of tests that compare profiles each composed of 100 percentiles. Since the power of the percentile profile test is a function of sample size and the true difference in the distributions with respect to their percentile profiles, normal distributions were used to eliminate one of these variables (the differences in all percentiles are equal to the difference in the location parameter, assuming the scale parameter is constant). As can be seen in the plots in Figure 2, the median test (0.5 percentile) is the most powerful single-percentile test for each sample size, even though the differences in percentiles between distributions are the same. This is likely due to the nature of the chi-squared test: as the differences between observed and expected values increase linearly, the chi-squared value increases quadratically. This relationship also holds for gamma distributions, for which the difference between percentiles is not constant. When comparing gamma (shape = 2, scale = 3) and gamma (shape = 2.2, scale = 3.2), the difference in the true percentiles increases as the percentile increases from zero to one. For small sample sizes, the power fluctuates greatly as the percentile changes and exhibits a pronounced "saw tooth" behavior. However, these fluctuations gradually disappear as the sample size increases. As the sample size increases, the power increases when the percentile is held constant. The same is true of the difference in the distributions' percentiles. The power is generally symmetric with a maximum at percentile 0.5 for all cases, even if the differences in the percentiles between the distributions are not constant. The relationship between the true difference in distributions, sample size and percentile is quite complex and needs to be investigated further to be fully understood. Understanding these relationships will likely improve the effectiveness of this procedure.
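A small sketch of the UOP rule follows; the default δ = 1/(10n) is an assumption that reproduces the (0.011, 0.989) example for n = 100, but it is only an approximation of the paper's "add a 1 in the furthest decimal place" rule for other sample sizes.

    def uniform_optimal_percentiles(n, delta=None):
        """Uniform optimal percentiles (UOPs) for balanced groups of size n.
        delta defaults to 1/(10*n); this reproduces (0.011, 0.989) for n = 100,
        but only approximates the literal 'add a 1 in the furthest decimal
        place' rule for other sample sizes."""
        if delta is None:
            delta = 1.0 / (10 * n)
        low = 1.0 / n + delta
        return (low, 1.0 - low)

    print(uniform_optimal_percentiles(100))   # (0.011, 0.989)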
Illustrative Example Body mass index (BMI) data from the 2011-2012 NHANES study were used as an example of an application of the percentile test. For illustrative purposes, only non-Hispanic black and white adults between the ages of 20 and 79 are included in the analysis. Suppose there was interest in testing the homogeneity of BMI percentile profiles for independent race-sex groups: black females, black males, white females, and white males. Observed discrepancies among the four BMI distributions are shown in Figure 3. To test for homogeneity of the profiles, one could follow the steps outlined in Section 2. Consider the sample obtained by combining the four race-sex groups, and its 1 st , 5 th , 10 th , 25 th , 50 th , 75 th , 90 th , and 99 th BMI percentiles, shown in Table 6. To illustrate, consider the two percentile sets (0.25, 0.5, 0.75) and (0.1, 0.25, 0.5, 0.75, 0.9) where interest is in testing homogeneity of each of the corresponding percentile profiles. The corresponding percentile profiles obtained from Table 6, (24.5, 28.5, 33.5) and (22.0, 24.5, 28.5, 33.5, 39.2), were used as sets of cutoff values to construct the contingency tables in Table 7 and Table 8, respectively. Applying the chi-square test using the percentile set (0.25, 0.5, 0.75) in Table 7 results in a highly significant difference between the percentile profiles (p < 0.0001). Similarly, the profile (0.1, 0.25, 0.5, 0.75, 0.9) in Table 8 is also highly significantly different (p < 0.0001). To further test differences between the group profiles, within gender pairwise comparisons between black males and white males as well as black females and white females were performed. No significant difference was found between white males and black males for either set of percentiles (p = 0.192 and p = 0.298 for (0.25, 0.5, 0.75) and (0.1, 0.25, 0.5, 0.75, 0.9), respectively). However, black females differed significantly from white females in both sets of percentile profiles (p < 0.0001 for both sets). Concluding Remarks Percentile profiles provide easy to interpret characterizations of data distributions and are frequently used as descriptive statistics to capture distributional variations other than shifts in central location. Although the median test is well known, methods of conducting simultaneous inferences about percentiles within a specified profile have not been well described. The approach used in this manuscript is based on well-known fundamental principles that are easy to understand and implement. One clear advantage of this procedure over other tests is the ability to directly compare a number of percentiles between distributions rather than overall tests of equality or changes in location or shape. The procedure is extremely powerful in detecting differences between uniform distributions. When percentiles are optimally chosen, the power of the percentile test outperforms other procedures and is well powered at relatively small sample sizes. Further work will be done to investigate the properties of the test in comparing uniform distributions. A limitation is that the test relies on large sample theory and further study is needed to evaluate the severity of this restriction. It is important to remember that there are more powerful tests for comparing overall equality of distributions (Wilcoxon, KS test) or differences in specific parameters (t-test, F-test), but none that test equality of a set of multiple percentiles between distributions. 
Rules for choosing percentiles to maximize power may be a useful area of research.
Figure 3. Kernel density estimates of BMI for adult black females, black males, white females, and white males between ages 20 and 79.
Table 1. Empirical alpha estimates for comparing Gamma distributions.
Table 2. Empirical power estimates when testing against Gamma (shape = 2, scale = 3).
Table 3. Empirical power estimates when testing against 1/2 Gamma (shape = 1.5, scale = 2.5) & 1/2 Gamma (shape = 4.5, scale = 4.5).
Table 4. Empirical power estimates when testing against Uniform (0, 1).
Table 5. Empirical power estimates when testing against Uniform (0, 1) with the uniform percentile rule (P), the Wilcoxon test and the Kolmogorov-Smirnov (KS) test.
Table 6. Percentiles for black females, black males, white females, and white males combined.
4,914
2015-07-23T00:00:00.000
[ "Mathematics" ]
Factors Influencing the Choice of Sophisticated Management Accounting Practices-Exploratory Evidence from An Emerging Market This study attempts to explore the factors that may drive the choice of sophisticated management accounting practices (SMAP) in an emerging economy, Bangladesh. A semi-structured questionnaire has been developed to capture the market data, and different descriptive and inferential statistical tools have been used to test the relevant hypotheses. The findings of the study are helpful for management accounting practitioners, academics, and researchers in understanding the current state of management accounting practices in an emerging market. In addition, the study extends the existing literature by exploring a potential causal relationship between sophistication in applying management accounting tools and the satisfaction of management accounting practitioners. This study confirms that there is a missing link between practitioners' satisfaction and SMAP. It signals to the market that the critical decision-making process is not supported by tactical exercises and that the market greatly lacks professionalism, which may act as an obstacle to developing a competitive business environment. I. Introduction From its very inception, management accounting, as an offspring of accounting, has been serving the decision-making needs of internal management. Along the way, management accounting has changed repeatedly in response to the changing requirements of decision makers, and thus management accounting practices do not take a particular shape. Management accounting practices combine a variety of methods, especially suited to manufacturing businesses, to support the organization's infrastructure and management accounting processes (Ittner and Larcker, 2002). Companies nowadays operate in a highly competitive business environment driven by largely unknown challenges and changes. Deregulation encouraging private investment through privatization, borderless competition through highly decentralized corporate structures, shortening product life cycles due to rapid changes in customers' tastes and requirements, serious dependency on cost-effective information and production technologies, and the impact of disruptive technologies such as data analytics, business analytics, blockchain, artificial and business intelligence, robotics, and machine learning have pushed firms to implement sophisticated management accounting systems that can accommodate any level of difficulty while generating accurate data at the desired level. Owing to this revision in perception, which is reflected in practice, the job profile of management accountants no longer follows the traditional bean counter model (Rieg, 2018); rather, it has been replaced with a business partner model, taking a more active role in the decision-making processes of organizations (Jørgensen and Messner, 2010). Research published in leading business journals testifies to innovations in and applications of sophisticated management accounting tools supporting this revised role of management accountants. There was a significant downturn in research and in the eventual diffusion of management accounting practices after 1925 (Kaplan, 1984). Johnson and Kaplan (1987) held management accounting practitioners and researchers responsible for their unexplained reluctance to bring sophisticated tools into the field of management accounting.
These criticisms acted as a strong motivator, reflected in the development of innovative tools in the field of management accounting across a range of industries within the next few decades. From the late 1980s, different sophisticated tools have been developed in the field and widely diffused, giving management accounting practitioners a revised role, from a simple cost focus to a forward-looking perspective (Fullerton and McWatters, 2002; Haldma and Laats, 2002). In the meantime, the International Federation of Accountants (IFAC, 1998) offered a discussion explaining the evolution of management accounting in a four-stage framework. The initial focus of management accountants was limited to cost determination, whereby their role was simply clerical; gradually the focus moved to the creation of value for customers, leaving the ultimate goal of management accounting as strategic. Ittner and Larcker (2001) rightly supported this by arguing that "companies increasingly are integrating various [innovative] practices using a comprehensive 'value-based management' … framework". Innovation and the subsequent diffusion of innovative tools in practice require a supportive environment characterized by certain incentives. To explore the motivating factors behind the choice of sophisticated tools surrounding management accounting practices, researchers should be guided by the contingency approach (Langfield-Smith, 1997; Chenhall, 2003). This approach assumes that there is no universally accepted modality of practices and that every action depends on some internal and external contingent factors. This study has taken a positivistic approach in exploring the factors that may affect the choice of SMAP in an emerging economy, Bangladesh. The researchers are motivated to carry out such research by the postulate of Ittner and Larcker (2002) that "it is difficult to imagine how research in an applied discipline such as management accounting could evolve without the benefit of detailed examination of actual practice". To explore the benefit of actual practice of management accounting, this study addresses the satisfaction of management accountants, the level of sophistication achieved in management accounting practices, and the possible influence of practitioners' satisfaction on sophistication. The exploration is based on a semi-structured questionnaire designed on a Likert scale; practicing management accountants are the targeted respondents in this study. This study puts particular focus on identifying the different contingent factors affecting the choice of management accounting tools leading to sophistication, and also searches for any relationship between the sophistication of management accounting practices and the level of satisfaction of management accountants. Management accountants' contribution to establishing SMAP aimed at achieving broader corporate goals should not be undervalued. Thus, identifying this relationship becomes a policy issue in management accounting research and is surely a value addition to the current state of knowledge. An earnest effort is made here to develop management accounting as a separate field in an emerging economy that is gradually becoming industry-led. Practicing management accounting embedded within financial accounting, which is considered the mainstream in many countries, may not give the country a competitive edge.
Based on the study, the researchers are convinced that the practicing field has matured enough to accommodate the innovative tools developed so far and that management accountants' satisfaction is very important for achieving sophistication. II. Literature Review and Hypothesis Development The term sophistication refers to the application of advanced tools used to produce accurate information for the internal management of corporate affairs. A handful of studies have focused narrowly on the level of sophistication achieved by firms, using Activity Based Costing (ABC) as a proxy for sophistication. A study (Bjornenak, 1997) on the adoption status of ABC used cost structure, product diversity, existing costing system and competition as prime variables, where 30 companies were classified as ABC adopters and another 23 companies into the non-adopter category. Another study (Booth and Giacobbe, 1998), done on 207 Australian manufacturing firms, identified size, cost structure, competition, and product diversity as factors that influence the decision to adopt ABC. In a separate study done on 204 Irish manufacturing firms, Clarke et al. (1999) classified the respondents into those implementing ABC (11.76%), assessing ABC (20.59%), having rejected ABC (12.75%) and having not considered ABC (54.90%), which shows that more than 50% of the firms had never considered ABC. Malmi (1999) conducted separate surveys for different industries covering a total of 490 organizations, and the study found an adoption rate of 21%, with 104 companies out of 490 classified as ABC adopters. The study examined a few potential organizational determinants of ABC adoption, namely size, competition faced, product diversity, cost structure, production type, and strategy. Using logistic regression, Gosselin (1997) conducted another study based on responses collected from 161 Canadian manufacturing companies to examine the effect of organizational structure and strategic posture on the adoption of activity management approaches. Using data collected from five research sites in Australia, Abernathy et al. (2001) conducted a study different from the others: rather than classifying cost systems as traditional or ABC, they tried to classify cost systems with reference to their level of sophistication. The present study, however, aims to explore the causal relationship between the level of sophistication achieved by firms in Bangladesh and the different factors driving firms to achieve sophistication. The following factors have been identified to study their influence on the choice of SMAP: a) Cost composition b) Competition c) Product diversity d) Size e) Use of information technology f) Decision-making usefulness g) Maturity h) Satisfaction Cost Composition One of the important reasons for moving towards a sophisticated system is to ensure accuracy in product costing in firms where the cost composition is critical, meaning that overhead cost as a percentage of total production cost is significant. Both traditional and sophisticated product costing systems assign direct costs by directly tracing them to cost objects. However, a simplistic costing system fails to assign indirect costs to cost objects accurately, generating different levels of distortion in product costing. One of the prime objectives of sophistication in product costing is to conceptually translate indirect costs into direct costs so that the assignment of indirect costs to the ultimate cost object becomes meaningful.
Johnson and Kaplan (1987) rightly mentioned that the modification of costing systems is caused by the dramatic change in cost structures over several decades. In recent years, unsophisticated systems based on direct labor hours have fallen under serious criticism for reporting distorted product cost data due to the trend of increasing overhead cost as a percentage of total costs (Cooper, 1988). Brierley et al. (2001) also confirmed a potential change in cost structure based on surveys conducted in firms from Europe and the United States of America (USA). They concluded that direct material cost is comparatively higher than indirect costs; however, direct labor cost comprises a very small fraction of total costs. The choice of a sophisticated system is necessarily guided by the composition of the cost structure. The significance of overhead costs in the overall cost structure is an important parameter behind the selection of sophisticated methods for allocating indirect costs to cost objects (Brierley et al., 2001). As traditional systems are responsible for distorting product cost through wrongly allocating indirect costs to cost objects, Cooper and Kaplan (1992) have suggested that organizations with high indirect costs use ABC systems; for organizations with a low indirect cost composition, however, they have supported the use of traditional unsophisticated systems. Thus, the literature supports implementing sophisticated systems to ensure accuracy in product costing for those firms showing a significant percentage of indirect costs in their total cost structure. Based on the above literature, it is confirmed that the design of a sophisticated costing system depends on the significance of indirect costs in the respective cost structures of firms. Based on the discussion above, the following hypothesis has been formed for investigation: H1: There is a positive and significant relationship between the percentage of indirect costs and the level of sophistication of the costing system. Competition Competition is the most important external factor stimulating managers to begin to work on a new cost system (Bruns, 1987). Companies operating in competitive environments should be encouraged to change their control systems, because proper costing systems and appropriate performance monitoring are fundamental to survival (Libby and Waterhouse, 1996). Market competition generates turbulence, stress, risk, and uncertainty for enterprises, so that they continuously adjust their control systems in response to the threats and opportunities in the competitive environment (Mia and Clarke, 1999). The choice of employing relatively sophisticated cost and management accounting systems is driven by the intensity of competition faced by companies in a particular market environment. Mia and Clarke (1999) tested the relationship between the intensity of market competition and the use of information by managers. They concluded that an increase in the intensity of competition in the market is associated with increased managerial use of management accounting information. Libby and Waterhouse (1996) also found a positive relationship between the intensity of competition and the design and use of management accounting systems.
Similarly, Al-Omiri and Drury (2007) found a positive association between the intensity of competition and the sophistication level of the cost system. Researchers have also argued that firms operating in a more competitive environment are under greater pressure to assign costs accurately to products, services and customers than firms operating in a less competitive environment. This pressure leads them to install more sophisticated cost systems; otherwise, there is a high chance that competitors will take advantage of any decision-making errors caused by inaccurate cost information generated through traditional systems. In line with the discussion above, this study has made the following hypothesis for investigation: H2: There is a positive and significant relationship between competition and the level of the sophistication of the costing system. Product Diversity Product diversity covers the different variations in the offerings of a firm, which may include support, process and volume diversity. Support diversity captures the pattern of services that each product receives from different service-rendering units in the organization. Process diversity identifies all the required processes relating to the design, manufacture and distribution of products, with a particular focus on understanding the pattern of resources consumed by different processes. Volume diversity arises from manufacturing variations caused by differences in production volumes and batches. A sophisticated costing system is needed to neutralize the impact of all these diversities on costing. Researchers have argued that one of the important reasons why traditional costing systems report distorted product costs is product diversity (Cooper, 1988 and Estrin et al., 1994). Product diversity becomes a serious concern when the resources consumed by different products vary significantly. To address these wide variations in resource consumption by different products, the application of more sophisticated costing systems is warranted. In the absence of sophisticated costing systems, significantly distorted product costs are likely to be reported, owing to the inability of simplistic costing systems to reflect the resource consumption patterns of different cost objects. It is commonly accepted and understood that the type of costing system used and the underlying production process are somehow related (Malmi, 1999). The choice of costing system should logically be guided by the complexity of the production process: the more complex the production process, the more a sophisticated costing system is needed to handle the extra difficulty. Product diversity sometimes serves as a proxy for the complexity of the production process. As products become more complex in terms of the production process, they require more activities to manufacture, and as a result they demand sophisticated cost accounting systems to measure the resource consumption of different products differently. The foregoing discussion concludes that a sophisticated system is important when firms have greater product diversity, and thus the following hypothesis is formed for investigation: H3: There is a positive and significant relationship between product diversity and the level of the sophistication of the costing system.
Size Organizational size, measured in sales, assets or number of employees, has been found to be positively related to the adoption of sophisticated management accounting systems. Research has shown that larger firms adopt more SMAP than smaller firms. In India, organizational size was found to be an important factor in adopting advanced management accounting practices (Joshi, 2001). In the UK, Al-Omiri and Drury (2007) also found a positive relationship between an organization's size and the level of cost system sophistication. A study by Albu and Albu (2012) likewise revealed that size is one of the most important factors for the adoption and use of management accounting techniques. Size is an important organizational context variable, and it can affect the way in which organizations design and use management systems. The size of the firm has been shown to affect the design and scope of management accounting practices (Abdel-Kader and Luther, 2008; Albu and Albu, 2012). Abdel-Kader and Luther (2008) found that large enterprises in the UK food and drink industry adopted more SMAP than small ones. A possible reason behind the choice of more SMAP by larger firms is that larger organizations have greater resources with which to afford sophisticated systems compared to their smaller counterparts (Haldma and Laats, 2002; Al-Omiri and Drury, 2007; and Abdel-Kader and Luther, 2008). To confirm a similar relationship, this study also assumes that size is an important factor behind the choice of a sophisticated system and forms the following hypothesis for investigation: H4: There is a positive and significant relationship between size and the level of the sophistication of the costing system. Use of Information Technology A management accounting system should be capable of providing critical information on demand to decision makers. This information is mostly intuitive in nature and highly situational, and is not known in advance. It is also very costly, given the decision-making usefulness that provides a competitive edge in the market. Thus, the information technology used to process it should be highly integrative, operate in real time, and allow query-based solutions. In this era of the fourth industrial revolution, technological innovation has changed the language of business data analysis. Management accounting systems have undergone significant changes due to the use of blockchain, machine learning, data analytics, business intelligence and other advanced technologies. Based on a field study covering management accounting and control systems in South Africa, researchers found that one of the main motivators of change in management accounting and control systems is change in technology, in particular information systems (Waweru et al., 2004). Szychta (2002) also agreed that technology is one of the driving forces behind the shift in the use of management accounting practices in Poland. A sophisticated system demands information technology that is intelligent enough to respond to the needs of the decision maker instantly. Dependence on information technology in a simplistic system environment is not as critical. Moreover, a simplistic system cannot provide the required information in real time; rather, it defers it to some future period, owing to low investment in information technology, skilled manpower and other infrastructure.
To understand the influence of information technology on the choice of a sophisticated system, this study investigated the following hypothesis: H5: There is a positive and significant relationship between use of information technology and the level of the sophistication of the costing system. Decision-Making Usefulness The business decision-making process is essentially supported by cost and other relevant information. The product costing system is the center of relevant cost data generation in most manufacturing firms. A flawed product costing system produces misleading data through under-costing or over-costing, whereby one group of customers ends up subsidizing another. The scope of the product costing system has expanded considerably in recent times to generate additional information with decision-making usefulness. Based on relevant information, firms may take on only profitable ventures and deal with unprofitable ventures differently. Cost information is important for taking different operational and strategic decisions, such as separating profitable and unprofitable activities, decisions on outsourcing and redesign, cost reduction initiatives, patterns of resource consumption and disparities among products, and product and service mix decisions. In summary, a drive to achieve accuracy in product costing provides a great deal of information to help firms make accurate decisions. Wrong decisions based on irrelevant and inaccurate data may not only invite litigation but also leave the firm unfit to survive and grow in the marketplace. This discussion supports the necessity of accurate cost information for product costing and pricing decisions, which necessitates a sophisticated system in operation. This study, thus, takes the following hypothesis for investigation: H6: There is a positive and significant relationship between decision making usefulness and the level of the sophistication of the costing system. Maturity A sophisticated costing system demands various firm-specific parameters which are usually related to the number of years the firm has been in operation. The liability of newness provides an important insight into the likelihood of survival: younger firms run a greater risk of ceasing to exist than their older counterparts, meaning that age is associated with the likelihood of survival (Hannan and Freeman, 1989). A sophisticated costing system supports the survival of firms by providing valuable information in a timely manner. IFAC (1998) has developed a framework explaining the evolution of management accounting practices, in which the level of sophistication is defined with reference to a particular timeline. Learning accrues from experience over the years, much as a learning curve operates. This learning about management can be converted into improved Management Control Systems (MCS) even when the company is not growing. Age may also be related to the emergence of management processes that MCS facilitate (Davila, 2005). This discussion acknowledges that the maturity of firms is a very important factor in designing and using a sophisticated system to remain relevant and competitive. To explore this reality, the following hypothesis is formed for investigation: H7: There is a positive and significant relationship between maturity and the level of the sophistication of the costing system.
Satisfaction The study will enrich the scope of management accounting research by bringing management accountants' satisfaction into the explanation of the sophistication of management accounting practices. The current literature is limited to studying the level of sophistication and identifying the contextual factors explaining it, without considering any impact of practitioners' satisfaction. The study plans to investigate the potential existence of a relationship between management accountants' satisfaction and the level of sophistication in management accounting practices. Establishing this relationship is perceived to be important for the empowerment of management accountants and the diffusion of sophisticated management accounting techniques through continuous change and innovation. The following hypothesis is taken for investigation: H8: There is a positive and significant relationship between management accountants' satisfaction and the level of the sophistication of the costing system. III. Research Methodology Based on the literature review and research objectives, a draft questionnaire was developed. Using a snowball sampling method, the draft questionnaire was pre-tested with a total of 27 respondents until the saturation point was reached. The draft questionnaire was then finalized with minor modifications based on the results of the pre-testing before the final study was commissioned. To bring more objectivity to the research methodology, the sample frame comprises manufacturing companies in which professional management accountants are working. This was established through scrutiny of the membership directory of the Institute of Cost and Management Accountants of Bangladesh (ICMAB) for the year 2017. Such scrutiny yielded 200 companies in which ICMAB members were working. The study does not consider any service industry or companies operating outside Dhaka, the capital city of Bangladesh. Of the 200 companies, management accountants from 47 companies expressed their reluctance to participate in the survey; the other 153 companies were considered as the sample for the study. However, questionnaires were not received from 28 companies, even though they were given reminders in time, and 12 of the received questionnaires were rejected due to missing data. Finally, a total of 113 questionnaires were used for the data analysis on which the research draws its major conclusions. Constructs This study uses different constructs to test the relevant hypotheses. These constructs are given below with definitions and related scales: Conceptual Framework Considering the major theme of the study, the constructs considered and the relationships studied, a conceptual framework of the study is presented below: Measurements In this study, cost composition covers the significance of indirect costs, as a percentage of total costs, for the respondent's firm. To understand the competition faced by the responding firm, three questions were asked covering the product, pricing and marketing areas. Respondents were asked to choose values on a 7-point Likert scale where 1 refers to 'extremely disagree' and 7 refers to 'strongly agree'. In empirical studies, the perceived intensity of competition has been measured differently in different studies.
Several studies have been based on Khandwalla's (1977) model (Libby and Waterhouse, 1996; Williams and Seaman, 2001; Hoque, 2008), which consists of five questions rating the intensity of competition for raw materials, technical personnel, selling and distribution, quality and variety of products, and price. On the other hand, Mia and Clarke (1999) measured the intensity of competition by only one Likert-type scale question, but one taking into account all factors, including the number of major competitors, the frequency of technological change in the industry, the frequency of new product introductions, the extent of price manipulations, package deals for customers, access to marketing channels and changes in government regulation or policy. Product diversity follows a composite scale covering physical size, complexity and batch size. A 5-point Likert scale is used to capture the feedback, where 1 refers to 'not at all' and 5 refers to 'to a very great extent'. The size of a firm is typically measured by the number of employees working for the organization, its total assets or its total sales; this study uses the number of employees to measure size. To measure use of information technology and decision-making usefulness, a composite scale is designed on a 7-point Likert scale where 1 means 'strongly disagree' and 7 means 'strongly agree'. Maturity is measured in terms of the age of the company, while the different dimensions of satisfaction are measured using a 7-point Likert scale anchoring 1 for 'very dissatisfied' and 7 for 'very satisfied'. To measure the value of sophistication, a weighted multi-criteria method is applied. The four parameters used as criteria in measuring sophistication are Pool-Driver Quantitative, Pool-Driver Qualitative, Education and Advanced Management Accounting Techniques Adoption, with their respective weights and sub-weights. The methodology yields a value between 1 and 100 for every firm, where a value close to 100 indicates a sophisticated system and a value close to 1 an unsophisticated system. For the application of logistic regression, the dependent variable sophistication is converted to a categorical variable as follows: Unsophisticated System = value between 1 and 50; Sophisticated System = value between 51 and 100. Statistical Tools Used Different descriptive and inferential statistical tools (e.g., regression analysis) are used to test the hypotheses formulated in the study. Correlations between and among the variables are shown to explain the suitability of each construct for regression analysis. A multiple regression is run to understand the relationship of each construct with the level of sophistication. To run the logistic regression, the level of sophistication has been made a categorical variable. Finally, in a separate regression analysis, the study looks for any relationship between satisfaction and sophistication. IV. Analysis and Findings This section presents the analysis and findings of the study in different sub-sections. It begins by explaining the respondents' and corporate profile of the study using descriptive analysis. Then, all the hypotheses formed in the literature review section are tested using regression analysis. Findings of the analysis are presented at the end. Respondents' and Corporate Profile A total of 113 respondents participated in this study. These participants vary in terms of their educational background, years of experience, turnover intention, number of jobs held and their position in the organizational chain of command.
Around 41% of the respondents are professional accountants, while another 50% hold a master's degree. In terms of years of experience, around 78% of the respondents have more than 5 years of experience. A good percentage of the employees seem to be happy with their jobs, which is reflected in the low percentage (18%) of respondents having an intention of switching jobs. Respondents are not frequent job hoppers: only a few (6%) have switched jobs more than five times, and around 78% of the respondents hold positions in either mid-level or top-level management. Like the number of respondents, a total of 113 firms participated in this study, as one respondent responded from each firm. The questionnaire captures the profile of responding firms in terms of total number of years in operation, number of employees, annual turnover and net assets. Around 20 firms out of 113 have been in operation for less than 10 years, and 58% of the firms have fewer than 1,000 employees. In terms of turnover, 68% of the firms report more than Bangladesh Taka (BDT) 100 million, while around 78% of the responding firms have more than BDT 100 million invested in net assets. Correlation Coefficient To measure the strength of the relationship between the relative movements of each pair of variables, a correlation matrix is derived. It shows the relationship between alternative measures of cost system sophistication and the factors affecting sophistication. Use of information technology and competition are both positively correlated (p<0.01) with decision-making usefulness. Size is positively correlated (p<0.01) with sophistication on the 100-point scale, whereas it is negatively correlated (p<0.05) with sophistication classified into two categories. On the other hand, maturity is positively correlated (p<0.05) with size. Multiple Regression Analysis This study uses regression analysis predominantly as a tool for drawing inferences. The correlation coefficients presented above confirm the use of every construct as a separate one. To test the reliability of the scales used, Cronbach's alpha is calculated, and for all the constructs the alpha values exceed 0.70, which confirms the reliability of the scales. Based on the research objectives and conceptual framework, a regression model is developed as below. In this regression model, sophistication (y) is used as the dependent variable, with the other seven constructs that may influence the achievement of sophistication in cost system design and use as explanatory variables. As satisfaction is considered separately, this model does not include satisfaction. The ANOVA and model summary of the regression analysis explain the model fitness, significance and explanatory power of all the constructs together on the dependent variable of the model. The analysis yields a value of .345 for R, the multiple correlation coefficient defining the correlation between the observed values of the response variable and the values predicted by the model. Its square (R²) gives the proportion of the variability of the response variable accounted for by the explanatory variables. A value of .345 for R indicates a weak correlation, and an R² of .119 means only 11.9% of the change in sophistication is accounted for by the explanatory variables collectively.
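For reference, the regression model described above is not written out in the text; under the assumption that the seven constructs enter linearly, it takes the general form y = b0 + b1(cost composition) + b2(competition) + b3(product diversity) + b4(size) + b5(use of information technology) + b6(decision-making usefulness) + b7(maturity) + e, where y is the sophistication score, b0 is the intercept, b1 to b7 are the coefficients estimated from the data, and e is the error term. This is a generic sketch of the model's structure rather than a reproduction of the study's exact specification.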
However, the model is statistically significant (F(7, 105) = 2.030, p < 0.05), so we conclude that at least one of the explanatory variables is related to the level of sophistication. Table 3 below extends the regression analysis with the beta coefficients of each construct and multicollinearity diagnostics, from which the influence of each explanatory variable on the response variable can be assessed. As per the coefficients presented above, only one construct (maturity) out of the seven is statistically significant (p<0.05). Its standardized beta coefficient is .325, indicating a positive relationship between the age of the firm and sophistication. Product diversity, use of information technology, competition and size are positively related to sophistication, though not significantly. On the other hand, cost composition and decision-making usefulness show a negative, also non-significant, relationship with sophistication. Regression estimates become unreliable in the presence of multicollinearity, and thus it is important to confirm that the independent variables are not strongly related to each other. The regression analysis produces two statistics, tolerance and VIF, to report the status of multicollinearity. As per the tolerance and VIF values in the table above, there is no multicollinearity that would be a concern for the regression analysis. Binary Logistic Regression To use binary logistic regression as an extension of the analysis above, the dependent variable of the model (level of sophistication) has been made binary. As mentioned in the methodology section, firms with a value of up to 50 on the 100-point scale have been coded as 'no', meaning not using a sophisticated system, and firms with a value of more than 50 have been coded as 'yes', meaning using a sophisticated system. Like the multiple regression model, this model tests the influence of all seven constructs on the level of sophistication achieved by firms. As per the result of the test, the Chi-Square value is 10.022 on 7 df, which is not significant; this indicates that the variables added to the model do not impact the dependent variable significantly. Other results of the logistic regression analysis are presented below: Source: SPSS Output The predictability of the model is very poor, which is reflected in the large value of the -2 log-likelihood statistic. Cox & Snell R Square and Nagelkerke R Square measure the proportion of variance explained by the predictor variables, which varies from 8.5% to 12.7% as given in Table 9 above. An interesting issue is that the construct 'maturity' is statistically significant, as before. Cost composition and size result in zero betas, showing no influence on sophistication, whereas decision-making usefulness and competition show a negative relationship with the level of sophistication. Using the values from the logistic regression analysis, the Logit model can be written out explicitly; this model can then be used to find the probability that a firm operates a sophisticated system, given its values for all the constructs in the model.
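The logit model referred to above is likewise not reproduced in full; in general form, with the same seven constructs as predictors, it can be written as Logit(p) = ln(p / (1 - p)) = b0 + b1(cost composition) + b2(product diversity) + b3(decision-making usefulness) + b4(use of information technology) + b5(competition) + b6(size) + b7(maturity), from which the probability of operating a sophisticated system is recovered as p = 1 / (1 + e^(-Logit(p))). Only the intercept (-1.829) and the first coefficients are quoted in the worked example that follows, so this expression should be read as a hedged sketch of the model's structure, not the study's estimated equation.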
For a firm with 50% manufacturing overhead costs, a degree of product diversity of 2, a level of use of cost data of 5.75, a level of use of information technology of 6, a level of competition of 4, 300 employees and 21 years in operation, the probability can be computed from the equation, beginning Logit(p) = -1.829 + .000(50%) + .162(2) and continuing with the remaining constructs. Thus, the odds ratio will be Odd1/Odd2, i.e., 0.25199/0.2439 ≈ 1.033, meaning the first firm enjoys about 1.03 times the odds of attaining sophistication compared with the second firm. Satisfaction and Sophistication Based on the conceptual model, a separate regression analysis is done to explore any relationship between satisfaction and sophistication. The model is statistically insignificant, and the values of R and R² are .139 and .019 respectively, indicating very low explanatory power. None of the beta coefficients is statistically significant when the three different dimensions of satisfaction are examined separately, although the beta coefficients are positive. This result indicates that practitioners' level of satisfaction does not influence the sophistication of firms in terms of the adoption of different management accounting practices. V. Conclusions The attainment of SMAP by firms is driven by different contingent factors. Management accounting practices follow a contingency framework, as they are not mandated by law and the practices are not standardized like financial accounting. This study, based on the literature review, has identified eight contingent variables with the expectation that these variables may collectively drive the sophistication initiative of manufacturing firms. The analysis paints a rather worrying picture: all the factors except 'maturity' carry no significance in explaining the level of sophistication achieved by the firms. This finding is very important for understanding the business environment in Bangladesh. The demand for management accounting information is not especially critical to management, and thus managers may not be serious about sophisticated management accounting tools. Rather, the accounting system is mainly designed around the mandatory needs of the market, and management accounting is embedded within the traditional financial accounting and reporting system. The significance of the years-in-operation (age) parameter gives a new dimension to the study of sophistication in a country like Bangladesh, which is grappling with a mass of first-generation firms. As these firms grow older, they try to perform more professionally, which may be a good reason why years in operation influence the level of sophistication. Other factors which theoretically demand sophisticated management accounting systems have proved irrelevant in Bangladesh. The country still needs more maturity on some of the internal and external factors considered in this study in order to instill sophisticated management accounting systems. Very interestingly, practitioners' satisfaction with the system and the job has no impact on sophisticated management accounting systems, which suggests the operation of a particular form of isomorphism. The contextualization of isomorphism theory (DiMaggio and Powell, 1983) in management accounting practice is essentially shaped by coercive, mimetic, or normative pressures. Institutions in a geography, location, sector or industry become more like one another as innovations are broadly diffused (DiMaggio and Powell, 1983).
This study believes that mimetic isomorphism is active in Bangladesh in the field of management accounting practices. Firms usually follow others when choosing particular management accounting tools, and mature firms with sufficient resources are in a position to afford implementing sophisticated management accounting techniques. However, for the wide diffusion of sophisticated management accounting techniques, coercive and normative isomorphisms are important. For coercive isomorphism to operate, firms in a given area would need to face severe competition, which would lead them to implement sophisticated techniques to guide critical decisions. At the same time, to encourage normative isomorphism, professional accounting bodies, other regulators, researchers and academics must play a strong role, which appears to be absent in an emerging economy like Bangladesh.
Entropic analysis of the quantum oscillator with a minimal length The well-known Heisenberg-Robertson uncertainty relation for a pair of noncommuting observables is expressed in terms of the product of variances and the commutator among the operators, computed for the quantum state of a system. Different modified commutation relations have been considered in recent years with the purpose of taking into account the effect of quantum gravity. Indeed, it can be seen that letting [X, P] = iℏ(1 + βP²) implies the existence of a minimal length proportional to √β. The Bialynicki-Birula-Mycielski entropic uncertainty relation in terms of Shannon entropies is also seen to be deformed in the presence of a minimal length, corresponding to a strictly positive deformation parameter β. Generalized entropies can be implemented. Indeed, results for the sum of position and (auxiliary) momentum Rényi entropies with conjugated indices have been provided recently for the ground and first excited state. We present numerical findings for conjugated pairs of entropic indices, for the lowest lying levels of the deformed harmonic oscillator system in 1D, taking into account the position distribution for the wavefunction and the actual momentum. I. INTRODUCTION One of the pillars in the building of quantum physics is the uncertainty principle, which was formulated by Heisenberg [1] and originally given in terms of the product of variances of position and momentum observables as quantifiers of the quantum particles' spreading. Recently, nontrivial relations have been obtained for the sum of variances [2]. Besides, it has been proven that uncertainty relations can be formulated as well in terms of Shannon and Rényi information entropies (see, for instance, [3][4][5][6] and references therein). The possible influence of gravity on uncertainty relations has been proposed recently [7][8][9][10], and a modification of the Heisenberg inequality known as the generalized uncertainty principle (GUP) has been analyzed. The modification of the position-momentum uncertainty relation, which is carried out through a deformation of the typical commutation relation between operators, is linked to the existence of a minimal observable length.
It is worth noting that an experimental procedure to detect these possible modifications has been proposed [11]; however, it is not yet possible to achieve the required precision. We consider here the harmonic oscillator and study the deformed uncertainty relations by appealing to Rényi entropies. In Refs. [12,13], the wavefunctions for some values of the principal quantum number have been given in momentum space and in position space. However, differently from standard quantum mechanics, the presence of gravity implies that the position and momentum space wavefunctions are not related via a Fourier transform; it is instead necessary to consider an auxiliary transformation [14]. We present an entropic analysis in this case, and show a family of inequalities satisfied by Rényi entropies. A. Quantum oscillator wavefunctions with a minimal length The harmonic oscillator (HO) is one of the most relevant physical systems, and it is one of the few quantum systems for which the spectrum and the corresponding wavefunctions are exactly known. In general, the Hamiltonian of the 1D HO is given by H = P²/(2m) + (mω²/2)X², where X and P are such that their commutator gives the c-number iℏ. In the context of the GUP, a deformation of the standard commutation relation is assumed. Here we assume the form [x, k] = iℏ(1 + βk²), with β a positive parameter, which imposes a minimal value for the position uncertainty, ∆X_min = ℏ√β, of the order of Planck's length. We mention that more general deformations (in arbitrary dimensions) have been proposed [7]; however, this will be the focus of further study to be considered elsewhere. Under these assumptions, the Schrödinger equation in the (auxiliary) momentum space has been solved by Pedram [13]. Letting k = tan(√β q)/√β, with q the auxiliary momentum, the wavefunctions φ_n(q) are given by Eq. (1), where N_n is the normalization constant; the associated energy levels take the form E_n = ℏω[(n + 1/2)(√(1 + η²/4) + η/2) + n²η/2], with η a dimensionless parameter proportional to β. Note that in the limit β → 0⁺, or equivalently λ → +∞, the standard case is recovered, as shown in [13]. From Eq. (1) one can compute the position wavefunction through the Fourier transform; in Ref. [13] this has been computed exactly for n = 0 and n = 1. B. Behavior of the sum of Rényi entropies The Shannon entropy in momentum space was calculated analytically by Pedram [13] for the ground state and for the first excited state. Here the Rényi entropy R_α in both momentum-space representations is studied numerically for the ground state and the first five excited ones. The Rényi entropies are given by R_α[φ_n] = (1/(1−α)) ln ∫ |φ_n(q)|^(2α) dq and R_α[φ̃_n] = (1/(1−α)) ln ∫ |φ̃_n(k)|^(2α) dk for the representations of the auxiliary and the actual momentum respectively, where α > 0 and α ≠ 1. Note that, as the wavefunctions ψ(x) and φ(q) are connected through a Fourier transformation, they necessarily satisfy the Maassen-Uffink uncertainty relation [4] for conjugated indices, with 1/α + 1/α* = 2. Although this relation has been improved by considering the probability density governing the measurement process [14], to the best of our knowledge the correction to this inequality taking into account the transformation (4) has not been developed until now, except for the Shannon case [14], which corresponds to the limit α = α* = 1. Notice that, from Eqs. (4) and (6), it follows trivially that R_α[φ̃_n] > R_α[φ_n], i.e. the Rényi entropy in the actual-momentum representation exceeds that in the auxiliary-momentum representation, whenever φ is not a Dirac delta. As an example we show in Table I the behavior of the Rényi entropies corresponding to the ground state and the first five excited ones in both representations of momentum space, for α = 2 and for different values of the deformation parameter β.
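For reference, and keeping the notation generic, the Rényi entropy of a normalized probability density and the position-momentum uncertainty relation for conjugated entropic indices (the lower bound referred to as (7) below) take the standard forms

R_\alpha[\rho] = \frac{1}{1-\alpha}\,\ln\!\int \rho^{\alpha}(u)\,du, \qquad R_\alpha\big[|\psi|^2\big] + R_{\alpha^*}\big[|\phi|^2\big] \;\ge\; -\frac{1}{2(1-\alpha)}\ln\frac{\alpha}{\pi} - \frac{1}{2(1-\alpha^*)}\ln\frac{\alpha^*}{\pi}, \qquad \frac{1}{\alpha}+\frac{1}{\alpha^*}=2,

written with ℏ = 1. These are the standard relations of the entropic-uncertainty literature, quoted here for orientation rather than copied from the paper's own numbered equations; for α = 2/3 and α* = 2 the right-hand side evaluates to ln(3√3 π/2) ≈ 2.100.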
In Table II we show some particular values of the position-momentum Rényi entropy sums for different values of the parameter β and various quantum states of the harmonic system with minimal length, for fixed α = 2/3 and hence α* = 2. Note that, as expected, all values are larger than the lower bound in (7), given by ln(3√3 π/2) ≈ 2.100 for these particular values of the entropic parameters α and α*. III. DISCUSSION In these proceedings we present a numerical analysis of informational measures, namely Rényi entropies and their sum, for the wavefunctions of the 1-dimensional quantum harmonic oscillator, assuming for the position and momentum operators a deformed commutation relation characterized by a parameter β. A nonvanishing deformation parameter implies the existence of a minimal length, which is proposed to be a characteristic of quantum gravity theories. Further findings for arbitrary pairs of entropic indices below the conjugacy curve could also be obtained, and will be presented elsewhere together with a comparison with known lower bounds for the entropies' sum. Future work includes the consideration of other physical systems and/or a more general deformed commutator between position and momentum observables (in D dimensions), focusing on those states that minimize the generalized uncertainty relations.
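A minimal numerical sketch of how such Rényi entropy sums can be evaluated on a grid is given below. It is an illustration only: it uses the ordinary (β → 0) harmonic-oscillator ground state as a stand-in density rather than the exact deformed-oscillator wavefunctions of Refs. [12,13], and the map k = tan(√β q)/√β between auxiliary and actual momentum is the representation assumed earlier in the text.

import numpy as np

# Renyi entropy R_alpha of a normalized density rho sampled on a grid u.
def renyi_entropy(rho, u, alpha):
    if abs(alpha - 1.0) < 1e-12:
        p = np.where(rho > 0, rho, 1.0)      # Shannon limit
        return -np.trapz(rho * np.log(p), u)
    return np.log(np.trapz(rho**alpha, u)) / (1.0 - alpha)

beta = 0.1                                   # illustrative deformation parameter
qmax = np.pi / (2.0 * np.sqrt(beta))         # auxiliary momentum lives in (-qmax, qmax)
q = np.linspace(-0.999 * qmax, 0.999 * qmax, 20001)

# Stand-in density: ordinary HO ground state (m = omega = hbar = 1),
# NOT the exact minimal-length oscillator wavefunction.
rho_q = np.exp(-q**2) / np.sqrt(np.pi)
rho_q /= np.trapz(rho_q, q)                  # renormalize on the truncated interval

# Actual momentum k = tan(sqrt(beta) q)/sqrt(beta); the density transforms
# with the Jacobian dq/dk = 1/(1 + beta k^2).
k = np.tan(np.sqrt(beta) * q) / np.sqrt(beta)
rho_k = rho_q / (1.0 + beta * k**2)

alpha, alpha_star = 2.0 / 3.0, 2.0           # conjugated pair: 1/alpha + 1/alpha* = 2
entropy_sum = renyi_entropy(rho_q, q, alpha) + renyi_entropy(rho_k, k, alpha_star)
print(round(entropy_sum, 4))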
CONSTRUCTING AN INTELLIGENT PATENT NETWORK ANALYSIS METHOD Department of Cooperative Economics, Feng Chia University, 100, Wen-Hwa Road, Seatwen, Taichung 40724, Taiwan; Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 43, Sec. 4, Keelung Road, Taipei 106, Taiwan; Department of Information Management, Chinese Culture University, 55, Hwa-Kang Road, Yang-Ming-Shan, Taipei 11114, Taiwan INTRODUCTION Patents, which describe the main contents of technological inventions, contain considerable technical knowledge. These documents are significant sources of technological data and play a critical role in the advancement and diffusion of technology (Horie, Maeno, & Ohsawa, 2007; Liu & Luo, 2007a). Furthermore, patent analysis transforms patent data into systematic and valuable information that is helpful for managing the research and development process, exploring technological trends, tracking technological development, and identifying technology plans (Liu & Luo, 2007b; Liu & Yang, 2008; Chang, Wu, & Leu, 2010). It is considered to be a useful vehicle for technology management. Traditionally, patent bibliometric analysis has been most commonly used to implement patent analysis (Narin, 1994). Patent bibliometric analysis utilizes bibliometric data from patent documents to perform statistical analysis and citation analysis. Statistical analysis employs bibliometric data such as the number of patents, country, assignee, inventor, and so forth; statistical methods are then used to analyze these data. Citations are the counts of other patents or non-patent literature cited in the patent documents, and citation analysis uses these citations to find important patents and develop other scientific linkages. Patent bibliometric analysis, albeit easy to understand and simple to use, is limited in the scope of analysis and the richness of potential information (Yoon & Park, 2004).
To overcome these limitations, Yoon & Park (2004) suggest an advanced method of patent analysis, called patent network analysis. This method uses several patent keywords as input to produce a visual patent network. The network demonstrates the overall relationships among all patents, so that analysts are able to comprehend the overall structure of a patent database intuitively and discover the key patents in the patent network. Although patent network analysis possesses relative advantages over traditional methods of patent analysis, it is subject to several crucial drawbacks. First, the search for the patent documents to be studied relies on the subjective judgments of analysts. Second, the collection of patent documents is a time-consuming task because it requires an exhaustive search of patent databases; the current method lacks a set of systematic and convenient patent searching procedures, and as a result the dataset of patent documents being studied is not complete. Third, the relevant patent keywords used in the current method are selected by technical experts. In reality, technical experts often use different terminologies to describe the same technology (Li, Wang, & Hong, 2009). Even though these experts have rich experience in the field of technology being studied, they have great difficulty avoiding the subjectivity involved in the extraction of patent keywords; if the keywords are not chosen properly, the visualization of the patent network will be distorted. Finally, the current method assumes that the weight of each patent keyword is equal. However, the individual weights of patent keywords differ from each other, and it is necessary to determine the priorities among diverse patent keywords and the relative weighted value of each keyword. In order to resolve all of the aforementioned problems, it is necessary to construct an automated technique for improving the current method. This study proposes applying artificial intelligence techniques to develop an intelligent patent network analysis method. Artificial intelligence is usually an excellent solution when facing the abundance of current patent documents. When a quick and effective search for the most useful and important key patents is required, the related techniques of artificial intelligence play a significant role. For example, these techniques can swiftly process and categorize large amounts of patent documents, automatically identify and extract keyword sets, and broadly and objectively select keywords that are synonyms.
Accordingly, artificial intelligence techniques for assisting patent analysts in patent processing and analysis are in great demand. A previous study developed a framework for an automatic patent analysis method (Wu & Yao, 2012); however, the issue of keyword weights was not addressed and the utility of the method was not verified. Thus, this study extends the previous framework to propose an intelligent patent network analysis method and verifies its utility. The proposed method is useful for making the visual patent network more substantial, which in turn improves the efficiency and effectiveness of patent analysis. That is the purpose of this study, and the details are as follows. First, in order to collect a complete dataset of patent documents, this study proposes a set of systematic patent searching procedures by introducing an ontology methodology for automatic document classification. This procedure is very convenient in terms of search time and cost. Second, this study applies the enhanced term frequency - inverse document frequency (ETF-IDF) technique to carry out the information retrieval task of extracting patent keywords automatically from the selected patent documents. Third, association rules, which combine the Viterbi algorithm with the Apriori algorithm, are used to determine the weighted value of each keyword. Finally, the sets of patent keywords are employed as the input base for generating a precise visualization of the patent network that contributes to implementing the patent analysis. In particular, patents in the technological field of the Carbon Nanotube Backlight Unit (CNT-BLU) are analyzed to verify the utility of the proposed method. Patent network analysis method Network analysis, by emphasizing the relationships among the social positions within a system, provides a powerful brush for painting a systematic picture of global social structures and their components (Knoke & Kuklinski, 1982). This analysis is capable of showing the structure of edges among nodes, where nodes are the given entities in the network. The relationships between nodes and the location of individual nodes in the network provide ample information and assist analysts in understanding the overall structure. Furthermore, network analysis utilizes quantitative techniques to generate relevant indexes that clarify the characteristics of the whole network and show the position of individuals or groups in the network structure (Wasserman & Faust, 1994). Even though network analysis was developed initially for sociological studies, it is widely utilized in other research areas (Leoncini, Maggioni, & Montresor, 1996; Cross, Borgatti, & Parker, 2001; Calero, Buter, Valdés, & Noyons, 2006; Shin, Lee, & Park, 2006). Recently, Yoon & Park (2004) applied the concept of network analysis to patent analysis and proposed patent network analysis. This method utilizes the frequency of keywords' appearance in patent documents as the input base to generate a patent network. The relationships among patents can be demonstrated visually in this analysis, and analysts are able to comprehend the overall structure of the patent network. Moreover, this method produces several meaningful indexes which can help analysts to identify the relative importance of individual patents and to explore technological trends (Chang, Wu, & Leu, 2010).
The main purpose of this study is to propose an intelligent patent network analysis method based on artificial intelligence techniques in order to develop a visually sophisticated patent network. The relevant artificial intelligence techniques are described in the next section. Artificial intelligence techniques Artificial intelligence is the field of computer science focused on enabling computers to engage in behaviors that humans consider intelligent through automatic judgment mechanisms (Crevier, 1993). It attempts to achieve the goal of giving the computer human intelligence through intelligent algorithms. Today, after the advent of the computer and 50 years of research into artificial intelligence programming techniques, the dream of smart machines is becoming a reality (Yang, 2007). Researchers are creating systems that can mimic human thought, understand speech, and perform countless other feats never before possible. Recently, artificial intelligence has been developed in many applied areas (Yang & Liu, 1999). A prominent branch of artificial intelligence research is the highly technical and specialized field of information retrieval, which can utilize techniques such as fuzzy theory, natural language processing (NLP) and so on to automatically process the abundance of information on the internet. Among the various techniques of data mining, Apriori is a classic algorithm for learning association rules, which can uncover latent relations between different items (Agrawal, Imielinski, & Swami, 1993; Yang & Liu, 1999). The Apriori algorithm is designed to process large numbers of transactions and to operate on databases that contain transactions, such as collections of items bought by consumers or the details of website visits. It attempts to find the frequent itemsets whose items appear together in at least a minimum number of transactions, defined by a cutoff (support) threshold. The Apriori algorithm puts association rules into practice and represents an unsupervised learning method that attempts to capture associations among groups of items. This technique can be applied in the intelligent method suggested in this study in order to quickly and automatically handle complicated patent documents. Regarding automatic keyword identification, the term frequency - inverse document frequency (TF-IDF) methodology provides an excellent algorithm that computes an appropriate weight for each keyword (Salton & McGill, 1983). The TF-IDF technique is usually used to weigh each word in a text document based on how unique it is; it captures relevant keywords, text documents, and particular categories. Our study combines the TF-IDF technique with linguistic recognition rules provided by experts, in order to single out long words and specialized vocabulary with a particular linguistic purpose and give them higher weighting. The proper weightings of all keywords are then counted automatically after adjustment through the linguistic rules, and the keyword set of each patent document is formed. Finally, we use association rules to compare all keyword sets of the patent documents in order to delete unsuitable vocabulary from the keyword sets, which automatically strengthens the final suitable and relevant keywords of all patent documents. Using the above information, several artificial intelligence techniques are applied to construct our intelligent patent network analysis. The detailed methodology is explained in the next section.
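As a concrete illustration of the frequent-itemset step that underlies such association rules, the following minimal Python sketch mines frequent keyword combinations from a handful of hypothetical keyword sets; the transactions, the support threshold and the code itself are illustrative assumptions, not the system actually built in this study.

from itertools import combinations

# Minimal Apriori-style frequent itemset mining over keyword sets.
# The keyword "transactions" below are hypothetical examples, not data
# taken from the CNT-BLU patent collection.
transactions = [
    {"nanotube", "cathode", "emission"},
    {"nanotube", "backlight", "phosphor"},
    {"nanotube", "cathode", "phosphor"},
    {"backlight", "phosphor"},
]

def apriori(transactions, min_support=0.5):
    n = len(transactions)
    support = lambda itemset: sum(itemset <= t for t in transactions) / n
    items = sorted({i for t in transactions for i in t})
    # Start with frequent 1-itemsets.
    frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    result = {fs: support(fs) for fs in frequent}
    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets into k-itemsets.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        frequent = [c for c in candidates if support(c) >= min_support]
        result.update({fs: support(fs) for fs in frequent})
        k += 1
    return result

for itemset, sup in sorted(apriori(transactions).items(), key=lambda x: -x[1]):
    print(set(itemset), round(sup, 2))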
METHODOLOGY AND PROCEDURE The main purpose of this study is to propose an automatic, intelligent patent network analysis method. In this section, the methodology of the intelligent patent network analysis presented in this study is explained. Figure 1 shows the overall procedure of the proposed method. It contains four major stages: searching and collecting patent documents, extracting patent keywords, determining the weight of each patent keyword, and generating a sophisticated visualization of the patent network. First, this study exploits an ontology for the automatic document classification process, identified by the patent keyword agents, to extract the feature subset documents. This automated technique is used to search, filter and categorize the relevant patent documents in order to collect a complete dataset of patent documents. Next, the enhanced term frequency - inverse document frequency (ETF-IDF) technique is executed to elicit the patent keywords automatically from the selected patent documents. Moreover, the Viterbi algorithm is traditionally used to detect keywords through the HMM configuration (Cho, Kim, & Lee, 2010). Each path in the decoder is a sequence of keywords and garbage elements; the decoder finds scores for all possible paths, and the one with the highest score is selected as the output for the keyword set. Therefore, by using association rules that combine the Viterbi algorithm with the Apriori algorithm, the intelligent system produces a weighted value for each patent keyword in every patent document and further strengthens the keywords that appear repeatedly across different patent documents, in order to derive the truly appropriate keywords. Finally, the sets of weighted patent keywords are employed as the input base for generating a sophisticated patent network in order to implement patent analysis effectively. In order to assess the utility of the intelligent patent network analysis method, patents in the field of the Carbon Nanotube Backlight Unit (CNT-BLU), an emerging nanotechnology, are analyzed. The CNT-BLU is a new product that uses Carbon Nanotubes (CNT) in the design of a back light unit for Thin Film Transistor Liquid Crystal Displays (TFT-LCD). It has the advantages of low cost, lower power consumption, no need for optical films, no toxic chemicals, and superior color performance (Kim & Yoo, 2005). CNT-BLU was selected as the example in this study for the following reasons. First, CNT-BLU is an emerging nanotechnology that was developed to meet urgent demands for flat panel displays. Second, CNT-BLU is suitable for exploring technological trends because of its rapid technical progress. Finally, the patent dataset of CNT-BLU is a convenient size for analyzing technological information and mapping the patent network. The four stages of the proposed method are described in more detail as follows.
Selection of patent documents An ontology is a formal representation of knowledge, in artificial intelligence and knowledge management, as a set of concepts with their attributes within a domain, together with the relationships between those concepts (Noy & McGuinness, 2001). An ontology is used to systematically understand the entities within a domain and may further be used to automatically process the information of this domain, such as documents. Therefore, an ontology, as a "formal and definite specification of a shared epistemology", provides a shared knowledge architecture that can effectively discover and organize a domain, with definitions of objects, notions and relations used to classify much of the information on the internet and build up the semantic web (Brank, Grobelnik, Frayling, & Mladenic, 2002). This study applies an ontology tree relevant to the field of patented technology being studied, in this case CNT-BLU, to automatically locate the relevant patent documents from the United States Patent Classification (UPC) database (United States Patent and Trademark Office, 2011). A purely keyword-based search to discover all related documents often cannot actually reflect the true meanings of the patent documents; the concept-based document searching method can instead be adopted to correctly classify the patent documents that belong to the field of technology being studied. This study uses the Protégé-2000 software (Bottou & Vapnik, 1992) to set up the ontology patent tree. Many document retrieval technologies in the artificial intelligence field seek to improve the accuracy of document classification as an important focus (Guarino, 1998). This study combines the Salton method, which automatically extracts representative keywords from documents, with an intelligent document sorting mechanism (Nowak & Wakulicz, 2005). The Salton method combines both approaches to weighting by looking at both inter-document frequencies and intra-document frequencies. That is, by considering both the total frequency of occurrence of a term in a document and its distribution over all documents, we can obtain proper and exact term weighting values. Then, using linguistic rules, we automatically extract the representative keywords from all patent documents and further fix the proper weighting of each keyword in the keyword set. This is our improved TF-IDF algorithm (ETF-IDF). Finally, we utilize the association rules to assess the final word components in the keyword set of each patent document (Nowak & Wakulicz, 2005). By referencing the UPC classification to discover the category and layer of a patent document, this study is able to further filter the patent documents being searched. Subsequently, in order to improve the precision of the patent document classification, this study puts the resulting documents through a patent classification process using the patent tree. Through this series of searching procedures, 97 relevant patent documents concerning CNT-BLU technology were identified, ranging from U.S. patent number 6062931 to 7169005. The patent numbers and titles of these patent documents are shown in the Appendix. Because the patent numbers are too long to be usable for subsequent analysis, the patents were sorted by patent number and labeled with serial numbers from 1 to 97.
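A rough sketch of the idea behind this kind of concept-based filtering is shown below: each node of a small, hypothetical ontology tree is associated with a set of concept terms, and a document is assigned to the node whose terms it matches best. The ontology fragment and matching rule are invented for illustration and do not reproduce the Protégé-2000 ontology used in the study.

# Hypothetical CNT-BLU ontology fragment: node -> concept terms.
ONTOLOGY = {
    "field_emission_source": {"carbon nanotube", "cathode", "emitter", "field emission"},
    "backlight_module": {"backlight", "light guide", "optical film", "brightness"},
    "phosphor_layer": {"phosphor", "fluorescent", "anode", "luminance"},
}

def classify(document_text):
    """Assign a document to the ontology node with the most concept-term matches.
    Returns (best_node, number_of_matches); (None, 0) if nothing matches."""
    text = document_text.lower()
    scores = {node: sum(term in text for term in terms)
              for node, terms in ONTOLOGY.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] > 0 else (None, 0)

doc = ("A field emission backlight unit comprising a carbon nanotube cathode "
       "and a phosphor coated anode plate.")
print(classify(doc))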
Delete verbose words and tag words in the patent article After selecting the related patent documents in the specific field, as described above, the next stage extracts all possible special-meaning words from these patent documents. In order to correctly process the text segmentation of English patent documents, this study utilizes stanfordLexParser-1.6 as the tool for processing English sentences. One of the great advantages of stanfordLexParser-1.6 is that it works well for the morphological restoration of any word and for syntactic analysis. This study introduces stanfordLexParser-1.6 to process the three main patent contents -- Abstract, Claims, and Description -- in each document. The detailed steps in this stage are shown in Figure 2 and are implemented as follows:
Step 1: Delete verbose words. This step segments each sentence according to different punctuation marks, e.g., commas, full stops and periods. Then, it constructs a syntax representation tree and deletes all extra words in each sentence.
Step 2: Word tagging. In this step, the stanfordLexParser-1.6 program performs the word tagging. We added many similar domain words to its lexicon as references to enhance the tagging result and obtain a syntax parse tree (Lyon, 1999).
Step 3: Punctuation mark processing. Because stanfordLexParser-1.6 segments sentences by punctuation marks, better results can be achieved if the main types of marks are handled properly. Three types of punctuation marks may change the structure of sentences and should be refined during processing to improve the understanding of contextual meaning in a sentence.
Step 4: Analysis of the descriptive sentences. The relationships of different parts of speech (POS) can be calculated from their frequencies to disclose the syntax of partial structures in descriptive sentences. In particular, the POS of words are analyzed following the major component keyword (MCK). The top-10 frequencies of the POS samples are shown in Table 1. Note that the frequency of a POS is based on statistics over about 9,000 sentences in the selected patent documents. In this study, we select only words with the POS Na (noun), Nc (place noun), and VH (intransitive verb) for further study.
Enhanced term frequency - inverse document frequency (ETF-IDF) and context recognizing rules In this study, we focus on amending the term frequency - inverse document frequency (TF-IDF) technique to strengthen the more important keywords, which should have higher weighting values. The ETF-IDF algorithm is thus upgraded from TF-IDF by considering the relative importance of each keyword in each patent document. TF-IDF is the most general weighting technique applied to text categorization in information retrieval. The TF-IDF function computes the weight of each vector component (each relating to a word of the vocabulary) of each document on the following basis. First, it incorporates the word frequency in the document: the more a word appears in a document (i.e., its term frequency (TF) is high), the more significant it is estimated to be in this patent document. The inverse document frequency (IDF), in turn, measures how infrequent a word is across the whole patent document set, and its value can be reasonably estimated.
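To make the TF and IDF components just described concrete, here is a small Python sketch of a basic TF-IDF weight computed over hypothetical keyword lists; it illustrates the general weighting idea only, not the exact ETF-IDF adjustment with linguistic rules described in this study.

import math

# Basic TF-IDF weighting over hypothetical keyword lists (one list per patent).
docs = [
    ["nanotube", "cathode", "nanotube", "emission"],
    ["backlight", "phosphor", "nanotube"],
    ["backlight", "display", "phosphor"],
]

def tfidf(term, doc, docs):
    tf = doc.count(term)                 # term frequency in this document
    if tf == 0:
        return 0.0
    df = sum(term in d for d in docs)    # number of documents containing the term
    return tf * math.log(len(docs) / df) # high when frequent here but rare overall

for term in ("nanotube", "backlight", "display"):
    print(term, [round(tfidf(term, d, docs), 3) for d in docs])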
Hence, if a word is very frequent in a document set, its IDF indicates that it is not particularly representative of any single document because it occurs in most patent documents; stop words are a typical example. On the contrary, if a word is infrequent in the document set, it is considered to be very relevant for the documents of the field in which it appears. Hence, by using frequency counting, TF-IDF can identify the patent keywords and reduce some mistakes in the keyword filtering process. Although the TF-IDF method can identify keywords from a patent document, it cannot ensure that the selected keywords are the most representative professional terms. In other words, patent keywords passed through our ETF-IDF filtering process are more suitable and more truly representative, so the enhanced TF-IDF algorithm is used to remedy these drawbacks of the original TF-IDF.

The ETF-IDF counts the frequency of each word in order to retrieve the meaningful words and compares a query vector with a document vector using a similarity or distance function, such as the cosine similarity function. There are several variants of TF-IDF. The following variant, used by Yang & Liu (1999), has been adopted in many experiments:

Weight_{t,d} = (1 + log tf_{t,d}) × log(n / x_t), if tf_{t,d} ≥ 1; Weight_{t,d} = 0, otherwise, (1)

where tf_{t,d} is the frequency of word t in document d, n is the number of documents in the text collection, and x_t is the number of documents in which word t occurs. Normalization to unit length is generally applied to the resulting vectors (unnecessary with KNN and the cosine similarity function).

To continue with the next step, this study discovers the real meaning of a word in context and the importance of different keywords by further analyzing the syntactical relationships within the filtered word set. After several rounds, this approach can deduce the context recognizing rules used to analyze larger sets of patent documents. These context recognizing rules help upgrade the accuracy of the selected keywords. The detailed steps are described as follows:

Step 1: Problem setting
This study addresses the problem of automatic extraction of semantic similarity relations among lexical items in relational form, from which fine-grained hierarchical clusters are obtained in the patent tree. In order to restrict the vocabulary and word ambiguity as well as to utilize the information in abundant patent texts, this processing is confined to corpora from specific patent domains. This restriction is acceptable in the framework of Natural Language Processing (NLP) systems, which usually operate on sub-languages and are interested only in domain-specific word meanings. Therefore, this process aims at developing a method applicable to every domain for which specific corpora are available in order to extract domain-independent word meaning relations. Thus, this process can also provide the semantic relations of the filtered keywords in relevance to thematic domains.
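For illustration, the following minimal sketch computes Eq. (1)-style TF-IDF weights and compares documents with the cosine similarity mentioned above. The toy sentences, the whitespace tokenization, and the particular log-based variant are assumptions for demonstration; the study's ETF-IDF additionally applies linguistic rules and association rules that are not reproduced here.

```python
import math
from collections import Counter

def tfidf_weights(documents):
    """Eq. (1)-style weights: (1 + log tf) * log(n / x_t) when tf >= 1, else 0."""
    n = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    doc_freq = Counter()
    for tokens in tokenized:
        doc_freq.update(set(tokens))          # x_t: documents containing word t
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({t: (1 + math.log(tf[t])) * math.log(n / doc_freq[t]) for t in tf})
    return weights

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["carbon nanotube cathode emits electrons toward the phosphor anode",
        "a backlight unit uses a fluorescent phosphor layer on the anode plate",
        "a binder fixes the nanotube paste onto the cathode electrode"]
w = tfidf_weights(docs)
print(round(cosine(w[0], w[1]), 3), round(cosine(w[0], w[2]), 3))
```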
N-gram methods, which share the same perspective, focus on fast processing of large corpora and consider as context only immediately adjacent words, without exploiting medium-distance word dependencies (Venkataraman, 2001). Because large corpora are available only for a few domains, this step aims at developing a method for processing small or medium-sized corpora, exploiting as much as possible the contextual information rich in semantic restrictions. The method is driven by the observation that, in constrained domain corpora, the vocabulary and the syntactic structures are limited and that small- or medium-distance word or phrase patterns are often used to express similar facts. Stock market financial news and Modern Greek are used as the domain and language test cases, respectively. Throughout the paper, examples taken from English corpora are also used.

Step 2: Context similarity estimation
By counting the number of occurrences of every semantic token found in the corpus, a frequency threshold below which no semantic clustering is attempted can be defined. Therefore, only Frequent Semantic Entities (FSE) are subjected to clustering (except the FSEs represented in the corpus by known patterns), while all but the rarest semantic tokens are used as clustering parameters. The corresponding frequency thresholds in the present experiments were set to 20 and 10, respectively, in order to acquire sufficient contextual data for every FSE while constraining computational time. Ideally, any word appearing at least twice in the corpus should be used as a context parameter. Definite determiners and verb auxiliaries are excluded from the processing because they have no semantic connection with their head words, while pronouns are handled as semantically empty words.

Through the above processes, a total of 12 patent keywords were automatically extracted from the selected patent documents. Experts who work in the field of CNT-BLU then reviewed these keywords in order to confirm the correctness of the automatic extraction. Consequently, all of the representative keywords with important technical features were included: "nanotube", "backlight", "display", "emission", "vacuum", "electrode", "cathode", "anode", "phosphor", "thin film", "binder", and "fluorescent".

Determination of the weight of each patent keyword

The conventional approach to keyword detection is Viterbi decoding through an HMM configuration (Cho, Kim, & Lee, 2010). Each path in the decoder is a sequence of keyword and garbage elements. The decoder computes scores for all possible paths, and the one with the highest score is selected as the output. This score is related to the joint probability of the path and the feature vectors. This scoring approach concerns the keyword spotting task: the score is a global score estimated by accumulating all likelihoods over the whole expression.
The score is not normalized with respect to the probability of the acoustic observation and thus is relative to the particular acoustic observation space (Ketabdar, Vepa, Bengio, & Bourlard, 2006). For example, it can be related to the length of the utterance, the length and number of keywords and garbage elements, the numerical range of the evidence values, etc. The values of these scores are penalized by changing keyword and garbage entrance penalties, which are the effective spotting thresholds in this approach. There is no meaningful interpretation for the entrance penalty values, and they should be adjusted empirically to optimize the performance criteria. This implies that for each keyword there should be a sufficiently large development or training set. It would be ideal if we could find a reasonable threshold based on keyword characteristics, such as length, which can be known a priori or easily estimated or measured instead of adjusted on a development set.

The Apriori algorithm is an influential algorithm for mining frequent itemsets for Boolean association rules (Agrawal, Imielinski, & Swami, 1993; Yang & Liu, 1999). In the fields of computer science and data mining, Apriori is a classic algorithm for learning association rules. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers or details of website visits). The algorithm attempts to find subsets which are common to at least a minimum number C (the cutoff, or confidence threshold) of the itemsets. In other words, Apriori uses a "bottom-up" approach, where frequent subsets are extended one item at a time, a step known as candidate generation, and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a hash tree structure to count candidate itemsets efficiently. Through the above steps, the patent keyword set containing the individual weighted value of each keyword is automatically derived and shown in Table 2.

Generation of the patent network

In this stage, several techniques are employed to generate the patent network, as detailed in the following steps:

Step 1: Count the occurrence frequency of each keyword in each patent document, and multiply the weighted value of each keyword by its occurrence frequency to generate the weighted occurrence frequency of keywords in each patent document; for example, p_11 is the weighted occurrence frequency of the first keyword in Patent 1.

Step 2: Use the Euclidean distance to calculate the distances among the patents and to establish the relationships among them. The Euclidean distance value d_ik^E between the keyword vectors of patents i and k is computed as

d_ik^E = sqrt( Σ_j (p_ij − p_kj)² ),

where p_ij is the weighted occurrence frequency of keyword j in patent i.

Step 3: Transform the real values of the d^E matrix into the standardized values of the s^E matrix in order to graph the patent network in the next procedure.

Step 4: Each cell of the s^E matrix must be transformed into a binary value, 0 or 1, according to whether it exceeds the cut-off value q. The resulting I matrix contains binary values in which I_ik equals 1 if patent i is strongly connected with patent k, and I_ik equals 0 if patent i is weakly connected with patent k or not connected at all. That is, if the s_ik^E value is smaller than the cut-off value q, the connectivity between patent i and patent k is regarded as strong, and the I_ik value is set to 1.
Otherwise, the connectivity is considered weak, and the I_ik value is set to 0. After trying numerous cut-off values, q = 0.10 was chosen, which means that I_ik equaled 1 if s_ik^E was smaller than 0.10; otherwise I_ik equaled 0. Consequently, the binary matrix I was built for the implementation of the network analysis. The patent network was drawn using UCINET 6.0 (Borgatti, Everett, & Freeman, 1999) and is shown in Figure 3.

The interconnected set contains 84 patents and the relationships among these patents. It represents the focal point of the visual patent network and provides much information regarding the production and application of CNT-BLU. On the other hand, the isolated set includes the other 13 patents, which are quite divergent from the area of CNT-BLU. Thus, these inventions are excluded from the patent network through the above analysis process.

In the patent network, several patents that are closely located in the central position may represent the key technology in the field of CNT-BLU. In order to examine the structure of the network, the technology centrality index (TCI) can be calculated to identify the most important patents. The formula for calculating the TCI of patent i is

TCI_i = r_i / (n − 1), where r_i is the number of ties of patent i and n denotes the number of patents.

This measures the relative importance of a subject patent by calculating the density of its linkage with other patents. That is, the higher the TCI, the greater the impact on other patents. The TCI can be used to identify the influential patents in the field of the technology being studied. Moreover, detailed information on these influential patents can be obtained, and technological implications can be deduced from this information as well.

Table 3 shows seven relatively important patents in the patent network with high TCI values, including No. 32, 13, 55, 29, 14, 11, and 71. The TCI values of these patents are all above 0.5 and far ahead of the other patents. The core technology and developing trends in CNT-BLU were grasped by analyzing these patents in this study. Specifically, the core technologies focus on three main processes for making a CNT-BLU: the anode plate, the cathode plate, and the assembly of cathode and anode. Furthermore, the technological trend regarding the process of CNT-BLU manufacturing is CNT paste printing.

CONCLUSIONS

This study constructs a novel patent analysis method, called the intelligent patent network analysis method, to make a precise visual network. Based on artificial intelligence techniques, this study proposes a detailed procedure for generating an intelligent patent network. First, this study utilized the concept of ontology to search and categorize relevant patent documents in order to collect a complete dataset of patent documents. Second, through use of the enhanced term frequency - inverse document frequency (ETF-IDF) technique, reliable patent keywords suitable for further analysis were extracted. Third, association rules were used to determine the weighted value of each keyword. Finally, the sets of patent keywords were employed as the input base for generating a sophisticated patent network. In order to assure the utility of the proposed method, the patents of CNT-BLU technology were analyzed at each stage as described above. Several contributions regarding academic and practical implications are suggested as follows.

For academics, the contribution of this study is significant in terms of the methodology of patent analysis.
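As a recap of the network-generation procedure (Steps 1-4 and the TCI) described in the previous section, the following minimal sketch runs those operations on a toy weighted keyword matrix. The matrix values and the min-max standardization used to obtain s^E are assumptions for illustration; only the cut-off q = 0.10 and the TCI definition follow the text.

```python
import numpy as np

# Illustrative weighted keyword-occurrence matrix P (patents x keywords):
# P[i, j] = keyword weight * occurrence frequency of keyword j in patent i.
# The numbers are made up; the study uses 97 patents and 12 keywords.
P = np.array([[0.8, 0.1, 0.4],
              [0.7, 0.2, 0.5],
              [0.1, 0.9, 0.0],
              [0.6, 0.1, 0.6]])

# Step 2: pairwise Euclidean distances d^E between patent keyword vectors.
diff = P[:, None, :] - P[None, :, :]
d_E = np.sqrt((diff ** 2).sum(axis=-1))

# Step 3: standardize d^E into s^E. Min-max scaling over the off-diagonal
# entries is assumed here; the paper does not spell out its standardization.
off_diag = ~np.eye(len(P), dtype=bool)
s_E = (d_E - d_E[off_diag].min()) / (d_E[off_diag].max() - d_E[off_diag].min())

# Step 4: binarize with the cut-off q -- a small standardized distance is a strong tie.
q = 0.10
I = ((s_E < q) & off_diag).astype(int)

# TCI_i = r_i / (n - 1): ties of patent i normalized by the number of other patents.
n = len(P)
tci = I.sum(axis=1) / (n - 1)
print(I)
print(np.round(tci, 2))
```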
Primarily, this study applies artificial intelligence techniques to modify current practice and proposes a rigorous method to make the visual network more sophisticated. The intelligent patent network analysis method provides a procedure for searching patent documents, extracting patent keywords, and determining the weight of each patent keyword in order to generate a precise visualization of a patent network. In this study, the effectiveness of the intelligent patent network has been verified by analyzing the patents of CNT-BLU technology. Compared with current methods, the proposed method offers great improvements in terms of patent search, information extraction, visualization, and analysis.

For practical implications, the core technology and technological trends for CNT-BLU have been discovered through the use of the proposed method in this study. The practical application of the method was fully demonstrated. Thus, the intelligent patent network analysis method is valuable to the practical affairs of engineers and scientists. It enables them to intuitively understand the overview of a set of patents and to identify the developmental trends of critical technologies. Specifically, engineers and scientists are able to uncover significant technological information and grasp meaningful technological insights from the patent network.

Despite the above advantages, the proposed method has some challenges. For example, inevitable errors in the results of patent text categorization probably exist that would lead to the extraction of incorrect keywords. To resolve this problem, the automatic categorization results of the patent documents should be reconfirmed; that is, a mixed solution should be adopted that blends artificial intelligence and human intelligence to promote classification accuracy.

Figure 1. The overall procedure of the intelligent patent network analysis method
Figure 2. The steps of extracting words from patent documents
Figure 3. Patent network in the field of CNT-BLU
Table 1. The top-10 frequencies of parts-of-speech (POS)
Table 2. Patent keyword set in the field of CNT-BLU (Note: the sum of the weighted values is equal to one)
Table 3. TCI values of the relatively important patents in the patent network
7,490
2012-11-12T00:00:00.000
[ "Computer Science", "Engineering" ]
Effect of Pretreatment with Sulfuric Acid on Catalytic Hydrocracking of Fe/AC Catalysts

Activated carbon (AC) was modified with H2SO4 and used as a catalyst support. The Fe2S3/AC-T catalyst was prepared by a deposition-precipitation method and used to catalyze the hydrocracking of a coal-related model compound, di(1-naphthyl)methane (DNM). The properties of the catalyst were studied by N2 adsorption-desorption, X-ray diffraction, and scanning electron microscopy. The results showed that ferric sulfate and acidic centers had a synergetic effect on the hydrocracking of DNM when Fe2S3/AC-T was used as the catalyst, and the optimal Fe loading is 9 wt.%. Hydroconversion of the extraction residue from Guizhou bituminous coal was also studied using Fe2S3/AC-T as the catalyst. The reaction was conducted in cyclohexane under 0.8 MPa of initial hydrogen pressure at 310 °C. The reaction mixture was extracted with petroleum ether and analyzed by GC/MS. A number of organic compounds falling into the categories of benzene and naphthalene homologues were detected, suggesting that the catalyst can effectively catalyze the cleavage of C-C bridged bonds.

Introduction

As an important chemical process, direct coal liquefaction (DCL) could be a feasible option for directly converting coals into liquid fuels or chemicals [1,2]. Nowadays, the rapid decline of petroleum reserves stresses the importance of coals for covering the shortage of organic chemical feedstock [3,4]. Compared to existing DCL performed at high temperatures, low-temperature DCL should be more promising in view of the fact that the formation of gaseous products and coke can be considerably reduced at low temperatures [5][6][7].

The catalyst is one of the key issues in the DCL process. Considerable efforts [8][9][10][11][12][13][14][15] have been devoted to the catalytic performance of metal-based (Fe and Ni) catalysts for DCL or for the reactions of coal-related model compounds. It is generally accepted that catalysts can significantly promote coal pyrolysis by reducing the pyrolysis activation energy and promote the formation of active hydrogen atoms (AHAs) by facilitating H2 dissociation [16,17]. The cleavage of C-C bridged bonds in coals, which is very important for DCL, begins with the secondary distribution of AHAs in the whole reaction system [9,15,17]. Iron sulfides are commonly used catalysts for DCL because of their easy availability. According to the literature [18,19], FeS2 thermolysis to Fe1-xS and FeS2 regeneration from the reaction of Fe1-xS with H2S facilitate the continuous formation of AHAs. In other words, the cycle of FeS2 decomposition and regeneration contributes to its catalysis in the reactions of DCL.

Although iron-based catalysts are successfully used in industrial DCL processes, their efficiency is still quite low. Solid acids are common catalysts for cracking reactions, and the corresponding acidic sites are active for the cleavage of C-C bridged bonds [12-14, 20, 21]. If combined with acidic centers, an Fe-based catalyst could exhibit improved catalytic performance in the DCL process. Therefore, we prepared an iron sulfide-supported catalyst using activated carbon (AC) modified by H2SO4 as the support, and the catalysts were used to catalyze the hydrocracking of a coal-related model compound (DNM) and the hydroconversion of the extraction residue of a bituminous coal.
Experiments

2.1. Preparation of Catalyst. Guizhou bituminous coal (GBC) was collected from Guizhou Province, China. It was pulverized to pass a 200-mesh sieve and then desiccated in a vacuum oven at 80 °C before use. All of the chemical reagents used in the experiments were obtained commercially.

3 g of activated carbon (AC) was placed in a beaker, and 10 mL of H2SO4 (2 mol/L) was added. The mixture was treated with ultrasound by placing the beaker in an ultrasonic oscillator for 3 min and then kept overnight. The resulting mixture was dried in a drier at 80 °C for 6 h. The product was denoted AC-T (T meaning treated carrier), and the corresponding amount of acid was 2.0 mmol/g, measured via neutralization titration with NaOH solution.

Given amounts of Fe2(SO4)3 (0.34 g, 0.59 g, 0.86 g, 1.16 g, and 1.48 g) and the corresponding amounts of Na2S·9H2O (0.612 g, 1.602 g, 1.55 g, 2.09 g, and 2.66 g) were dissolved in 50 mL of deionized water, respectively. Under stirring, the Na2S and Fe2(SO4)3 aqueous solutions were dropped continuously into 10 mL of deionized water at the same rate. When the reactions were complete, 3 g of the AC-T support was added. The mixture was stirred for 10 min and then filtered, and the filter cake was dried at 100 °C for 2 h. The obtained product was calcined at 300 °C and held at that temperature for 2 h under a protective N2 flow. The resulting catalyst was denoted Fe2S3/AC-T(x), where x represents the weight percentage of Fe in the catalyst. The catalyst prepared under the same conditions with unmodified AC as the support was denoted Fe2S3/AC.

Characterization of Catalyst. N2 adsorption-desorption isotherms were measured with a Bayer BELSORP-max instrument. The specific surface area (SSA), total pore volume (TPV), and average pore diameter (APD) of the samples were calculated from the isotherms via the t-plot, BJH, and HK methods, respectively. The morphology of the samples and the corresponding elemental distribution on the surface were characterized using an FEI Quanta 250 scanning electron microscope (SEM) coupled with an energy-dispersive spectrometer (EDS). The X-ray diffraction patterns were recorded on a Bruker D8 ADVANCE diffractometer at a scanning rate of 4 °/min over 2θ of 10 to 80°, using Cu Kα radiation (λ = 0.154 nm) at 40 kV and 40 mA.

Di(1-naphthyl)methane Hydrocracking. Di(1-naphthyl)methane (DNM, 1 mmol), catalyst (0.4 g), and cyclohexane (30 mL) were placed into a 60 mL stainless-steel, magnetically stirred autoclave. After the air was replaced with hydrogen three times, the autoclave was pressurized to 0.8 MPa and heated to the indicated temperature within 15 min. Under stirring at a rate of 200 rpm, the reaction was conducted for 1 h. The autoclave was then cooled to room temperature in an ice-water bath. The gaseous reaction mixture was taken out of the autoclave and quantified using a gas chromatograph (9790, Zhejiang Fuli, China).

2.4. GBC Residue Hydroconversion. 2 g of GBC and 50 mL of acetone were placed into a 100 mL stainless-steel, magnetically stirred autoclave. After the air was replaced three times, the autoclave was pressurized to 5 MPa with nitrogen and heated to 310 °C within 15 min. After being kept at 310 °C for 1 h, the autoclave was cooled to room temperature in an ice-water bath. The reaction mixture was taken out and filtered through a 0.8 μm membrane filter, and the residue from GBC (RGBC) was dried in vacuum at 80 °C for 6 h.
The RGBC (1 g), Fe2S3/AC-T catalyst (0.5 g), and cyclohexane (50 mL) were placed into a 100 mL stainless-steel, magnetically stirred autoclave. After the air was replaced three times, the autoclave was pressurized to 0.8 MPa with hydrogen, heated to 310 °C within 15 min, and kept at that temperature for 1 h. The autoclave was then immediately cooled to room temperature in an ice-water bath. The reaction mixture was taken out of the autoclave using petroleum ether as the rinse solvent and filtered through a 0.8 μm membrane filter. The filtrate was concentrated by evaporating the solvents on a rotary evaporator and then analyzed with a Hewlett-Packard 6890/5973 GC/MS.

Results and Discussion

Figure 1 shows the dependence of DNM conversion on the Fe loading of Fe2S3/AC-T. At 270 °C, the conversion of DNM over Fe2S3/AC-T at the various Fe loadings was below 1%. When the reaction temperature increased to 290 or 310 °C, the DNM conversion increased with increasing Fe loading (3% to 9%) and reached maxima of 13.5% at 290 °C and 19.3% at 310 °C. With a further increase of the Fe loading, the DNM conversion decreased to 9% at 290 °C and 13% at 310 °C. The results indicate that an Fe loading of 9% is appropriate for the catalyst; excessive ferric sulfide may decrease the specific surface area, which in turn reduces the catalytic efficiency. The hydrocracking of DNM is an endothermic reaction, and a higher temperature is favorable for the reaction. It can be seen in Figure 1 that a high DNM conversion was obtained under a mild condition (reaction temperature of 310 °C).

The catalytic activities of the catalysts Fe2S3/AC(9) and Fe2S3/AC-T(9) are compared in Figure 2, which summarizes the DNM conversions at different reaction temperatures. At 250 °C, the DNM conversion was close to zero. As the reaction temperature rose to 270 °C, the DNM conversions over Fe2S3/AC(9) and Fe2S3/AC-T(9) were 1.12% and 1.9%, respectively. With a further increase of the reaction temperature to 290 °C, the DNM conversion increased greatly, to 7.85% over Fe2S3/AC(9) and 13.5% over Fe2S3/AC-T(9). At 310 °C, the DNM conversion reached 12.4% and 19.3% over Fe2S3/AC(9) and Fe2S3/AC-T(9), respectively.
It is believed that the H2S species, generated from the ferric sulfate and hydrogen gas under elevated temperature and pressure, further react with the C-C bridged bonds of DNM to form naphthalene and methylnaphthalene. In our study, the support AC-T was obtained by impregnating the AC with sulfuric acid solution, so the catalyst Fe2S3/AC-T is coated with a layer of acid sites. The acid sites are helpful for the cleavage of C-C bridged bonds. The catalyst Fe2S3/AC-T was more active than Fe2S3/AC, which can be ascribed to the synergistic effects of ferric sulfate and acid sites in the catalyst.

AC-T(7) and Fe2S3/AC-T(9) exhibited a mixture of two isotherm types, with a wider knee at relative pressures (p/p0) < 0.1, indicating wider micropores. An obvious capillary condensation step (hysteresis loop) at p/p0 > 0.4 indicated that a considerable amount of mesopores was also present. The mesopore- and micropore-size distributions are shown in Figures 4(a) and 4(b), respectively, and the specific surface area, total pore volume, and pore diameter of the support AC and the catalyst Fe2S3/AC-T are listed in Table 1. It can be seen that AC has a high specific surface area and a high percentage of micropores, with a pore-size distribution predominantly of 0.5-0.8 nm. After treatment with sulfuric acid, the specific surface area and total pore volume of AC-T decreased by about 30% and 18%, respectively, and the micropore size became nonuniform with a wide distribution. Compared to the support AC-T, the Fe2S3/AC-T catalysts have a significantly higher specific surface area and total pore volume, which may be ascribed to the loose structure of ferric sulfate on the support; a certain number of micropores were also formed during impregnation. The values of specific surface area and total pore volume reach their maximum at an Fe loading of 9 wt.%. Combined with the catalytic performance shown in Figures 1 and 2, the optimal Fe loading is 9 wt.%.

The XRD patterns of the catalyst Fe2S3/AC-T with different ferric sulfate loadings are shown in Figure 5. The broad diffraction peak between 2θ = 20-30° is ascribed to the support AC. Evidently, the catalyst exhibits only the diffraction peak of the support AC; the absence of diffraction peaks of ferric species can be ascribed to their good dispersion on the AC surface.

As Figure 6 shows, the surface of the AC support is smooth, whereas the surface of the catalyst Fe2S3/AC-T(9) is rough, with irregular bulk particles adhered to the AC surface. Combined with the XRD results, this indicates that the bulk ferric sulfate is well dispersed on the AC surface.

The total ion chromatogram of the filtrate of the reaction mixture from the catalytic hydroconversion of RGBC is shown in Figure 7. Eighteen compounds were identified and are listed in Table 2: five arenes and 13 alkanes. This indicates that Fe2S3/AC-T significantly catalyzed the hydroconversion of RGBC and formed GC/MS-detectable species.

Conclusions

Owing to the synergistic effects of ferric sulfate and acid sites, Fe2S3/AC-T exhibited a good catalytic performance for DNM hydrocracking. Fe2S3/AC-T also significantly catalyzed the hydroconversion of RGBC and formed GC/MS-detectable species; five arenes and 13 alkanes were identified in the filtrate of the reaction mixture.

Figure 1: Effect of Fe loading on DNM conversion rate.
Figure 2: DNM conversion rates with two kinds of catalysts at different temperatures.
Figure 7: Total ion chromatogram of the filtrate of the reaction mixture.
Table 1: The surface properties of the AC support and the catalysts.
Table 2: The compounds detected in the reaction mixture.
2,962.4
2017-12-01T00:00:00.000
[ "Chemistry", "Environmental Science", "Engineering" ]
Detection of 2D and 3D Video Transitions Based on EEG Power Despite the long and extensive history of 3D technology, it has recently attracted the attention of researchers. This technology has become the center of interest of young people because of the real feelings and sensations it creates. People see their environment as 3D because of their eye structure. In this study, it is hypothesized that people lose their perception of depth during sleepy moments and that there is a sudden transition from 3D vision to 2D vision. Regarding these transitions, the EEG signal analysis method was used for deep and comprehensive analysis of 2D and 3D brain signals. In this study, a single-stream anaglyph video of random 2D and 3D segments was prepared. After watching this single video, the obtained EEG recordings were considered for two different analyses: the part involving the critical transition (transition-state) and the state analysis of only the 2D versus 3D or 3D versus 2D parts (steady-state). The main objective of this study is to see the behavioral changes of brain signals in 2D and 3D transitions. To clarify the impacts of the human brain’s power spectral density (PSD) in 2D-to-3D (2D_3D) and 3D-to-2D (3D_2D) transitions of anaglyph video, 9 visual healthy individuals were prepared for testing in this pioneering study. Spectrogram graphs based on Short Time Fourier transform (STFT) were considered to evaluate the power spectrum analysis in each EEG channel of transition or steady-state. Thus, in 2D and 3D transition scenarios, important channels representing EEG frequency bands and brain lobes will be identified. To classify the 2D and 3D transitions, the dominant bands and time intervals representing the maximum difference of PSD were selected. Afterward, effective features were selected by applying statistical methods such as standard deviation (SD), maximum (max), and Hjorth parameters to epochs indicating transition intervals. Ultimately, k -Nearest Neighbors ( k -NN), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA) algorithms were applied to classify 2D_3D and 3D_2D transitions. The frontal, temporal, and partially parietal lobes show 2D_3D and 3D_2D transitions with a good classification success rate. Overall, it was found that Hjorth parameters and LDA algorithms have 71.11% and 77.78% classification success rates for transition and steady-state, respectively. The human brain controls the central nervous system, manages the peripheral nervous system, and regulates almost all functions of the human being through the skull nerves and spinal cord [1]. Ionic voltage fluctuations of brain neurons play a key role in these processes. Fluctuations of these voltages result in the formation of current and finally electric field. The measurement of these fluctuations in brain neurons is referred to as electroencephalography (EEG) [2]. The eye is one of the significant sensory organs in the human body. The eye and brain form a unit called the "visual system" that develops during working. In the concept of human vision, the areas visible to the right and left eyes overlap to a certain extent. Most of the visual field is seen with two eyes, i.e., in a binocular fashion [3], [4]. Due to the 6-cm distance between the eyes, two different photographs are taken by the left and right eyes. As a result of this distance, binocular vision is actually the ability to see similar, but slightly in different ways. Stereo vision is a normal human vision with some amazing features [5]. 
The relationship between such vision and brain function has been the subject of many studies [6], [7], [8], [9]. In addition to the perception of depth, this vision is the ability to distinguish relationships between objects and ultimately the appearance of a 3D image. Three-dimensional technology was first established by the beginning of photography. The history of this fascinating technology dates back to the 18th century [10]. Multidimensional technologies have come a long way from 2D to 3D technologies. Although the 2D technology has remained on the market for a long time, technological advances have improved the 3D technology to the extent that it is used in multiple applications such as the 3D printing industry, entertainment [11], healthcare [12], [13], defense [14], aerospace [15], industry, manufacturing, and architecture [16]. Entertainment has been the most dominant practice in the market and is expected to continue to steer the market within the stipulated period. Reviewing the literature shows the abundance of studies on 2D, 3D, and EEG applications. These applications can be classified mainly in the analysis of brain signals of 2D and 3D game watching [17], [18], in 2D and 3D learning content of the education area [19], biomedical research [13], and eye fatigue analysis [20], [21]. The main objective of the present study is the comprehensive EEG analysis of the transition moment in a single hybrid video consisting of random 2D and 3D segments. Based on the 3D human vision, the analysis of this transition moment can lead to a hypothesis in the case of human fatigue. When people fall asleep, their perception of depth is lost. Under such a condition, there may be a sudden transition from 3D to 2D. To test the hypothesis of the present study, the epochs containing the full critical transitions and also the steady-state by disabling this critical transition were considered in the video with random 2D and 3D transitions. Based on the STFT, EEG behavior was analyzed in channels representing different brain lobes based on the spectrogram graph in the epochs reflecting these two conditions. Through this timefrequency visual representation, we will gain insight into the PSD differences of 2D_3D and 3D_2D transitions, important frequency bands of EEG, and dominant time intervals. Finally, by observing this graph, we focus on effective feature extraction to realize a good performance classification technique by considering the EEG bands and dominant time intervals that reveal the power difference in each channel. Participants In this test, 9 subjects participated. Participants that took part in the analysis were adults (4 females and 5 males) with the age range of 34.12 ± 2.072 years. They had normal vision and were free from any neurological or mental disorder, which may affect the results. All stages of the test were explained to the individuals in detail. They were asked to minimize unnecessary blinking and body movements. The experiments were conducted under the registration number of 24237859-806 according to the Institutional Ethics Committee. Hybrid video preparation To design video with random transitions from 2D_3D and 3D_2D, the 3D version of the Saw video [22] was converted to 2D with Xilisoft 3D Video Converter [23] and then the 3D version was converted to 3D anaglyph form using the IQ mango 3D converter [24] program. In the next step, 8-second short segments were combined with Idoo video editor pro [25]. 
Finally, a 135second video of 2D and 3D random eight-second parts was prepared for the test. EEG recording and dataset The EEG recordings were obtained in the EEG laboratory of Trabzon Karadeniz Technical University. All electrodes were placed on the scalp according to the international 10-20 system. In this study, 21 EEG electrodes (Fp1, Fpz, Fp2, F3, F4, F7, F8, C3, C4, Fz, P3, P4, Pz, O1, O2, T3, T4, T5, T6, Oz, and Cz) were used. The Cz electrode was selected as a reference. The configuration pattern of the electrodes is shown in Fig 1. Individuals were asked to sit on a comfortable chair about 85 cm from the TV (LG 32 inch) stand. The features of this television are detailed in [5]. Each EEG recording lasted approximately 135 s. Each test was repeated 5 times. To watch 2D and 3D video segments in a single video, the selected glasses must be independent of any display system. Therefore, anaglyph glasses were found suitable for this scenario according to its working principle. EEG data were sampled at 512 Hz and the skin impedance was below 10 KΩ. Video details of hybrid scenario Hybrid video and transitions consisting of random 2D and 3D segments are presented in Fig. 2. Orange and green arrows represent 2D_3D and 3D_2D, respectively. In the critical transition analysis, 5-s epochs were created to analyze and classify the transition moments from 2D_3D and 3D_2D. This epoch range is presented in Table 1. Since there were 5 transitions from 2D to 3D and 4 transitions from 3D to 2D and a total of 5 recordings were taken from each individual, 45 epochs were analyzed. In addition, five intervals from 2D to 3D and five intervals from 3D to 2D were considered for steady-state. Thus, there are 50 epochs for this state. The 4s intervals are shown in Table 2. Data analysis The general block diagram for EEG data analysis is presented in Fig 3. After data collection, each block is described in detail in the following sections. Preprocessing Preprocessing is an important step in EEG signal processing [26]. Preprocessing techniques help remove unwanted artifacts from the EEG signal and therefore improve the signal-to-noise ratio. One reason for the necessity of preprocessing is that the signals collected from the scalp may not precisely represent signals from the brain as spatial information is lost. Also, the EEG data contain high amounts of noise that can hide weak EEG signals. Artifacts such as blinking or muscle movement [27] may contaminate data and distort the main data. Finally, it is desirable to separate the respective nerve signals from random neural activity that occurs during EEG recordings. In addition to the bandpass filter and notch filter in the EEG device used, averaging, filtering and normalization methods were used as preprocessing techniques. In the first step, trials were averaged to minimize the noise level in each channel. A 50-Hz notch filter was then applied to suppress line noise. A third-order Butterworth [28] bandpass filter was used to clear the noise signal in the frequency range of 1-55 Hz. Regarding the normalization method, the signal should be normalized to compare EEG activity in different individuals or between different channels [29]. Furthermore, the amplitude of the signals can directly affect the classification performance. Therefore, epochs were normalized to obtain similar conditions and to reduce the effect of size change. In this study, Eq. (1) [30] was used as the normalization technique for each epoch [31]. 
where x, x̄, σ, and x_norm refer to the original epoch, the mean and the SD of the original epoch, and the normalized epoch, respectively; that is, each epoch is normalized as x_norm = (x − x̄) / σ.

STFT-Spectrogram

A sonogram is a two-dimensional image generated by calculating the Short Time Fourier Transform (STFT) using a sliding window. This transformation adds important information about the unpredictable nature of EEG data. By adjusting the width of the window, the time resolution of the resulting spectrum can be set: narrower windows provide better time resolution but lower frequency resolution, while wider windows provide the opposite. The absence of undesirable cross-terms [32] and the simplicity of calculation [33] are the main factors behind the widespread use of STFT in practice. Among its many important features, STFT has a basic property that facilitates the interpretation of the resulting distribution, namely magnitude-wise shift-invariance in both time and frequency [34]. Taking the squared modulus of the STFT gives the spectrogram, the spectral energy density of the locally windowed signal, which was used here. The power spectrum intensity of the EEG signal was calculated by the STFT spectrogram method with the Gauss window function. Due to the uncertain nature of EEG signals and to minimize spectral leakage, the soft-behaving Hanning window was selected. A Hanning window with a length of 512 samples was chosen to achieve an acceptable frequency resolution, and the window overlap was set to 'window size − 1'. In this section, the pre-analysis is generally divided into EEG amplitude-time and power-frequency preliminary analyses. A flowchart summarizing the preprocessing and the preliminary EEG data analysis is also provided.

Time-frequency signal analysis

In signal processing, time-frequency analysis is a set of techniques used to characterize and modify transient and statistically changing signals. One of the major benefits of applying a time-frequency transform to a signal is discovering patterns of frequency change that clarify the structure of the signal. Another important use of time-frequency analysis is to reduce random noise in noise-contaminated signals. In this study, the STFT time-frequency analysis method was used following the EEG preliminary analysis and based on the results of the delta band selection.

Feature extraction

In machine learning, pattern recognition, and image processing, feature extraction starts from an initial set of measured data and creates informative and necessary values (features) that facilitate the subsequent learning and generalization steps. In addition, feature extraction is a dimensionality reduction process in which the initial set of raw variables is reduced to more manageable groups (features) for processing while still describing the original data set accurately and precisely. For the transition and steady-state, feature extraction was performed on the basis of the STFT-based spectrogram graphs. According to these graphs, the 2D_3D and 3D_2D conditions showed a large difference in the delta band for both the transition and steady-states. Feature extraction was performed using the SD and max statistical functions over the dominant time intervals and frequency band in the 5-s (transition) and 4-s (steady-state) epochs. As a result of the spectrogram analysis, the 1-1.5 s and 1.5-3 s time intervals at the 2, 3, and 4 Hz frequencies of the delta band were selected because they show the difference in PSD between 2D_3D and 3D_2D most intensely.
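A minimal sketch of this processing chain (epoch normalization per Eq. (1), followed by a Hann-windowed STFT spectrogram and a delta-band PSD readout) is given below, assuming NumPy/SciPy. The synthetic one-channel epoch stands in for a real recording, and the printed quantities are purely illustrative.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 512                      # EEG sampling rate (Hz)
epoch_len = 5 * fs            # 5-s transition epoch

# Illustrative stand-in for one EEG channel around a 2D->3D transition;
# a real analysis would slice the recorded signal at the transition times instead.
rng = np.random.default_rng(0)
t = np.arange(epoch_len) / fs
epoch = np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(epoch_len)

# Eq. (1): z-score normalization of the epoch.
epoch_norm = (epoch - epoch.mean()) / epoch.std()

# STFT-based spectrogram: 512-sample Hann window, overlap of (window size - 1).
f, tt, Sxx = spectrogram(epoch_norm, fs=fs, window='hann',
                         nperseg=512, noverlap=511, scaling='density')

# Average PSD in the delta-band bins (2-4 Hz) over the 1-1.5 s and 1.5-3 s intervals.
delta = (f >= 2) & (f <= 4)
for lo, hi in [(1.0, 1.5), (1.5, 3.0)]:
    seg = (tt >= lo) & (tt < hi)
    print(f"{lo}-{hi} s mean delta PSD:", Sxx[np.ix_(delta, seg)].mean())
```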
As the second feature extraction method, Hjorth parameters were applied to the epochs in the dominant time and frequency ranges. The Hjorth descriptors consist of three parameters that characterize the statistical properties of a signal in the time domain, as given in Table 3 [35]. The activity parameter, which is the variance (var) of the signal in the time domain, reflects the surface of the power spectrum in the frequency domain. The mobility parameter is defined as the square root of the ratio of the variance of the first derivative of the signal to the variance of the signal; it represents the standard deviation ratio of the power spectrum. The complexity parameter shows how similar the shape of a signal is to a pure sine wave. At this stage, the feature extraction method for the critical transition and steady-state is summarized in Fig. 5.

Table 3. Hjorth parameters [35]
Parameter | Notation
Activity | var(x(t))
Mobility | sqrt( var(ẋ(t)) / var(x(t)) )
Complexity | Mobility(ẋ(t)) / Mobility(x(t)) = sqrt( var(ẍ(t)) / var(ẋ(t)) ) / sqrt( var(ẋ(t)) / var(x(t)) )

In the table, ẋ(t) and ẍ(t) are the first and second derivatives of the signal, respectively. While these three parameters contain information on the frequency spectrum of a signal, they also help analyze the signal in the time domain. In addition, their use keeps the computational complexity low.

Classification techniques

A classifier attempts to estimate the class to which an observation belongs, using the values of its features as input [36]. In general, a trained classifier models the relationship between classes and the corresponding features and can identify new samples in an unseen test dataset. To evaluate the performance of the classification method, performance measures, namely sensitivity and specificity [10], were used in addition to classification accuracy. Since this is a two-class classification problem, the chance level is 50%. Three classifiers, k-NN, SVM, and LDA, were used to show the effectiveness of the proposed technique.

The k-NN is a supervised learning algorithm that assigns a test sample to the majority class of its k nearest training samples. The k-NN algorithm is a simple, easy-to-implement machine learning algorithm that can be used to solve both classification and regression problems [37]. Despite its simplicity, k-NN can outperform more powerful classifiers and is used in a variety of applications, such as economic forecasting, data compression, and genetics. The performance of the k-NN classifier depends on the distance function and the value of the neighborhood parameter k. Here, k plays a very important role in the performance of the nearest-neighbor classifier: if k is too small, the result may be noise sensitive; on the other hand, if k is too large, the neighborhood may include too many points from other classes [38]. Although there is no clear consensus in the literature on the choice of k, setting k = 1 or selecting k through cross-validation are the most popular approaches [39].

SVM is a classification method based on statistical learning theory. SVM shows good generalization performance for high-dimensional data owing to the convex optimization problem it solves [40], [41]. SVM defines classes using a separating hyperplane: for a two-class, linearly separable classification problem, the SVM attempts to find the hyperplane that separates the input space with the maximum margin [42]. Non-linear decision boundaries can be created by using a kernel trick.
In EEG studies, the Gaussian or Radial Basis Function (RBF) kernel is often used with very good results [43]. In this study, the RBF kernel was preferred for the SVM method. LDA maximizes the ratio of between-class variance to within-class variance in a given data set, thus guaranteeing maximum separability [44], [45]. LDA models the differences between groups by separating two or more classes and projecting the features from a high-dimensional space onto a lower-dimensional space.

In this study, K-fold cross-validation was applied to validate the results of the classification algorithms, with K taken as 10. After performing this cross-validation, the σ value of the SVM and the k value of k-NN were obtained. In our study, we defined the class 2D_3D as the positive sample and the class 3D_2D as the negative sample. To prepare a dataset for classification, the epochs of each class were divided into two groups, and training and test sets were prepared. The hybrid scenario classification flowchart is presented in Fig. 6. The processes in this flow were repeated 20 times.

Results

To analyze and classify 2D_3D and 3D_2D separately, window lengths of 5 s and 4 s were selected for the critical transition and steady-state analyses, respectively. The average classification accuracy of SVM, k-NN, and LDA over the 9 subjects was calculated for each channel in the critical transition and steady-state. The average results of the classification algorithms in the critical transition and steady-states for the training and test datasets are presented in Figs. 7, 8, 9, and 10, and the comparison of the two feature extraction methods is shown in Figs. 11 and 12, respectively.

Scrutinizing the graphs, the training data show, as expected, higher success than the test data. In general, the SVM and LDA classifiers are observed to be more successful than k-NN. In addition, the frontal, temporal, and parietal lobes of the brain appear to be more effective in the classification of 2D and 3D transitions. In Fig. 7, the T3 channel was the best channel, with 66.67% success, using the SD and max feature extraction method and the LDA classification algorithm. The second most effective channel, obtained with the Hjorth method and the LDA algorithm, was Pz, with 66.25% success. As shown in Fig. 8, in the steady-state the F8 and T6 channels were selected as effective channels, with success rates of 66.67% and 66.2%, respectively, using the Hjorth and LDA methods. Since this study involves only two classes, the average classification results are considered low, and it is necessary to improve them by applying another method. The following section addresses this objective.

Improving the classification results

After obtaining the classification results of the subjects using the two feature extraction methods, the results were improved by inter-label voting over the best three channels. To realize this voting process, the three channels with the best results were selected based on the average classification results. The result for each epoch was then determined according to a decision mechanism [46], summarized in Table 4, in which the final decision is called the "estimated label based on criteria". If the estimated epoch label for two channels (or all three channels) is 1, representing "2D_3D", the final label takes the value 1.
Additionally, if the estimated epoch label for two channels (or three channels) is 2 and represents "3D_2D", it takes the value 2. The three best channels based on average accuracy results are presented in Tables 5 and 6. Voting was then carried out between three channels for each test epoch. By voting between these channels, the labels of the test epochs were determined. The flow chart of the voting process is shown in Fig. 13. The classification results for the critical transition and steady-state of the 2D_3D and 3D_2D transitions based on the best three channels label voting are presented in Tables 7 and 8, respectively. Generally, at the top three channel selection tables of the average classification results, the frontal, temporal and parietal lobes appear to be more prominent. In addition to effective brain lobe selection, reduction of the number of channels indicates that this study is suitable for its application in biomedical fields. In addition, it seems that the voting method has effectively reduced the number of channels and improved classification accuracy. Moreover, classification performance parameters show good compatibility with each other. Comparing the feature extraction methods used, the Hjorth method seems to be more successful. Based on the result tables and classification methods, the LDA algorithm gives better results with an overview. Considering Table 7 in the interpretation of the average classification result, the best result (71.11%) was obtained with Hjorth and LDA in the label voting of Pz, T3, and F7 channels. This result increased by approximately 6.48% compared to the pre-voting result. Similarly, as seen in Table 8, the success rate in label voting of F8, T6, and F4 channels increased by 11.57% after voting. Discussion In the present study, datasets of EEG records related to 2D_3D and 3D_2D transitions were obtained with the predetermined scenario. Reviewing the studies on 2D and 3D technology, no detailed quantitative research was found using the channels representing the five lobes of the brain and all EEG bands [47], [8], [9]. The effect of watching 2D and 3D TV on brain signals has been the focus of some studies in terms of qualitative [12], [48]. In [12], the researchers claimed that the behavior of brain signals did not change during 2D/3D TV watching. With the opposite result of this study, brain dynamics have been shown to exhibit different behaviors in the brain lobes and EEG frequency bands [49], [48], [5], [10]. Analysis of brain signals by watching 2D and 3D videos individually was done extensively in our previous studies [5], [10]. The classification of EEG signals of 2D and 3D movie watching and post 2D and 3D movie watching in dominant bands has distinguished these studies from others that were done in this field. Both critical transition moment and steady-state were taken into consideration by watching the video with 2D_3D and 3D_2D random transitions using a single video. Moreover, in the studies carried out in this field, no study is found on catching the transition moment in 2D_3D and 3D_2D transitions. In our hypothesis, the effort to capture this transition moment is based on people's eye anatomy. People see their surroundings in 3D due to their eye structure. Thus, it can be claimed that the perception of depth may be lost when they are sleepy and tired. For this reason, it is important to analyze and classify 2D_3D and 3D_2D transitions for both critical and steady-state situations. 
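To make the feature-extraction, classification, and voting steps concrete, here is a minimal sketch assuming scikit-learn. The synthetic epochs, the balanced label vector, and the restriction to Hjorth features with LDA are illustrative simplifications of the pipeline described above rather than the study's actual implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, cross_val_predict

def hjorth(epoch):
    """Activity, mobility, and complexity of a 1-D epoch (cf. Table 3)."""
    d1 = np.diff(epoch)
    d2 = np.diff(d1)
    activity = np.var(epoch)
    mobility = np.sqrt(np.var(d1) / activity)
    complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
    return np.array([activity, mobility, complexity])

# Synthetic stand-in: 45 transition epochs x 21 channels x (5 s * 512 Hz) samples,
# labels 1 = 2D_3D and 2 = 3D_2D. Real features would come from the recordings.
rng = np.random.default_rng(1)
n_epochs, n_channels, n_samples = 45, 21, 5 * 512
X = rng.standard_normal((n_epochs, n_channels, n_samples))
y = np.array([1] * 23 + [2] * 22)

# Per-channel Hjorth features and 10-fold cross-validated LDA accuracy.
acc = np.zeros(n_channels)
for ch in range(n_channels):
    feats = np.array([hjorth(X[i, ch]) for i in range(n_epochs)])
    acc[ch] = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=10).mean()

# Majority voting of predicted labels over the three best channels.
best = np.argsort(acc)[-3:]
preds = []
for ch in best:
    feats = np.array([hjorth(X[i, ch]) for i in range(n_epochs)])
    preds.append(cross_val_predict(LinearDiscriminantAnalysis(), feats, y, cv=10))
preds = np.vstack(preds)                        # shape: (3, n_epochs)
voted = np.where(preds.sum(axis=0) >= 5, 2, 1)  # labels are 1/2, so sum >= 5 means at least two votes for 2
print("best channels:", best, "voted accuracy:", (voted == y).mean())
```

With random features the voted accuracy hovers around chance level; on real epochs, the same voting step is what lifted the reported accuracies above the single-channel results.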
2D_3D and 3D_2D transitions were analyzed separately in critical and stable situations. By looking at the PSD spectrogram graphics based on STFT, the 2, 3, and 4 Hz frequency hypothesis from the delta band was chosen as the dominant band in these transitions. It was prepared to classify critical transition and steady-state using SD, max, and Hjorth parameters at these frequencies. Both situations were analyzed using SVM, k-NN, and LDA classification techniques. In this two-class study, the best result shows that the LDA classification and Hjorth methods well represent the efficacy of the channels from the frontal and temporal lobes in the case of 2D and 3D transitions. Based on these results, the channels representing 2D and 3D transitions from the temporal and frontal lobes overlap with areas that are sensitive to depth perception and binocular visual ability [7], [50], [8]. Conclusion The main goal of this study is to capture the transition moment in a single video consisting of 2D and 3D parts. This goal is based on people's eye anatomy such that to lay the foundation for fatigue work. Since people's ability to see binoculars lose dimension in case of fatigue, the transition from 2D to 3D and 3D to 2D are important. In this study, the power spectrum density differences of EEG signals in 2D_3D and 3D_2D transitions were analyzed in the form of spectrogram graphics. By interpreting these graphs, an idea was obtained about EEG frequency bands and dominant time intervals. In this study, the power difference of brain signals in transitions is evident at 2, 3, and 4 Hz of the delta band. As a result, basic steps were taken to obtain a good classification performance by extracting suitable features from the dominant band; i.e., delta band. It is noteworthy that Hjorth parameters and LDA classification technique provided promising results in temporal and frontal lobes, in general. Transition analysis in a single hybrid video consisting of 2D and 3D parts can add a new and unique perspective to driver fatigue studies. This study provides a good basis for our future work. In the near future, transition analysis using a more professional combination of 2D and 3D video in brain signal analysis of tired individuals can provide a promising ground on early diagnosis studies in the transition moment to fatigue. In addition, more specific frequency ranges, power spectrum, and statistical methods can be researched and determined. A suggestion for future studies might be decreasing the number of channels and increasing the classification accuracy using different feature extraction and classification methods. A flexible and interactive graphical interface can be used to process dynamic 2D/3D brain data using EEGLAB. Train and classification stages can be improved by using deep learning algorithms.
6,113.8
2020-02-05T00:00:00.000
[ "Computer Science" ]
Treatment of malignant tumors of the skull base with multi-session radiosurgery Objective Malignant tumors that involve the skull base pose significant challenges to the clinician because of the proximity of critical neurovascular structures and limited effectiveness of surgical resection without major morbidity. The purpose of this study was to evaluate the efficacy and safety of multi-session radiosurgery in patients with malignancies of the skull base. Methods Clinical and radiographic data for 37 patients treated with image-guided, multi-session radiosurgery between January 2002 and December 2007 were reviewed retrospectively. Lesions were classified according to involvement with the bones of the base of the skull and proximity to the cranial nerves. Results Our cohort consisted of 37 patients. Six patients with follow-up periods less than four weeks were eliminated from statistical consideration, thus leaving the data from 31 patients to be analyzed. The median follow-up was 37 weeks. Ten patients (32%) were alive at the end of the follow-up period. At last follow-up, or the time of death from systemic disease, tumor regression or stable local disease was observed in 23 lesions, representing an overall tumor control rate of 74%. For the remainder of lesions, the median time to progression was 24 weeks. The median progression-free survival was 230 weeks. The median overall survival was 39 weeks. In the absence of tumor progression, there were no cranial nerve, brainstem or vascular complications referable specifically to CyberKnife® radiosurgery. Conclusion Our experience suggests that multi-session radiosurgery for the treatment of malignant skull base tumors is comparable to other radiosurgical techniques in progression-free survival, local tumor control, and adverse effects. Introduction A variety of malignant tumors can involve the skull base. These tumors may originate from various tissues of the skull base, or invade into the region as extensions of head and neck cancers [1,2]. The skull base is also a common site of metastasis from distant tumors [3,4]. Patients with skull base malignancies suffer greatly [5]. Common clinical presentations include pain and cranial nerve deficits, such as visual disturbances, facial paresis and swallowing difficulties [3]. Treatment of these tumors presents formidable challenges to the clinician. In addition to neurological factors, such as the close proximity of critical neurovascular structures, oncological factors play a key role. Metastatic skull base tumors are often late complications of systemic cancers, and the advanced systemic tumor burden, poor overall clinical condition and the morbidities from prior interventions, all make treatment difficult [6,7]. Historically, malignant skull base tumors were deemed inoperable and the overall prognosis was poor, especially for those presenting with cranial nerve deficits [8,9]. Surgical resection was frequently incomplete and limited by high mortality, risk of severe neurological morbidity and frequent recurrences [10][11][12][13]. Important technical advancements such as improved understanding of the microanatomy of the area, higher-resolution diagnostic imaging, safer operative strategies, and multidisciplinary collaboration have evolved over the past three decades, making surgical treatment safer [14,15]. Surgical resection or debulking is currently considered a critical component of their management [16,17]. 
But, even though some authors regard surgery as the "gold standard" treatment, the limitations of brainstem and cranial nerve morbidities continue to make curative resections a rarity [18][19][20]. There is an important role for radiation therapy in the management of skull base malignancies, both as primary treatment as well as adjuvant treatment, after surgical resection [21][22][23][24][25][26]. However, as with surgery for these tumors, the limitations of this therapy are readily apparent. External beam radiation therapy alone results in poor local control and overall survival due to factors such as large tumor volume, limitations of radiation dose, and the intrinsic "radio-resistance" of certain tumors [27,28]. Single-session radiosurgery has been employed in the treatment of chordomas and malignant tumors at the cranial base [3,[29][30][31][32][33][34]. However, given the close proximity of these lesions to critical neurovascular structures, methods to minimize radiation-induced toxicities should be considered. [35][36][37][38][39][40][41][42][43][44][45]. More recently, "hypofractionated" or staged radiosurgery has provided an attractive alternative. This therapy has been successfully utilized in the treatment of tumors in which preservation of surrounding structures is particularly vital, such as those near the optic nerve and optic chiasm, as well as for various lesions at the skull base [46][47][48][49]. The hiatus between treatment sessions theoretically provides time for normal tissue repair, and the resultant lower radiation risk to the normal structures permits more effective treatment of the target lesion [50]. This therapy may be particularly useful for patients with skull base malignancies, for whom the essential goal of treatment is for palliation rather than cure [31]. The CyberKnife ® is an image-guided, frameless radiosurgical system that uses inverse planning for the delivery of radiation to a defined target volume [51]. Non-isocentric radiation delivery permits simultaneous treatment of multiple lesions, and the frameless configuration allows for staged treatment. It has been successfully utilized to treat various skull base lesions including chordomas and plasmacytomas among many others [47,49]. We utilized the CyberKnife ® to treat skull base malignancies, believing that it is useful for managing these relatively rare but highly challenging tumors. In this retrospective study, we evaluated the efficacy and safety of staged stereotactic radiosurgery for treatment of malignant skull base tumors, either as a primary treatment modality or as an adjunct to surgery and conventional external beam radiotherapy. Patient Population We performed a retrospective review of 464 patients with intracranial tumors who were treated with CyberKnife ® stereotactic radiosurgery (CKS) at Georgetown University Hospital between January 2002 and December 2007. One hundred forty-five patients were classified as having tumors of the skull base, of which 108 were benign. Thirty-seven patients had 37 lesions that were classified as malignant skull base tumors. Six patients who had followup periods less than or equal to four weeks were eliminated from statistical consideration, thus leaving 31 patients for analysis. For the purposes of this study, skull base lesions were defined as those that involved the osseous structures of the base of the skull, in close proximity to the critical neurovascular structures of the region. 
All the tumors included in this study either completely encircled, partially circumscribed, or directly contacted the brainstem, optic chiasm, or cranial nerves with meaningful remaining function. Primary brain tumors were excluded, unless they had the potential to metastasize and were thus considered malignant. An example of such a tumor is a hemangiopericytoma. Malignant orbital, sinus and head-and-neck tumors were included in this study only if there was intracranial extension. This malignant skull base tumor group consisted of 21 men and 10 women, with a median age of 57 (range: 11 - 81) (Table 1). The histopathology of all tumors was either known from prior microsurgical resection or biopsy, or was presumed based on the intracranial extension of known head and neck cancers. Radiosurgical Treatment Planning and Delivery A multidisciplinary team of specialists that included neurosurgeons, otolaryngologists, radiation oncologists, medical oncologists, and neuroradiologists evaluated all patients. A collective decision to treat with radiosurgery was made for each individual patient. Radiosurgery was only offered to patients for whom conventional microsurgical resection was contraindicated because of high neurological risk, overwhelming medical comorbidities, poor prognosis with limited survival, or recurrent disease in the presence of prior microsurgical resection, chemotherapy and radiation therapy. The CyberKnife ® radiosurgical system was used to administer cranial radiosurgery in every case. The technical aspects of CKS for cranial tumors have been described in detail [46,50]. Briefly, the patient's head was immobilized by a malleable thermoplastic mask during the acquisition of a thin-slice (1.25 mm) high-resolution computed tomography scan, which was used for treatment planning. The use of a contrast-enhanced MRI fused to the treatment-planning CT scan was at the discretion of the treating physicians. This decision was influenced by various factors, such as previous radiation to the area, performance status, treatment intent, and the extent of contact and compression of critical neurological structures. The target volumes and critical structures were then delineated by the treating neurosurgeon. An inverse planning method with a non-isocentric technique was used for all cases, with specific dose constraints on critical structures such as the optic chiasm and brainstem. The planning software calculated the optimal solution for treatment, and the dose-volume histogram of each plan was evaluated until an acceptable plan was found. The treating neurosurgeon and radiation oncologist, who share responsibility for all aspects of the treatment planning and procedure, determined the minimal tumor margin dose of the target volume, the treatment isodose, and the number of treatment sessions into which the total dose was to be divided. This decision was influenced by various factors, such as previous radiation to the area, tumor volume, and extent of contact and compression of critical neurological structures. In most cases, the treatment dose was prescribed to the isodose surface that encompassed the margin of the tumor. The delivery of radiosurgery by the CyberKnife ® was guided by real-time imaging. Using computed tomography planning, target volume locations were related to radiographic landmarks of the cranium. With the assumption that the target position is fixed within the cranium, cranial tracking allowed for anatomy-based tracking relatively independent of the patient's daily setup.
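The planning loop above evaluates a dose-volume histogram (DVH) for each candidate plan. The paper does not describe how its planning software computes DVHs, so the following is only a generic, hypothetical sketch of a cumulative DVH on a voxelized dose grid; the array names, binning, and example usage are illustrative assumptions rather than the CyberKnife system's actual method.

```python
import numpy as np

def cumulative_dvh(dose, structure_mask, prescription_dose, n_bins=200):
    """Cumulative DVH: fraction of a structure's volume receiving at least each dose level.

    dose              -- 3-D array of dose values (e.g., cGy) on the planning grid
    structure_mask    -- boolean array of the same shape selecting the structure's voxels
    prescription_dose -- dose used to normalize the horizontal axis
    """
    voxels = dose[structure_mask]
    levels = np.linspace(0.0, voxels.max(), n_bins)              # absolute dose levels
    volume_fraction = np.array([(voxels >= d).mean() for d in levels])
    return levels / prescription_dose, volume_fraction

# Illustrative use (hypothetical arrays): coverage of the target by the prescription isodose.
# rel_dose, vol = cumulative_dvh(dose_grid, target_mask, prescription_dose=2500.0)
# coverage = vol[rel_dose >= 1.0][0] if (rel_dose >= 1.0).any() else 0.0
```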
Position verification was validated several times per minute during treatment using paired, orthogonal x-ray images. Calculation of Radiosurgical Treatment Planning Parameters The homogeneity index and new conformity index were calculated for each treatment plan. The homogeneity index (HI) describes the uniformity of dose within a treated target volume, and is directly calculated from the prescription isodose line chosen to cover the margin of the tumor: HI = maximum dose within the target / prescription dose = 100% / prescription isodose line (%). The new conformity index (NCI), as formulated by Paddick and modified by Nakamura, describes the degree to which the prescribed isodose volume conforms to the shape and size of the target volume [52,53]. It also takes into account avoidance of surrounding normal tissue: NCI = (TV x PIV) / (TV_PIV)^2, where TV is the target volume, PIV is the total volume enclosed by the prescription isodose, and TV_PIV is the volume of the target covered by the prescription isodose. Clinical Assessment and Follow-Up Post-radiosurgical follow-up was typically performed in a multidisciplinary clinic of the treating neurosurgeon and radiation oncologist beginning one month after the conclusion of radiosurgery. Patients were subsequently followed at three-month intervals. During each follow-up visit, a clinical evaluation and physical examination were performed as well as a review of pertinent radiographic imaging. If a patient experienced deterioration in their clinical condition at any point during the follow-up period, an immediate evaluation was performed. The progress of all patients was discussed periodically at a multidisciplinary tumor conference of various specialists, ensuring precise interpretation of the available data. Patient and tumor characteristics The characteristics of the study group, including the distribution of gender, age, tumor histology and location, are detailed below and summarized in Tables 1 and 2. The most frequent tumors in this series were squamous cell carcinoma (6 lesions), adenoid cystic carcinoma (5 lesions), rhabdomyosarcoma (2 lesions) and metastases of melanoma and renal cell carcinoma (3 lesions each). The median tumor volume was 18.3 cc (range: 3.2 - 206.5 cc). Tumors varied in their skull base location, as illustrated in Table 2. A number of lesions, however, spanned multiple anatomical locations. CKS was the primary treatment to the malignant skull base tumor in 18 patients (58%). Of the 13 patients with previous treatment to the tumor involved in this study, 6 (46%) had previous craniofacial surgery, 4 (30%) had previous external beam radiation, and 1 (7%) had previous stereotactic radiotherapy. Four patients (13% of the entire series) had undergone biopsy only. Radiosurgical treatment The specific dose and fractionation scheme for the tumors in this series was influenced by various factors, including previous radiation to the area, tumor volume, and extent of contact and compression of critical neurological structures. Details of the radiosurgical treatments are found in Table 3. A median treatment dose of 2500 cGy was delivered in a median of 5 sessions (range: 2 - 7). For those patients with local progression, the median time to progression was 24 weeks (range: 5 - 230 weeks). One patient with a renal cell carcinoma metastasis to the right jugular foramen/CPA who experienced local progression at 31 weeks underwent a second course of CKS, which halted further progression and resulted in subsequent local control at a follow-up of 72 weeks. Survival Ten patients (32%) were alive at the end of the follow-up period, having survived a median of 81 weeks (range: 18 - 238 weeks).
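The HI and NCI defined above reduce to simple arithmetic on a handful of plan quantities. The short Python sketch below encodes the formulas as reconstructed here (Paddick's conformity index in Nakamura's "new" form, and HI as maximum dose over prescription dose); the function names and the example numbers are illustrative, chosen only to be roughly consistent with the medians reported later in this series, and are not taken from any actual treatment plan.

```python
def homogeneity_index(prescription_isodose_pct):
    """HI = maximum dose / prescription dose, for a plan normalized to 100% at the maximum.

    Example: a plan prescribed to the 80% isodose line has HI = 100 / 80 = 1.25.
    """
    return 100.0 / prescription_isodose_pct

def new_conformity_index(target_volume, prescription_isodose_volume, target_covered_volume):
    """NCI = (TV * PIV) / (TV_PIV)^2, with all volumes in the same units (e.g., cc).

    TV     -- target volume
    PIV    -- total volume enclosed by the prescription isodose surface
    TV_PIV -- volume of the target covered by the prescription isodose
    NCI = 1 indicates perfect conformity; larger values mean poorer conformity.
    """
    return (target_volume * prescription_isodose_volume) / target_covered_volume ** 2

# Hypothetical plan, roughly consistent with the medians reported in this series:
# print(homogeneity_index(76))                    # ~1.32
# print(new_conformity_index(18.3, 28.0, 17.9))   # ~1.60
```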
For the 21 patients (68%) who died, the median time to death was 25 weeks (range: 6 - 142 weeks) (Tables 4 & 5). Among those patients who died, 5 (25%) had local progression. However, no patients died specifically from radiosurgery-treated disease or treatment-related complications. The median progression-free survival of the cohort was 230 weeks (Figure 3). The median overall survival of the cohort was 39 weeks (Figure 4). Tumor Control and Survival as a Function of "Stand-Alone" Radiosurgery versus "Adjunctive" Radiosurgery The follow-up clinical data were compared between the groups of patients for whom CKS was primary "stand-alone" treatment versus secondary treatment following surgery or external beam radiotherapy. Among the patients with adequate follow-up data, 18 patients were treated with CKS as a primary treatment. The median follow-up was 44 weeks (range: 7 - 238 weeks). Nine patients (50%) were alive at the end of the follow-up period, and 5 (27%) experienced local tumor progression, with a median time to progression of 31 weeks (range: 9 - 230 weeks). For the 13 patients with previous treatments for their skull base lesion, the median follow-up was 35 weeks (range: 6 - 142 weeks). One patient (8%) was alive at the end of the follow-up period, and 3 (23%) experienced local tumor progression, with a median time to progression of 16 weeks (range: 5 - 32 weeks). Toxicity The neurological deficits before and after CKS are summarized in Table 6. Altered vision comprised the most common presenting symptom prior to radiosurgery, with 10 patients having reduced visual acuity, 13 patients having diplopia, and 1 patient having proptosis. Four patients (40%) experienced improved visual acuity and three patients (23%) experienced improvement of their diplopia following treatment. Otherwise, all symptoms remained stable at last follow-up. Of the 17 patients with facial weakness or facial pain on physical examination prior to CKS, 15 (88%) remained stable at last follow-up. One patient (6%) with facial weakness reported improvement. In one patient, facial weakness and swallowing difficulty worsened following CKS due to local disease progression involving all cranial nerves. Swallowing difficulties were found in four patients, 75% of whom remained stable following treatment (Figure 5). In the absence of tumor progression, there were no cranial nerve, brainstem or vascular complications referable specifically to CyberKnife ® radiosurgery. Specifically, there were no new cranial nerve deficits observed following SRS in this series. Discussion Skull base malignancies pose unique challenges to the clinician because of oncological and neurological factors. Since these tumors present late in the course of the patients' disease, patients are often poor candidates for aggressive therapy. Because these tumors are in close proximity to, or in contact with, the brainstem and cranial nerves, complete surgical resection is almost uniformly impossible without significant neurological injury. External beam radiation has had limited success in treating these malignancies, largely due to dose limitations [27,28]. Given the results of the current study, we feel that microsurgical resection of skull base malignancies may no longer be the "gold-standard" or optimal first-line treatment.
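The median progression-free and overall survival figures quoted here are the kind of quantities read off Kaplan-Meier curves such as Figures 3 and 4. The paper does not state what statistical software was used, so the following is a generic, minimal Kaplan-Meier estimator in plain Python with made-up example data; it is a sketch of the standard method, not the authors' analysis code.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- follow-up time for each patient (e.g., weeks)
    events -- 1 if the event (death or progression) was observed, 0 if censored
    Returns a list of (time, survival probability) steps.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

def median_survival(curve):
    """First time at which the estimate drops to 0.5 or below (None if it never does)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None

# Hypothetical usage with illustrative data (weeks, event indicator):
# curve = kaplan_meier([6, 18, 25, 39, 81, 142, 238], [1, 0, 1, 1, 0, 1, 0])
# print(median_survival(curve))
```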
Cases should be evaluated on an individual basis by a multi-disciplinary team so that the best treatment, capitalizing on the advances in skull base microsurgery and radiation oncology, can be delivered. Review of the Literature Radiosurgery may be uniquely suitable for treating these tumors, since it is non-invasive and can precisely target the tumor with minimal spread of radiation to surrounding normal neurological structures. Various investigators have reported their experience with stereotactic radiosurgery in the treatment of malignant skull base tumors. Cmelak et al. reported their data on 47 patients with 59 malignant skull base tumors [54]. Eleven patients with primary nasopharyngeal carcinoma were treated with Linac radiosurgery as a boost (7 - 16 Gy, median: 12 Gy) after a course of fractionated radiotherapy. None of the eleven had tumor progression during the follow-up period. The rest of the patients were treated for skull base metastases or local recurrences from primary head and neck cancers. Radiation doses of 7.0 Gy - 35.0 Gy (median 20.0 Gy) were delivered to these lesions, usually as a single fraction. A tumor control rate of 69% was reported for these patients during the study period (median: 36 weeks). Major toxicities occurred after 5 of 59 treatments. These included three cranial nerve palsies, one CSF leak, and one case of trismus. An important conclusion from their data was that local control did not correlate with lesion size, histology, or radiosurgical dose. Two small studies from Japan showed similar results. Tanaka et al. reported on 19 malignant skull base tumors, which they treated with single-fraction gamma knife radiosurgery [33]. The mean marginal dose utilized was 12.9 Gy. During a follow-up period of 22 months, a tumor control rate of 68% was recorded. The other study, by Iwai and Yamanaka, of 18 similar patients showed a tumor control rate of 67% during a median follow-up of 10 months [31]. A local control rate as high as 95% at 2 years has been reported in one radiosurgery study, but the patient population in that series included 66% with skull base chordomas, chondrosarcomas and adenoid cystic carcinomas, which differ significantly from the cancer patient population studied in the other cited series and our own [55]. In an attempt to bring some order to a heterogeneous group of skull base tumors, Morita et al. recently classified cranial base tumors by degree of aggressiveness into benign, intermediate malignant (or low grade/slow growing), and highly malignant (or fast growing) [56]. Applying this strategy, 31 tumors in our series (84%) would be classified as "highly malignant" or fast growing. Despite this unfavorable bias in our population, the tumor control rate in our series compared favorably to the rate reported in the literature [3,31,33,54,57] (Figure 3: progression-free survival; Figure 4: overall survival). We treated 31 malignant skull base tumors with a median marginal dose of 2500 cGy delivered in 2-7 sessions (median of 5) and achieved a local control rate of 74% during the follow-up period (median 37 weeks). The median progression-free survival was 230 weeks.
In a separate analysis of the patients with tumors classified as "highly malignant", the local control rate in this subgroup did not differ significantly from the total study population (74% at 40 weeks), confirming the reported finding on metastatic tumors that response to radiosurgery may be independent of tumor characteristics [15]. Similarly, a comparison of patients who received radiosurgery as primary treatment versus adjunct treatment after surgery or radiotherapy did not reveal major differences in outcome. Limitation of Toxicity Neurological deterioration occurred only in a minority of our patients and, in each case, it was accompanied by local tumor progression. Neurological symptoms remained stable or improved in 94% of the patients. No neurological deficits were attributable to toxicity of radiosurgery. Although it is possible that a higher complication rate will emerge with longer follow-up, we believe that the lack of morbidity is largely the result of delivering radiosurgery in multiple sessions, with high conformality and homogeneity. Fractionation is a cornerstone principle in radiation oncology. The oncologist uses it to exploit the significantly different response to radiation of normal versus neoplastic tissue, for the protection of the former and ablation of the latter. It provides time for normal tissue repair between doses, and theoretically minimizes radiation toxicity. With the advent of frameless, image-guided radiosurgery, "hypofractionation" or multi-session treatment became possible. Adler et al. reported their experience with multi-session radiosurgery for treating benign skull base tumors situated within 2 mm of the optic apparatus. They achieved a high tumor control rate and found that 94% of the patients had stable or improved vision after treatment [46]. The authors believed that staging the treatment significantly contributed to the low incidence of radiosurgical toxicity. In addition to protective effects, the staging of radiosurgical treatments may have heretofore under-recognized tumor control benefits as well. A new report from Canada showed that patients who received staged radiosurgery to their brain metastases survived longer than those who received single-session treatment [58]. It is possible that, by allowing a higher total dose to be delivered safely, staged treatment contributes to such a benefit. Figure 5. 72-year-old man with a history of transitional cell carcinoma with a biopsy-proven metastasis to the clivus and foramen magnum. He underwent prior radiation treatment with 60 Gy in 30 fractions. He presented to our institution with progressive facial numbness and difficulty swallowing. (A) Sagittal MRI of the brain after gadolinium administration demonstrating a large clival-based lesion compressing the pons and medulla. The patient had seen three other skull-base surgeons, none of whom offered surgical resection, and was deemed a good radiosurgery candidate. (B) Sagittal CT with treatment contour. The lesion was treated with 2000 cGy in 5 stages. He was followed for 41 weeks, at which point he died of failure to thrive. There was no radiographic progression of this lesion at the time of his last follow-up appointment.
A recent report out of our institution demonstrated that the CyberKnife ® radiosurgical system is capable of delivering a high dose of radiation to a well-defined clinical target volume with high conformity (median NCI 1.66) and homogeneity (median HI 1.26), regardless of irregular tumor shape, large tumor volume, or proximity to critical structures [59]. The median NCI in the present series was 1.60, and the median HI was 1.32. Although still controversial, it is our opinion that improved conformity and homogeneity may maintain high rates of local control while decreasing radiation-induced complications [53,[59][60][61]. It seems intuitively evident that conformality and homogeneity are important in treating malignancies of the skull base, since all the tumors are in close proximity to, or entirely surround, critical neurological structures that have limited radiation tolerance. In many instances, the encircled cranial nerve is not visible on the treatment-planning image, and one must assume that it received the maximum dose. Dose and Staging Selection A significant majority of the patients in the present study received a dose of 2500 cGy in 5 stages. The initial selection of the dose and staging regimen stemmed from our group's experience using the CyberKnife ® radiosurgical system to treat benign skull base lesions. Because we encountered no neurological morbidity attributable to radiosurgery in this study, it is impossible to tell whether the current treatment regimen represents the "ideal" dose for malignant skull base tumors. A higher average dose may lead to a better tumor control rate than the 74% seen in the present series, and still achieve an acceptably low rate of complications. It is also possible that the "ideal" dosing and staging are different for each patient, dependent on histopathology, previous treatments, tumor volume, neurological status and systemic tumor burden. Our confidence in raising the treatment dose, like the "true" complication rate, will no doubt come with time and further experience with these difficult tumors. Conclusion Despite the significant challenges, stereotactic radiosurgery appears to be a safe and reasonably effective modality for the treatment of malignant primary, recurrent, and metastatic skull base tumors. Our experience suggests that image-guided, multi-session radiosurgery compares favorably to other radiosurgical techniques in the treatment of these difficult tumors. In addition, no major morbidity was observed as a direct result of this method. Longer follow-up and, optimally, comparison of dosimetry and other treatment parameters across institutions will be necessary to more accurately define the long-term survival and the effect of multi-session radiosurgery on disease progression for patients with these aggressive tumors.
5,548.2
2009-04-02T00:00:00.000
[ "Engineering", "Medicine" ]
MSO with tests and reducts Tests added to Kleene algebra (by Kozen and others) are considered within Monadic Second Order logic over strings, where they are likened to statives in natural language. Reducts are formed over tests and non-tests alike, specifying what is observable. Notions of temporal granularity are based on observable change, under the assumption that a finite set bounds what is observable (with the possibility of stretching such bounds by moving to a larger finite set). String projections at different granularities are conjoined by superpositions that provide another variant of concatenation for Booleans. Introduction Regular languages can be studied declaratively through formulas of Monadic Second-Order logic over strings (MSO; e.g., Libkin, 2010) or through equations built with the constructs +, ·, * , 0, 1 of a Kleene algebra (KA; e.g., Kozen, 1994). A KA with a subalgebra of tests forming a Boolean algebra is a KA with tests (KAT; e.g., Kozen, 1997). Tests are identified below with statives that serve as a basis for the approach to temporal semantics in linguistics initiated in Dowty (1979). This identification is justified by (i) a guarded string interpretation of KAT (Kozen and Smith, 1996), in which tests form states, as conceived in Propositional Dynamic Logic (PDL, Fischer and Ladner, 1979), and (ii) a notion of homogeneity associated (by Dowty and other linguists) with statives, and linked below to tests under a conception of time as observable change. These two points are developed below in MSO using reducts. Kozen and Smith's definition of guarded strings is reformulated so that ( †) the MSO-sentence ϕ picking out guarded strings over actions Σ and tests B does not mention B (or their Boolean complements), asserting only that exactly one action occurs at every position except for the final one, where no action occurs. Precisely what ( †) means is taken up in section 2, with the help of reducts. Why ( †) is significant becomes plain in section 3, where the reformulation is used to clarify the connection with tests and states in PDL. 1 A notion of temporal granularity based on observable change in MSO is built on projections that compress reducts. These projections are applied in section 4 to generalize interval networks from (Allen, 1983). Guarded strings, MSO and reducts For any finite set Σ, let Reg Σ be the set of languages over the alphabet Σ accepted by finite automata. Then Reg Σ , ∪, ·, * , ∅, is a KA -arguably, the Σ-canonical KA. For a KA with tests, we start in §2.1 with a finite set B of tests, and present the free Boolean algebra generated by B in terms of powersets 2 X of sets X. Strings over the alphabet 2 B∪Σ are then used in §2.2 for an extension to a KA. This deviates tellingly from Kozen and Smith (1996)'s presentation of guarded strings over the alphabet Σ ∪ B ∪ B with Boolean complements B of B, reviewed in §2.3. The deviation is natural from the perspective of MSO, which is brought into the picture along with reducts in §2.4. Finite free Boolean algebras Given a set B, the set T B of Boolean terms over B is the smallest set ⊇-containing B ∪ {0, 1} that is closed under the binary connectives +, · and the unary connective c (for complements). Assuming B is finite, the free Boolean algebra generated by B is (with addition ∪, multiplication ∩, and complement 2 B \ X of a subset X of 2 B ). 
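Because several displayed formulas in this passage were lost in extraction, a small illustrative sketch of the finite free Boolean algebra may help: a B-atom is a subset of B, an element of the free algebra is a set of atoms, and a Boolean term denotes the set of atoms at which it holds, a generator b holding at an atom q exactly when b is in q. The Python below uses my own encoding of terms as nested tuples; it is not notation from the paper.

```python
from itertools import chain, combinations

def atoms(B):
    """All B-atoms, i.e., all subsets of the finite generator set B."""
    B = list(B)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))]

def holds(term, q):
    """Truth of a Boolean term over B at a B-atom q (a subset of B).

    Terms are encoded as: 0, 1, a generator name (string),
    ('+', t1, t2) for sum, ('.', t1, t2) for product, ('c', t) for complement.
    """
    if term == 0:
        return False
    if term == 1:
        return True
    if isinstance(term, str):
        return term in q
    op = term[0]
    if op == '+':
        return holds(term[1], q) or holds(term[2], q)
    if op == '.':
        return holds(term[1], q) and holds(term[2], q)
    if op == 'c':
        return not holds(term[1], q)
    raise ValueError(term)

def denotation(term, B):
    """The element of the free Boolean algebra (a set of B-atoms) named by a term."""
    return {q for q in atoms(B) if holds(term, q)}

# Example: over B = {'b1', 'b2'}, the term b1 + b1^c denotes all four atoms, i.e., 1.
# print(denotation(('+', 'b1', ('c', 'b1')), {'b1', 'b2'}))
```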
A B-atom is a subset q of B, and is used to interpret Boolean terms over B as follows and for terms t, t ∈ T B , Guarded strings of sets Next, given a set Σ disjoint from T B , Σ ∩ T B = ∅, let the set T Σ,B of (Σ, B)-terms be the smallest set containing Σ ∪ T B that is closed under the binary connectives +, · and the unary connective * . (i) For any string s of length > 0, let α s be the symbol that occurs first in s. (ii) For any symbol q and language L, let L[q] be the set of strings that, with q attached to the right, belong to L L[q] := {s | sq ∈ L}. Now, given sets L and L of strings of length > 0, the Σ-fused product of L and L is the set of strings ss from s ∈ L and s such that sq ∈ L where q is α s \ Σ. That is, where · Σ is a partial binary function on strings of length > 0 such that Notice that if L and L are both sets of B-atoms, then their Σ-fused product is just their intersection where the Σ-asterate Σ is the Σ-fused analog of Kleene star Strings in place of sets Guarded strings in Kozen and Smith (1996) are conceived over an alphabet different from 2 B∪Σ by fixing a string b 1 · · · b n that enumerates without repetition (making n the cardinality of B). Each b ∈ B is paired with a fresh test b, relative to which a B-atom q ⊆ B can be understood as n choices c 1 · · · c n between b i and b i , with 2 B is repackaged as the language of guarded strings over Σ and B, with alphabet In place of the Σ-fused product • Σ , we have the coalesced product n L n L := {sŝs | sŝ ∈ L,ŝs ∈ L and length(ŝ) = n}. Inasmuch as the two KATs over 2 G B Σ and 2 G Σ,B are isomorphic, it is tempting to dismiss the difference recorded in Table 1 as cosmetic. Nonetheless, there are reasons for preferring 2 B over A B from the perspective of MSO, a natural home for Boolean tests, with or without atoms. MSO and reducts Given a finite set A, an MSO A -model is understood (in this paper) to be a structure [n], S n , {U a } a∈A over the set [n] := {1, . . . , n} of integers from 1 to n (for some positive integer n), with the successor relation , and for each a ∈ A, a subset U a of [n]. We can identify [n], S n , {U a } a∈A with the string α 1 · · · α n over the alphabet 2 A given by making U a the set of positions where a occurs To construe a string a 1 · · · a n ∈ A + as an MSO Amodel, we lift it to a 1 · · · a n ∈ (2 A ) + , drawing boxes instead of curly braces {, } for sets qua string symbols, as opposed to sets qua languages. 2 Given a string s over the alphabet 2 A and a subset Indeed, we can describe G B Σ by embedding Σ into 2 Σ∪B via or by MSO A -formulas built with unary predicate symbols P a labeled by a ∈ A and the binary predicate symbol S (for successors). Proposition 1. For any disjoint sets Σ and B, (saying no two symbols from Σ occur at x). Note that ∀xχ Σ (x) is an MSO Σ -sentence stating ( †) exactly one symbol from Σ occurs at every string position except for the last position, where no symbol from Σ occurs. Inasmuch as ( †) describes a very particular encoding of guarded strings (applicable to G B Σ but not to G Σ,B ), it is natural to ask: can we motivate ( †) without resorting to details of encoding? We will argue in section 3 that we can, observing for now that χ Σ (x) makes no mention of B (belonging, as it does, to MSO Σ ). The price for working with as opposed to Kozen and Smith (1996) is a complication in the alphabet of strings interpreting MSO A from A to 2 A . But since MSO Amodels are already strings over 2 A , that price has already been paid. 
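The encoding of guarded strings as strings over the alphabet 2^(Σ∪B), together with property ( †), is easy to make concrete. In the Python sketch below, is_guarded checks ( †) directly (exactly one action from Σ at every non-final position, none at the final one), and fused_product implements one plausible reading of the Σ-fused product: the final atom of the left string must equal the test part (symbol minus Σ) of the first symbol of the right string, and the shared atom is written once. Since the displayed definition is garbled in this extraction, that reading is an assumption; under it, the product of two sets of B-atoms is their intersection, as asserted in the text.

```python
def is_guarded(s, Sigma):
    """Property (dagger): every position except the last contains exactly one action
    from Sigma, and the last position contains none.  s is a tuple of frozensets."""
    if not s:
        return False
    *body, last = s
    return all(len(sym & Sigma) == 1 for sym in body) and not (last & Sigma)

def fused_product(L, Lp, Sigma):
    """Sigma-fused product (reconstructed reading): glue s and s' when the final atom
    of s equals the test part of the first symbol of s', writing the atom once."""
    out = set()
    for s in L:
        for sp in Lp:
            if s[-1] == sp[0] - Sigma:
                out.add(s[:-1] + sp)
    return out

# Tiny example with one action p and one test b:
Sigma = frozenset({'p'})
q0, q1 = frozenset(), frozenset({'b'})
s1 = (q0 | Sigma, q1)          # atom q0, action p, atom q1
s2 = (q1 | Sigma, q0)          # atom q1, action p, atom q0
assert is_guarded(s1, Sigma) and is_guarded(s2, Sigma)
assert fused_product({s1}, {s2}, Sigma) == {(q0 | Sigma, q1 | Sigma, q0)}
# On sets of atoms (length-1 strings) the product reduces to intersection:
assert fused_product({(q1,)}, {(q1,)}, Sigma) == {(q1,)}
assert fused_product({(q0,)}, {(q1,)}, Sigma) == set()
```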
Rather it is the step from G B Σ to G Σ,B that is costly, complicating the label set A with a set B of labels for complements of B. It is telling that a string in G Σ,B satisfies the MSO {b,b}biconditionals only at positions x where b or b occurs. By contrast, every string in G B Σ can be expanded to a MSO Σ∪B∪B -model satisfying . A crude measure of the complexity of a regular language L ⊆ (2 A ) + is given by Proposition 2. For any finite set A and regular language L ⊆ (2 A ) + , there is a smallest subset A of A such that for some MSO A -formula ϕ, Proposition 2 follows from ( ‡) for all strings s ∈ (2 A ) + , subsets A of A and MSO A -formulas ϕ, Provable by induction on ϕ, ( ‡) is an instance of the satisfaction condition characteristic of institutions (Goguen and Burstall, 1992), to which we shall return in §3.3 below. If the least set A that Proposition 2 associates with L is called the grain of L, then G B Σ has grain Σ (by Proposition 1 and a moment's reflection). Not so the regular language G Σ,B , whose image under the map a 1 · · · a n → a 1 · · · a n has grain Σ ∪ B ∪ B. Proposition 1 consigns B to the background (using MSO's propositional connectives to interpret the Boolean structure of a KAT), drawing all attention to Σ. Indeed, as conceived in PDL, tests belong in Σ -or so we argue in the next section (pace Kozen) The remainder of this section fleshes out, for and is best skipped by readers for whom χ Σ (x) is ugly enough. We let ψ B Σ be ∀x ψ Σ,B (x) for ψ Σ,B (x) given with the help of some abbreviations. For A ⊆ A, let one A (x) be the MSO disjunction one A (x) := a∈A P a (x) saying some symbol from A occurs in position x, and let atm B (x 1 . . . x n ) abbreviate formula ϕ action in Σ program (e.g., test ϕ?) B-atom ⊆ B state ∈ Q guarded string input/output pair ∈ Q × Q (2) says b n + b n can only be followed by a symbol from Σ ∀y(one {bn,bn} (x) ∧ xSy ⊃ one Σ (y)) (2) (allowing for the case where x is the last position of the string), and (3) puts atoms before and after x whenever a symbol from Σ occurs at x and after B (x) abbreviates ∃x 1 · · · ∃x n (xSx 1 ∧ atm B (x 1 . . . x n )). Tests and observable change A test in PDL is a program ϕ? built from a proposition ϕ, where, given a set Q of states, where is the KAT counterpart of ϕ? in Σ, which is assumed disjoint from the set B of Booleans? The present section fills this gap by introducing for every b ∈ B, a test ?b that is interpreted the way an action p in Σ is in KAT, albeit with more care than the "anything-goes" clause that accepts any input/output pair q, q . To regulate the changes effected by an action in Σ, we introduce a labeled transition relation and interpret each p ∈ Σ as the subset writing E(q, p, q ) and (q, p, q ) ∈ E interchangably). The "anything-goes" interpretation is the special case But to capture the meaning of a test ?b in the manner PDL does for ϕ?, we require that E(q, ?b, q ) =⇒ b ∈ q and q = q for all q, q ⊆ B. To align the interpretation closer to the input/output semantics of PDL programs, we will interpret and form B-reducts (removing actions p ∈ Σ buried in guarded strings) before compressing them (according to bc from §3.1). Regulated programs including tests Given sets Σ and B, and for every b ∈ B, a label ?b ∈ Σ ∪ B such that We can then extend any set E ⊆ 2 B × Σ × 2 B to and pick out the subset G E (pronounced "G restricted by E") of G to interpret a term t from T Σ ] E , a few definitions are helpful. 
Let us call a string α 1 · · · α n stutterless if α i = α i+1 for all i ∈ [n − 1]. The block compression bc(s) of a string s = α 1 · · · α n deletes from s every α i such that α Clearly, bc(s) is stutterless and s is stutterless ⇐⇒ s = bc(s). otherwise leaving t as is Observable change Also, let us say Σ is E-active if for every p ∈ Σ, E(q, p, q ) =⇒ q = q for all q, q ⊆ B (requiring that states change under p). and assuming Σ is E-active, The two parts of Proposition 3 can be sharpened at the cost of complicating the notation. Given p ∈ Σ, let us say p is Part 2 For all t ∈ T Σ,B , assuming that every p ∈ Σ from which t is formed is (E, C)-observable. Actions for a specific Boolean The condition that p is (E, C)-observable can be formulated in MSO C∪{p} as saying x and y can be separated by a unary predicate with label from C. Dropping the action p from (4) results in the requirement that every temporal step S change C ∀x∀y (xSy ⊃ diff C (x, y)) (ntc C ) designated (ntc C ) for the slogan no time without change C . This slogan is behind the function bc C that maps a string s to the block compression of its C-reduct Proposition 4. For any C ⊆ A and s ∈ (2 A ) * , s |= (ntc C ) ⇐⇒ bc C (s) = s and bc C (bc C (s)) = bc C (s). To understand the importance of the subscript C, recall that MSO satisfaction |= has the property ( ‡) for all strings s ∈ (2 A ) + , subsets C of A and MSO C -sentences ϕ, ( ‡) brings out a fundamental limitation of an MSO C -sentence ϕ, its insensitivity to differences between strings with the same C-reduct. The significance of the subscript C is easy to overlook when describing G E in MSO. Consider from Proposition 1, the χ Σ (x) conjunct banning two programs in Σ from occurring simultaneously at x. The problem with running p ∈ Σ simultaneously with ?b ∈ Σ at x is that the state transitions they describe under E B may clash. Indeed, programs in PDL and more generally, Dynamic Logic (Harel et al., 2000) are interpreted as executing in isolation; for instance, the PDL test ϕ? ensures the input state does not change, and a random assignment x :=? changes at most the value of x. In both cases, any change from a program running concurrently is ruled out. Put another way, χ Σ (x)'s conjunct ¬two Σ (x) expresses the assumption that each program in Σ is to be understood as covering all programs that might run at x. By contrast, actions described in everyday speech are invariably partial in that (i) their effects are bounded, and (ii) they never occur in isolation. Keeping (i) and (ii) in mind, and zeroing in on a specific Boolean b ∈ B, let us add labels l(b) and r(b) to Σ for actions that mark the left and right borders of b as follows. More precisely, since for every b ∈ C, suppressing ∀x∀y to simplify the notation. Returning now to points (i) and (ii) above, notice that under (l b ) and (r b ), (i) the effects of l(b) and r(b) are confined to b and although Complex actions can be built from a finite set of b-specific actions l(b) and r(b), provided we stay away from the G B Σ postulate ¬two Σ (x), which effectively pretends actions are indivisible atoms. Projections and superpositions Having re-interpreted concatenation · as • Σ and n in section 2 so that its restriction to tests is Boolean conjunction, we present in this section yet another notion of conjunction for combining descriptions of change at varying granularities. We start with the descriptions in §4.1, computing their conjunctions in §4.2. 
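Block compression and C-reducts are elementary string operations, so a short sketch may be useful; the function names are mine. The last assertion checks the connection behind Proposition 4 in the form used here: a string satisfies (ntc_C), no time without change in C, exactly when its C-reduct is already stutterless.

```python
def reduct(s, C):
    """C-reduct: intersect every symbol (a set of labels) with C."""
    C = frozenset(C)
    return tuple(sym & C for sym in s)

def bc(s):
    """Block compression: collapse every run of identical adjacent symbols to one."""
    out = []
    for sym in s:
        if not out or out[-1] != sym:
            out.append(sym)
    return tuple(out)

def bc_C(s, C):
    """Compress the C-reduct of s."""
    return bc(reduct(s, C))

def ntc(s, C):
    """(ntc_C): every successor step changes the C-part of the symbol."""
    C = frozenset(C)
    return all((a & C) != (b & C) for a, b in zip(s, s[1:]))

# Example: observable 1 holds strictly during observable 2, with empty time around them.
s = tuple(map(frozenset, [{}, {2}, {1, 2}, {2}, {}]))
assert bc_C(s, {1, 2}) == s and ntc(s, {1, 2})
assert bc_C(s, {1}) == tuple(map(frozenset, [{}, {1}, {}]))   # coarser granularity
assert not ntc(s, {1})                                        # time passes without change in {1}
# ntc(s, C) holds exactly when the C-reduct of s is already stutterless:
assert ntc(s, {1, 2}) == (bc(reduct(s, {1, 2})) == reduct(s, {1, 2}))
```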
Some star-free descriptions Given a subset C of some fixed set A (determining a fragment MSO A ) and a string s of subsets of C, let us agree the pair (C, s) describes the set of stutterless strings over the alphabet 2 A that bc C maps to s. 4 That is, if we gather together all stutterless strings over 2 A in To illustrate, for Next, we interpret a finite subset C of 2 A × L A as the intersection ] A is also star-free. Continuing the example above, if 2} consists of exactly 13 strings, one for each of the interval relations from Allen (1983), such as 2 1,2 2 depicting 1 during 2 4 The restriction here to stutterless strings is motivated by the Aristotelian dictum, no time without change, a Crelativization of which is enforced by bcC (Proposition 4). (e.g., Fernando, 2016). Generalizing from 2 intervals to any integer n ≥ 2, we can extend the set to a partial function C from 2 [n] to L [n] , defined on certain pairs {i, j} which C maps to a string C({i, j}) depicting an Allen relation between i and j. The result is an interval network with node set [n] and edge set {C ∈ domain(C) | |C| = 2}, each C in which is labeled by the Allen relation depicted by C(C). We can label the edge C by a set L ⊆ L C if we loosen (C, s) to the pair (C, L), interpreted as the inverse image of L under bc C restricted to L A Conjunction as superposition We now define, for any subsets C and C of A, a binary operation & C,C on languages such that for all s ∈ L C and s ∈ L C , As a first stab, observe that if & • forms the componentwise union of strings of the same length α 1 · · · α n & • α 1 · · · α n := (α 1 ∪ α 1 ) · · · (α n ∪ α n ) then ρ C∪C (s) = ρ C (s) & • ρ C (s). More projections Recalling the KAT dichotomy between Booleans in B and actions in Σ (paralleling that between formulas and programs in Dynamic Logic 5 ) it should be noted that the sets C and C have been construed throughout to be subsets of B. The MSOformulas ∆ l b (x) and ∆ r b (x) introducing the actions l(b) and r(b) in §3.3 define a border translation from B to Σ under which bc becomes the removal d 2 of empty boxes underlying projections in the S-strings of Durand and Schwer (2008), with, for instance, the Allen relation 1 during 2 recast as l(2) l(1) r(1) r(2) (Fernando, 2019;Fernando and Vogel, 2019). This section has focused on bc (for tests/statives) to lighten the notation. We can adapt § §4.1, 4.2 for C, C ⊆ Σ, putting d 2 in place of bc. Conclusion The present paper is essentially an argument for interpreting MSO A relative to strings over the alphabet 2 A , rather than strings over the alphabet A. The latter smuggles in an assumption ∀x spec A (x) where spec A (x) is the MSO A (x)-formula a∈A (P a (x) ∧ a ∈A\{a} ¬P a (x)) specifying exactly one label from A for the string position x. For a KAT generated by Booleans B and actions Σ, the alphabet A may contain B ∪ Σ (not to mention B), with the guarded string interpretation in (Kozen and Smith, 1996) imposing spec B (x) and spec Σ (x) at various positions x, treating states as Boolean atoms (absent in an infinite free Boolean algebra) and actions as programs running in isolation (as in Dynamic Logic). Neither spec B (x) nor spec Σ (x) is necessary or desirable for applications where descriptions of states and actions are partial. Section 2 challenges spec B (x), slighting B with a Σ-reduct (Proposition 1), while section 3 puts notions of observable change (described in Propositions 3 and 4) ahead of spec Σ (x) to account for tests. 
Casting spec aside, section 4 compresses C-reducts, for C ⊆ B, and conjoins them by superposition. (More in Fernando, To appear.)
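The componentwise union of section 4.2 and the identity ρ_{C∪C'}(s) = ρ_C(s) &• ρ_{C'}(s) quoted there can be checked mechanically; the sketch below does so on a small example. The language-level superposition &_{C,C'} is not reproduced, since its full definition is not legible in this extraction, and the reduct helper is repeated so the block stands alone.

```python
def reduct(s, C):
    """C-reduct: intersect every symbol (a set of labels) with C."""
    C = frozenset(C)
    return tuple(sym & C for sym in s)

def superpose(s, t):
    """Componentwise union of two equal-length strings of symbol sets (the &-bullet operation)."""
    if len(s) != len(t):
        raise ValueError("superposition is defined only for strings of the same length")
    return tuple(a | b for a, b in zip(s, t))

# Check rho_{C ∪ C'}(s) = rho_C(s) &• rho_{C'}(s) on a small example string:
s = tuple(map(frozenset, [{}, {2}, {1, 2}, {2}, {}]))
C, Cp = {1}, {2}
assert reduct(s, set(C) | set(Cp)) == superpose(reduct(s, C), reduct(s, Cp))
```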
4,949
2019-09-01T00:00:00.000
[ "Mathematics" ]
Spin-charge induced scalarization of Kerr-Newman black-hole spacetimes It has recently been demonstrated that Reissner-Nordström black holes in composed Einstein-Maxwell-scalar field theories can support static scalar field configurations with a non-minimal negative coupling to the Maxwell electromagnetic invariant of the charged spacetime. We here reveal the physically interesting fact that scalar field configurations with a non-minimal positive coupling to the spatially-dependent Maxwell electromagnetic invariant $\mathcal{F} \equiv F_{\mu\nu}F^{\mu\nu}$ can also be supported in black-hole spacetimes. Intriguingly, it is explicitly proved that the positive-coupling black-hole spontaneous scalarization phenomenon is induced by a non-zero combination a · Q ≠ 0 of both the spin a ≡ J/M and the electric charge Q of the central supporting black hole. Using analytical techniques we prove that the regime of existence of the positive-coupling spontaneous scalarization phenomenon of Kerr-Newman black holes with horizon radius r+(M, a, Q) and a non-zero electric charge Q (which, in principle, may be arbitrarily small) is determined by the critical onset line $(a/r_+)_{\text{critical}} = \sqrt{2} - 1$. In particular, spinning and charged Kerr-Newman black holes in the composed Einstein-Maxwell-scalar field theory are spontaneously scalarized by the positively coupled fields in the dimensionless charge regime $0 < \frac{Q}{M} \le \sqrt{2\sqrt{2}-2}$ if their dimensionless spin parameters lie above the critical onset line $\frac{a(Q)}{M} \ge \left[\frac{a(Q)}{M}\right]_{\mathrm{critical}} = \frac{1+\sqrt{1-2\left(2-\sqrt{2}\right)\left(Q/M\right)^2}}{2\sqrt{2}}$. Introduction It is well known, based on the mathematically elegant no-hair theorems presented in [1][2][3][4][5][6][7], that asymptotically flat black-hole spacetimes in the composed Einstein-Maxwell-scalar field theory cannot support spatially regular static configurations of minimally coupled massless scalar fields. The most general black-hole solution of the non-linearly coupled Einstein-Maxwell-massless-scalar field equations is therefore described by the bald (scalarless) Kerr-Newman black-hole spacetime [8,9]. Intriguingly, it has recently been revealed in the physically important works [10,11] (see also [12][13][14][15][16]) that spherically symmetric charged black holes in composed Einstein-Maxwell-scalar field theories whose actions contain a direct non-minimal coupling term f(φ)F [see eq.
(2.1) below] between the scalar field φ and the Maxwell electromagnetic invariant F can support spatially regular configurations of the non-minimally coupled scalar fields. In particular, it has been established [10][11][12][13][14][15][16] that the critical boundary between the familiar (scalarless) black-hole solutions of the Einstein-Maxwell field equations and the hairy (scalarized) black-hole spacetimes that characterize the composed Einstein-Maxwellnonminimally-coupled-scalar field theories is marked by the presence of marginally-stable cloudy black-hole spacetimes (the term 'scalar clouds' is usually used in the physics literature [17][18][19][20] in order to describe spatially regular scalar fields which are linearly coupled to central supporting black holes). These critical spacetimes describe non-minimally coupled linearized scalar fields which, in the spherically symmetric case, are supported by a central charged Reissner-Nordström black hole. As nicely emphasized in [10,11], in order for the familiar (scalarless) charged blackhole solutions of general relativity (the Reissner-Nordström and Kerr-Newman black-hole spacetimes) to be valid solutions of the coupled Einstein-Maxwell-scalar field equations in the φ → 0 limit, the scalar function f (φ), whose coupling to the Maxwell electromagnetic invariant of the charged spacetime can trigger the black-hole spontaneous scalarization phenomenon, should be characterized by the universal weak-field functional behavior f (φ) = 1−αφ 2 +O(φ 4 ). The dimensionless physical parameter α, which in previous studies The intriguing spontaneous scalarization phenomenon of charged black holes in composed Einstein-Maxwell-nonminimally-coupled-scalar field theories is a direct consequence of the appearance of a spatially-dependent effective mass term, which in the weak-field φ → 0 regime has the compact functional form −αF /2 [10,11], in the modified Klein-Gordon equation [see eq. (2.9) below] of the non-trivially coupled scalar field. Interestingly, it has been established [10][11][12][13][14][15][16] that, for negative values of the dimensionless non-minimal coupling parameter α, the effective mass term of the scalar field, which reflects its direct coupling to the Maxwell electromagnetic invariant of the spacetime, may become negative outside the horizon of the central supporting black hole. This important observation, which was first discussed in the context of negatively coupled (α < 0) scalar fields in the physically interesting works [10,11], implies that the composed black-hole-scalar-field effective potential term in the scalar Klein-Gordon wave equation may become attractive (negative) in the vicinity of the outer horizon, thus allowing the central charged black hole to support spatially regular bound-state configurations of the non-minimally coupled scalar fields. The main goal of the present paper is to reveal the physically interesting fact that massless scalar fields with a non-minimal positive coupling (α > 0) to the Maxwell electromagnetic invariant F can also be supported in charged black-hole spacetimes. 
In particular, we shall explicitly prove that the positive-coupling black-hole spontaneous scalarization phenomenon in the composed Einstein-Maxwell-nonminimally-coupled-scalar field theory is spin-and-charge induced in the sense that only non-spherically symmetric Kerr-Newman black holes that possess both angular momentum and electric charge can support massless scalar fields with a non-minimal positive coupling to the Maxwell electromagnetic invariant of the spacetime. Below we shall explore the onset of the spontaneous scalarization phenomenon in spinning and charged Kerr-Newman black-hole spacetimes of the composed Einstein-Maxwellnonminimally-coupled-scalar field theory with positive values of the physical parameter α. 1 Using analytical techniques, we shall derive a remarkably compact functional expression for the Kerr-Newman dimensionless rotation parameter a/M which, for a given value of the black-hole dimensionless electric charge parameter Q/M , 2 determines the critical boundary between bald (scalarless) Kerr-Newman black-hole spacetimes and hairy black-hole-scalarfield bound-state configurations. 1 As nicely demonstrated in [10,11], the critical boundary between bald black-hole spacetimes and hairy (scalarized) black-hole spacetimes is universal for all Einstein-Maxwell-nonminimally-coupled-scalar field theories that share the same weak-field functional behavior f (φ) = 1 − αφ 2 + O(φ 4 ) of the non-minimal scalar coupling function. 2 Here M , J ≡ M a, and Q are respectively the mass, angular momentum, and electric charge of the Kerr-Newman black hole. We shall assume, without loss of generality, the relations a > 0 and Q > 0 for the characteristic physical parameters of the spinning and charged Kerr-Newman black-hole spacetime. JHEP08(2022)272 2 Description of the system We shall study, using analytical techniques, the onset of the positive-coupling spontaneous scalarization phenomenon of spinning and charged Kerr-Newman black holes in the composed Einstein-Maxwell-nonminimally-coupled-massless-scalar field theory whose action is given by the integral expression [10][11][12][13] where the source term, is the Maxwell electromagnetic invariant of the spacetime. Below we shall explicitly prove that the presence of the direct scalar-electromagnetic non-trivial coupling term f (φ)F in the composed action (2.1) allows the existence of spontaneously scalarized black-hole spacetimes in the Einstein-Maxwell-nonminimally-coupled-massless-scalar field theory. The bald (scalarless) spinning and charged Kerr-Newman black-hole solution of the composed Einstein-Maxwell-nonminimally-coupled-scalar field theory (2.1) can be described, using the Boyer-Lindquist spacetime coordinates (t, r, θ, ϕ), by the curved line element [8,9] where the metric functions in (2.3) are given by the mathematically compact functional expressions and The conserved physical quantities {M, a, Q} are respectively the mass, the angular momentum per unit mass, and the electric charge of the black hole. The horizon radii, of the Kerr-Newman black-hole spacetime (2.3) are determined by the roots of the metric function (2.4). 
As emphasized above, it has been proved in [10,11] that, in order for the familiar black-hole spacetimes of general relativity 4 to be valid solutions of the composed field theory (2.1) in the φ → 0 limit, the scalar coupling function f (φ) should be characterized by the weak-field universal functional behavior [10,11] JHEP08 (2022)272 where the dimensionless physical parameter α controls the strength of the non-trivial interaction between the scalar field φ and the Maxwell electromagnetic invariant F of the spacetime. Intriguingly, one finds that the critical existence surface of the theory, which marks the boundary between bald (scalarless) black-hole spacetimes and hairy (scalarized) black-hole solutions of the composed Einstein-Maxwell-nonminimally-coupledmassless-scalar field theory (2.1), is universal for all non-linear scalar coupling functions that share the same weak-field leading-order expansion (2.7) [10][11][12][13]. In the present paper we shall reveal the physically intriguing fact that scalar fields with a non-minimal positive coupling (α > 0) to the Maxwell electromagnetic invariant (2.2) can also be supported in black-hole spacetimes. In particular, in the next section we shall prove that the spontaneous scalarization phenomenon of positively-coupled scalar-electromagnetic fields is a unique feature of black holes that possess a combination of both non-zero spins and non-zero electric charges (which, in principle, may take arbitrarily small non-zero values). Hence, we shall henceforth assume the characteristic relation a · Q = 0 (2.10) for the central supporting Kerr-Newman black holes. Onset of positive-coupling spontaneous scalarization phenomenon in spinning and charged Kerr-Newman black-hole spacetimes In the present section we shall study the onset of the positive-coupling spontaneous scalarization phenomenon in the composed Einstein-Maxwell-nonminimally-coupled-masslessscalar field theory (2.1). In particular, using analytical techniques, we shall determine the critical onset-line a crit = a(Q) crit of the composed physical system which, for a given non-zero value of the black-hole electric charge, determines the minimal value of the blackhole spin that can trigger the positive-coupling spontaneous scalarization phenomenon of the spinning and charged Kerr-Newman black holes. JHEP08(2022)272 The presence of an attractive (negative) effective potential well in the modified Klein-Gordon wave equation (2.8) provides a necessary condition for the existence of non-minimally coupled scalar clouds (spatially regular bound-state field configurations) in the exterior region of the black-hole spacetime [21][22][23][24]. Interestingly, from the functional expression (2.2) one deduces that, depending on the angular momentum a of the spinning and charged black hole and the polar angle θ, the spatially-dependent effective mass term (2.9), which characterizes the composed black-hole-nonminimally-coupled-scalar-field system, may become negative (thus representing an attractive potential well) in the vicinity of the horizon of the central Kerr-Newman black hole. 
In particular, the onset of the physically intriguing spontaneous scalarization phenomenon in the Kerr-Newman spacetime (2.3) is characterized by the critical functional relation [21][22][23][24] and F 02 = 2Qa 2 r sin θ cos θ (r 2 + a 2 cos 2 θ) 2 ; We shall now prove that the critical onset-line a crit = a(Q) crit of the composed Einstein-Maxwell-massless-scalar field theory (2.1), which describes cloudy bound-state Kerr-Newman-black-hole-nonminimally-coupled-linearized-massless-scalar-field configurations with the critical property (3.2), can be determined analytically. In particular, we shall explicitly show that the critical functional relation (3.2), which determines the onset of the positive-coupling spontaneous scalarization phenomenon of Kerr-Newman black holes, can be solved analytically. To this end, it proves useful to define the composed dimensionless variable x ≡ a 2 cos 2 θ r 2 , (3.9) in terms of which the Maxwell electromagnetic invariant (3.6) can be written in the remarkably compact dimensionless functional form Substituting the expression (3.10) into (3.2), one obtains the critical quadratic equation which yields the critical value 6 for the dimensionless ratio (3.9) at the onset of the positive-coupling spontaneous scalarization phenomenon of the spinning and charged Kerr-Newman black holes. 6 Note that the second solution of the critical quadratic equation (3.11) is given by the dimensionless expression x + crit = 3 + 2 √ 2 > 1 and it therefore violates the physical requirement x ≤ 1 which follows from the inequalities a/r ≤ a/r+ ≤ 1 [see eq. (2.6)] and cos 2 θ ≤ 1. JHEP08(2022)272 Taking cognizance of eq. (3.9) one deduces that, for a given non-zero value of the black-hole electric charge parameter Q, the minimally allowed value of the black-hole spin a which is compatible with the positive-coupling spontaneous scalarization condition (3.2) can be inferred from the analytically derived dimensionless critical relation (3.12) with the maximally allowed value of the spatially dependent expression cos 2 θ/r 2 . In particular, the composed expression cos 2 θ/r 2 is maximized by the coordinate values (cos 2 θ) max = 1 with r min = r + (M, a, Q) at the poles of the black-hole horizon, which yields for the critical existence-line of the composed Kerr-Newman-black-hole-nonminimallycoupled-massless-scalar-field configurations. The analytically derived functional expression (3.15) determines the minimally allowed value of the Kerr-Newman black-hole spin a = a critical (Q) that can trigger, for a given non-zero value of the black-hole electric charge parameter Q, the positive-coupling spontaneous scalarization phenomenon in the composed Einstein-Maxwell-nonminimally-coupled-scalar field theory (2.1). Interestingly, one finds from (3.15) that the critical spin parameter of the cloudy Kerr-Newman black holes is a monotonically decreasing function of the black-hole electric charge parameter. 
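The closed-form results in this section can be checked numerically. In the Python sketch below, the expression used for the Kerr-Newman Maxwell invariant is the standard textbook closed form, assumed here because eq. (3.6) is not fully legible in this extraction; the onset line (3.15), the root x_crit = 3 − 2√2, and the extremal charge √(2√2 − 2) are taken from the text. Note that with cos²θ = 1 and r = r+, x_crit = 3 − 2√2 = (√2 − 1)² gives the onset line (a/r+)_crit = √2 − 1.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def kn_maxwell_invariant(r, theta, a, Q):
    """F = F_{mu nu} F^{mu nu} for the Kerr-Newman spacetime (standard closed form,
    assumed here; eq. (3.6) is garbled in this extraction)."""
    x = (a * np.cos(theta)) ** 2                      # a^2 cos^2(theta)
    return -2.0 * Q**2 * (r**4 - 6.0 * r**2 * x + x**2) / (r**2 + x) ** 4

def r_plus(M, a, Q):
    """Outer horizon radius of the Kerr-Newman metric, eq. (2.6)."""
    return M + np.sqrt(M**2 - a**2 - Q**2)

def a_critical(Q, M=1.0):
    """Critical onset line (3.15) for positive-coupling scalarization."""
    return M * (1.0 + np.sqrt(1.0 - 2.0 * (2.0 - SQRT2) * (Q / M) ** 2)) / (2.0 * SQRT2)

# x_crit = 3 - 2*sqrt(2) solves the onset quadratic x^2 - 6x + 1 = 0; the other root,
# 3 + 2*sqrt(2), exceeds 1 and is unphysical (footnote 6).
x_crit = 3.0 - 2.0 * SQRT2
assert abs(x_crit**2 - 6.0 * x_crit + 1.0) < 1e-12

M = 1.0
Q_max = np.sqrt(2.0 * SQRT2 - 2.0)                    # ~0.910 M, maximal charge on the onset line
for Q in np.linspace(1e-3, 0.99 * Q_max, 5):
    a = a_critical(Q, M)
    rp = r_plus(M, a, Q)
    assert abs(a / rp - (SQRT2 - 1.0)) < 1e-10                 # (a/r+)_crit = sqrt(2) - 1
    assert abs(kn_maxwell_invariant(rp, 0.0, a, Q)) < 1e-10    # F vanishes at the horizon pole

# At Q = Q_max the onset line meets the extremal bound a^2 + Q^2 = M^2.
assert abs(a_critical(Q_max, M) ** 2 + Q_max**2 - M**2) < 1e-10
```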
In particular, the charge-dependent critical rotation parameter a critical = a critical (Q), which determines the onset of the positive-coupling spontaneous scalarization phenomenon in the Einstein-Maxwell-nonminimally-coupled-scalar field theory (2.1), attains its global minimum value, at the extremal Kerr-Newman limit (a 2 + Q 2 )/M 2 → 1 −7 with JHEP08(2022)272 It is physically interesting to point out that the classically allowed polar angular region for the positive-coupling near-horizon spontaneous scalarization phenomenon of spinning and charged Kerr-Newman black holes in the super-critical regime a/r + ≥ (a/r + ) crit is a monotonically increasing function of the dimensionless spin parameter a/r + . In particular, the near-horizon Maxwell electromagnetic invariant (3.6) of the Kerr-Newman black-hole spacetime becomes positive in the polar angular range The polar range (3.18), which can also be expressed in the remarkably compact form [see eq. (3.14)] a crit a 2 ≤ cos 2 θ scalar ≤ 1 , (3.19) defines, in the super-critical regime a ≥ a crit , the classically allowed angular region for the spontaneous scalarization phenomenon of positively-coupled scalar fields in the nearhorizon region of the spinning and charged Kerr-Newman black holes. In particular, the classically allowed angular region (3.18) is characterized by the limiting property for the maximally spinning Kerr-Newman black hole with a/M → 1 − . Summary and discussion It has recently been proved [10,11] (see also [12][13][14][15][16]) that charged black holes in composed Einstein-Maxwell-scalar field theories in which the scalar field is non-trivially coupled to the Maxwell electromagnetic invariant of the charged spacetime with a negative coupling constant (α < 0) can support bound-state hairy configurations of the scalar field. Motivated by this physically intriguing observation, in the present paper we have revealed the fact that scalar fields which are positively-coupled (α > 0) to the Maxwell electromagnetic invariant can also be supported in asymptotically flat black-hole spacetimes. In particular, we have studied, using analytical techniques, the onset of the positivecoupling spontaneous scalarization phenomenon in spinning and charged Kerr-Newman black-hole spacetimes. The main results derived in this paper and their physical implications are as follows: 1. We have revealed the physically intriguing fact that the black-hole spontaneous scalarization phenomenon of positively-coupled scalar-Maxwell fields is a unique feature of black holes that possess a non-zero combination (a·Q = 0) of angular momentum and electric charge [see eq. (2.10)]. Thus, the scalar-Maxwell positive-coupling spontaneous scalarization phenomenon of black holes is a spin-charge induced phenomenon. This is the largest classically allowed polar angular region for the positive-coupling near-horizon spontaneous scalarization phenomenon of the spinning and charged Kerr-Newman black holes.
3,488.8
2022-08-01T00:00:00.000
[ "Physics" ]
Entrepreneurship Education in South Africa’s Higher Education Institutions: In Pursuit of Promoting Self-Reliance in Students Purpose: Entrepreneurship Education (EE) has garnered increased attention in South African higher education institutions due to its potential to foster self-reliance and job creation in a country grappling with high unemployment rates. This study examines entrepreneurship education's role in promoting economic development and poverty alleviation in South Africa. Study design/methodology/approach: The study adopts a systematic review approach to explore the effectiveness of entrepreneurship education in addressing unemployment, creating employment opportunities, and fostering economic growth in South Africa. The study seeks to identify key findings and insights regarding integrating entrepreneurship education into teaching and learning practices in higher education institutions by synthesising existing literature. Findings: The findings indicate that entrepreneurship education has the potential to accelerate economic growth, reduce unemployment rates, and alleviate poverty traps in South Africa. By equipping students with entrepreneurial skills and mindset, entrepreneurship education can empower individuals to establish and sustain small businesses, thus contributing to economic development and poverty reduction. Originality/value: This study contributes to the existing literature by providing insights into the role of entrepreneurship education in addressing socio-economic challenges in South Africa. The findings underscore the importance of integrating entrepreneurship education into teaching and learning practices across all higher education institutions to tackle unemployment and poverty in the country effectively. The study emphasises the need for concerted efforts from stakeholders in higher education to prioritise entrepreneurship education to foster economic resilience and social upliftment in South Africa.
Introduction The unemployment rate has become increasingly high in the South African economy. It has generated some spin-offs that portend undesirable consequences, not only for economic development but also for the sane social-cultural coexistence of the people. Rather than clinging to an endless hope for formal employment, recourse to entrepreneurship has been touted as a possible antidote for confronting the situation. However, a prerequisite to self-employment is entrepreneurial intention. Thousands of students graduate annually from various higher education institutions across South Africa, and many are unemployed years after graduation. Hence, South Africa's graduate unemployment is a significant concern for many - families, businesses, and government. Notably, the unemployment rate has surpassed 25%, a trend that Chimucheka (2014) identifies as one of the factors increasing social ills in South African society. This high unemployment implies that a degree alone is insufficient for employment (GEM, 2015; Iwu et al., 2021); entrepreneurship skills are also required. However, despite the suitability of entrepreneurship as an alternative to traditional employment, entrepreneurial activity is currently low in South Africa. A recent OECD economic survey identified low entrepreneurship in South Africa compared to other emerging economies (OECD, 2017). Therefore, a critical way forward is to expose South African youths to entrepreneurial education, enabling them to escape the vicious cycle of poverty (North, 2002). The likelihood of a business venture succeeding depends on a graduate's business skills (GEM, 2012). It is, therefore, not a question of whether to provide students with entrepreneurship education or whether those in entrepreneurial training find any value in it. By finding value, we mean taking up an entrepreneurial activity at the end of the study. This is the basis upon which this study was conducted.

Entrepreneurship education stimulates the desire of students to choose self-employment after graduation (Lawan et al., 2015; Premand et al., 2016). Through this programme, students become aware of different ways to start business ventures and the available support services (Fatoki, 2010; Katundu & Gabagambi, 2016). Interestingly, Makgosa and Ongori (2012) and Rudhumbu, Svotwa, Munyanyiwa, and Mutsau (2016) note that despite vocational education and entrepreneurship support programmes, graduates rarely consider entrepreneurship as a career or show interest in becoming entrepreneurs. This study aimed to provide insight into the need for entrepreneurship education in higher education curricula across the Republic to deal with the rising unemployment and other related social issues. Entrepreneurship, as a multifaceted paradigm, has been defined by various scholars; the literature has yet to provide a cohesive definition of entrepreneurship. According to Nicolaides (2011), entrepreneurship is a process that nurtures and promotes economic growth, job creation and prosperity through viable businesses. Rwigema and Venter (2004) also define entrepreneurship as the process of innovative conceptualisation, organisation, and management of a sustainable business. It can also be viewed as an educational approach that provides knowledge and skills that can create economic goods and services while creating job opportunities simultaneously.
Entrepreneurship education offers its students a combination of different experiences (Walter et al., 2013).Different studies have revealed that entrepreneurship education enhances the intention to be an entrepreneur, behaviour, and attitude through improving entrepreneurial attention and competency (Bae et al., 2014;Kuratko, 2005;Martin et al., 2013).The advancement of entrepreneurship education in recent years is closely related to research on entrepreneurial learning, which has received significant research attention since the start of the twenty-first century and has been an increasingly important basis for developing pedagogy in entrepreneurship education.Different learning theories were influenced on teaching entrepreneurship, including action learning (Revans, 1982), transformative learning (Paprock, 1992), experiential learning theory (Kolb, 1984), and additional theories of learning focusing on action and change.Through teaching and learning, skills and knowledge that are believed to be one of a handful of areas in which a nation can have a competitive advantage can be developed.Various scholars have argued that the significance of entrepreneurial education is derived from the economic benefits to individuals, communities and the nation at large (Adeoye, 2020). Conversely, several studies have highlighted entrepreneurship teaching and learning in higher education as one of the most important factors that could contribute to entrepreneurial activities in South Africa (Van Vuuren & Groenewald, 2007;Kanayo, 2021).Similarly, classroom teaching and learning entrepreneurship education play an essential role in developing entrepreneurial skills in students (Witbooi & Ukpere, 2011).Alvarez-Risco et al. (2021) argue that having entrepreneurial skills considerably expands the chance of an individual owning or managing a business.Kanayo (2021) asserts that a likely resolution for entrepreneurs is the introduction of teaching and learning entrepreneurial skills in universities.Such programs assist students in owning businesses and positively impact the sustainability of their businesses.Clark et al. (2021) assert that teaching and learning entrepreneurship skills alone is insufficient, but taking emotional charge must be encouraged.For entrepreneurship students to successfully own and manage their businesses, a positive attitude and confidence are essential for their intentions and actions.Students must be taught how to take risks, as the ability to take risks is an essential characteristic of being an entrepreneur (Adeoye, 2020).It is believed that people with a strong sense of ability make a more significant effort to master challenges.Hence, a successful entrepreneur needs strong entrepreneurial self-efficacy, which can push such an entrepreneur to perform various tasks and roles successfully (Adeoye, 2020).Students need to construct meaningful concepts from entrepreneurship education to embrace and grow their interests.Thus, the authors adopt cognitive constructivist learning theory to underpin this study. 
Theoretical foundation According to Sailin and Mahmor (2018), cognitive constructivist learning theory involves drawing meaningful learning from experiences.Meaningful learning means that the students can acquire knowledge and skills that will enable them to learn the necessary vocations and start and become successful entrepreneurs in society.The theory is a combination of cognitive and constructivist approaches.According to Stewart (2021), cognitivist teaching strategies aim to help South African students integrate new knowledge into their body of prior knowledge and empower them to adapt their existing conceptual framework to incorporate that knowledge.This indicates that entrepreneurship students will acquire new knowledge and skills to build on their previous knowledge.They are building on previously acquired knowledge to develop and understand new knowledge offered to them, to make quality meaning that can be acted upon.While constructivism is considered an innovative approach that influences the teaching method, constructivism creates strategies that enable students to gain knowledge.For students to actively gain knowledge, teachers need to create a method by which the content of the study will be thought through.This method or strategy will change the teaching content for easy understanding by the students in various South African HEIs.Li & Guo (2015) aver that in constructivism, the teacher is a facilitator rather than the only resource during the teaching and learning process.A constructivist approach's underlying premise is that learning involves creating knowledge rather than simply receiving it.Instead of repeating, it is about comprehending and applying (Sulistyowati, 2019).Constructivism's main takeaway is that students can build their knowledge through active learning and interpret ideas in their own ways.This motive is essential for entrepreneurship students who aim to gain knowledge that will prepare them to start and successfully manage businesses. Based on cognitive constructivist learning theory, students assume responsibility for actively constructing meaning through conversation with others and themselves.Understanding is a valuable goal since it allows for incorporating the content into the student's existing knowledge.The main takeaway is that learning is a dynamic and ongoing process (Stewart, 2021).To develop students' entrepreneurial skills effectively, teaching and learning must continuously evaluate and adjust to unexpected changes in students' behaviour and thinking as teaching progresses.According to the theory, teachers must structure lessons so that students can link and make sense of concepts and facts.The objective is to integrate and inspire students to take a broad and critical view of society.This goal correlates with the aim of the study, which is to develop university students' entrepreneurial skills to make meaningful contributions to the economy and society. The Realities of Entrepreneurship Education in South Africa's Higher Education Institutions Entrepreneurship education in South Africa's higher education institutions (HEIs) is increasingly recognised as a pivotal avenue for fostering student self-reliance and economic empowerment (Iwu et al., 2021).As Kitchenham et al. 
(2009) emphasised, HEIs play a crucial role in equipping graduates with the skills and mindset necessary to navigate the complexities of the modern economy.In the South African context, where youth unemployment rates remain persistently high, entrepreneurship education holds promise as a pathway towards selfsufficiency and job creation (OECD, 2019).However, the realisation of these prospects is accompanied by many challenges.One of the primary challenges facing entrepreneurship education in South African HEIs is the need to reconcile academic rigour with practical relevance.While theoretical knowledge forms the cornerstone of traditional higher education, the dynamic nature of entrepreneurial endeavours necessitates a more experiential learning approach (Hmelo-Silver, 2004).Bridging this gap requires HEIs to adapt their curricula to incorporate hands-on learning experiences, such as internships, incubators, and real-world business projects (Braun & Clarke, 2006).Furthermore, integrating entrepreneurial education across disciplines can enhance its impact and relevance, fostering a culture of innovation and creativity (Jonassen, 1997). Moreover, the accessibility and inclusivity of entrepreneurship education remain significant challenges in South Africa.Historically marginalised communities, including women, rural populations, and individuals from low-income backgrounds, often face barriers to accessing entrepreneurial opportunities and resources (Hart, 2018).Addressing these disparities requires a concerted effort to democratise entrepreneurship education, providing tailored support and mentorship to underrepresented groups (Darling-Hammond & Baratz-Snowden, 2005).Additionally, partnerships between HEIs, government agencies, and private sector stakeholders are crucial for expanding access to funding, networking, and infrastructure (European Commission, 2013).Another critical challenge confronting entrepreneurship education in South African HEIs is the imperative to foster an entrepreneurial mindset among students.Beyond acquiring technical skills, entrepreneurship requires resilience, adaptability, and a willingness to take calculated risks (Bandura, 1986).Cultivating these attributes demands a holistic approach to education that encompasses academic instruction, personal development, and mentorship (Kunter et al., 2013).Furthermore, embedding entrepreneurship within the broader socio-economic context of South Africa can help students recognise the role of entrepreneurship in addressing societal challenges and driving sustainable development (Gu et al., 2020). Conversely, while entrepreneurship education holds immense promise for fostering selfreliance and economic empowerment in South Africa's HEIs, it has its challenges.Balancing academic rigour with practical relevance, ensuring accessibility and inclusivity, and cultivating an entrepreneurial mindset are among the key imperatives facing stakeholders in higher education.Addressing these challenges requires a multi-faceted approach that combines curriculum innovation, targeted support for marginalised groups, and collaborative partnerships across sectors.By overcoming these obstacles, entrepreneurship education can play a pivotal role in shaping a more prosperous and equitable future for South Africa. 
Research Methodology This study utilised a systematic literature review, following Kitchenham (2004) and Kitchenham and Charters (2007) guidelines.A systematic literature review states a research protocol for evaluating and interpreting all relevant research based on the research question, phenomenon of interest, or area (Kitchenham, 2004).The review was conducted in three phases: planning, conducting, and reporting.Those three phases have sub-elements, including (1) identification of review questions; (2) formulation of a review protocol; (3) developing inclusion and exclusion criteria; (4) reviewing strategy and selection procedures; (5) studying quality assessment; and (6) strategy for data extraction and reporting the answers to the research questions.Research methodology ensures scholarly investigations' rigour, validity, and reliability.In studying entrepreneurship education (EE) in South African higher education institutions, a systematic review methodology is employed to synthesise existing literature and generate evidence-based insights.Systematic reviews offer a structured approach to identifying, selecting, and critically appraising relevant studies, providing a comprehensive overview of current knowledge on a particular topic (Higgins & Green, 2011).By systematically searching multiple databases and sources, researchers can minimise bias and ensure the inclusivity of diverse perspectives and findings, enhancing the robustness of the synthesis process (Petticrew & Roberts, 2006). The systematic review methodology adopted in this study adheres to established guidelines and protocols, including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework (Moher et al., 2009).Following a predefined search strategy, relevant articles, reports, and studies about EE in South African higher education are systematically identified and screened for eligibility.Through a rigorous selection process based on predefined inclusion and exclusion criteria, studies meeting the predefined criteria are included for detailed analysis and synthesis (Higgins & Green, 2011).Thus, the study employed PRISMA flow to search for relevant literature sources, using keywords like "Entrepreneurship Education", "Higher Education", "South Africa", "Self-Reliance", and "Entrepreneurs".A total of 122 literature sources were accessed and screened to 52, appropriately focusing on the objectives of this study (see Fig 1 below).All studies written in other languages, not focused on the study's keywords, were excluded from the data.Page et al. 
(2021) argue that this flow enables researchers to include and exclude sources based on their relevance to the study.Critical appraisal of the selected studies is conducted to assess the evidence's quality, relevance, and methodological rigour, ensuring the synthesised findings' validity and reliability (Petticrew & Roberts, 2006).This study aims to advance knowledge and understanding of entrepreneurship education in South African higher education by employing a systematic review methodology.Through the synthesis and analysis of existing literature, the study seeks to identify key themes, trends, and gaps in the current research landscape, offering valuable insights for policymakers, educators, and stakeholders involved in entrepreneurship education initiatives.Additionally, the systematic review methodology enhances the transparency, reproducibility, and credibility of the research findings, thereby strengthening the overall impact and utility of the study in informing evidence-based decision-making and future research endeavours (Higgins & Green, 2011;Petticrew & Roberts, 2006). Presentation of the findings Findings from the systematic literature review revealed some themes that will be used to present findings in this section.These themes explain the findings from the study to present in-depth information for scholarship. Theme 1: Universities' roles in teaching entrepreneurship Given the increasing diversity of scholarship regarding entrepreneurship education, this study aims to use a systematic literature review to assess research published in the last decade and evaluate its general involvement in and impact on the field.This paper has three main objectives.First, it aims to systematically collect, summarise, analyse, and synthesise the information from previous studies published between 2009 and 2019.Second, it aims to examine in detail the findings of these systematically collected studies to offer a detailed picture of the present situation of entrepreneurship education.Finally, it aims to identify the research gaps that require further investigation and reveal opportunities for future research in entrepreneurship education.Universities are acknowledged as knowledge-intensive institutions and settings that support the growth of human capital, innovation, and entrepreneurship.Universities can encourage business thinking and entrepreneurial culture when their contributions to advanced research, education, and knowledge networking are considered.Universities play a role in developing business thinking and the general transfer of technology (Jessop, 2017;Adanlawo & Chaka, 2022).Universities participate actively in the entrepreneurial discovery process.Universities are the connecting institutions between all partners in an entrepreneurial ecosystem (Wang, 2021).These universities, according to Wang, are engaged in collaborations, networks, and other connections with public and private entities to support interaction, collaboration, and cooperation within the country's innovation system (Wang, 2021).Universities are not socially isolated; they work closely with business and government while maintaining their mostly independent status.South African universities should encourage entrepreneurship (Ajani et al., 2021).Entrepreneurial universities, innovation clusters, and knowledge transfer are intimately tied to entrepreneurial ecosystems (Fuster et al., 2019;Ajani et al., 2023).Universities that have created various systems for producing and disseminating information to advance national 
development are called entrepreneurial universities (Foster, 2022).They deliver instruction and extracurricular activities to promote the growth of entrepreneurial behaviour.Collaboration with external stakeholders is crucial for "entrepreneurial universities" since it offers connections, knowledge, and entrepreneurial experience for entrepreneurship education.Webb (2021) and Foster (2022) aver that innovation can flourish in universities, where knowledge and skills are taught in greater depth than at other levels of education. Additionally, programs and soft skills are combined with technical business training in university entrepreneurship education.Universities frequently serve as change agents that encourage human connection, knowledge transfer, and the development of trust amongst many stakeholders due to their closeness to other communities (Jessop, 2017;Klofsten et al., 2019).For students to work in an entrepreneurial manner, universities provide entrepreneurship education that cultivates and develops the attitudes, knowledge, and competencies necessary (Fuster et al., 2017;Wang, 2021).They aid in developing a business's culture and help students gain human abilities, including teamwork, leadership, creativity, and the capacity to function under pressure (Jessop, 2017).The role of universities in building students to become successful entrepreneurs who will make meaningful contributions to the economy is immeasurable. Theme 2: Nexus between Entrepreneurship Education and the Global Economy Entrepreneurship education has been given various definitions.Fayolle and Lassas-Clerc (2006) indicated that entrepreneurship education is an educational method related to entrepreneurs' attitudes and skills and can also be used to improve the qualities of individuals.Entrepreneurship education is also called developing behaviours, skills, and attitudes that individuals can use in an entrepreneurship-based career (Wilson, 2008).Also, according to Bechard and Toulouse (1998), EE is defined as formal teachings that inform, educate, and train those who want to start businesses or develop small ventures.Based on all these definitions, the researchers of this paper defined entrepreneurship education as developing entrepreneurship skills to enhance entrepreneurial intention, improve employability, and educate entrepreneurs to start a successful business.The recent global economic recession has witnessed many of the world's largest companies continuing to engage in massive downsizing. Conversely, small businesses continue to create more jobs.For instance, in the US, small businesses created 1,625,000 jobs (Martin, 2015).In sub-Saharan African countries, massive improvements have been recorded.In Zambia, 55 per cent of the working population is involved in small businesses (Richardson et al., 2004).Mozambique records 98.6% of enterprises, which employ 46.9% of the nation's workforce.Similarly, about 4 million people in Tanzania who are small businesses contribute about 45% to the country's GDP (Anderson, 2017).With these statistics, small businesses have found a home in most African nations, primarily because they are easier to set up than big enterprises. 
Nevertheless, the maximum potential of entrepreneurship in South African universities is yet to be achieved, simply because of the constraints it faces from the start-up stage to the survival stage (Ajani et al., 2023). Unfortunately, these limitations have not received enough attention from university academics (Ferreras-Garcia et al., 2021). This gap necessitated this study. Though South Africa is confronted with challenges of insufficient managerial abilities, owing to inadequate frameworks of instruction and skills training to tap entrepreneurship potential to the fullest, Clark et al. (2021) advise that entrepreneurship skills be developed among university students through teaching and learning. Entrepreneurship education has the potential to contribute significantly to economic growth, which supports the value of increasing awareness of this field (Carland & Carland, 2004; Hall et al., 2010). Gartner and Vesper (1994) indicated that the provision of entrepreneurial education programs for undergraduates and postgraduates has grown significantly in Asia, North America, New Zealand, Europe, and Australia. Entrepreneurship education's influence, however, varies based on the national and local context (Ahmad et al., 2018; Chen & Agrawal, 2018). Notably, Nabi et al. (2017) revealed that studies on the influence of entrepreneurship education in rapidly developing countries account for only 5% of empirical samples (e.g., four of the BRICS countries: China, Russia, India, and Brazil). In the 1980s, management education appeared in a few universities in China, while in the early to mid-1990s management schools were established and MBA programs were introduced. Businesses later helped further develop entrepreneurship education (Li et al., 2003). In China, according to Li et al. (2003), entrepreneurship education has improved recently (it was a new concept until 2001), with testing for entrepreneurship education introduced in nine universities by the Ministry of Education, which proved successful and has since been developed extensively. Theme 3: The need for Entrepreneurship skills among university students Entrepreneurship education fosters entrepreneurial intentions, behaviours, and attitudes among students, thereby contributing to entrepreneurial development (Walter et al., 2013; Bae et al., 2014; Kuratko, 2005; Martin et al., 2013). This educational paradigm has evolved significantly in recent years, drawing from research on entrepreneurial learning and various learning theories to inform pedagogical approaches (Revans, 1982; Paprock, 1992; Kolb, 1984). Particularly in regions like Africa, where local private enterprise is deemed vital for economic advancement, entrepreneurship education has gained traction as a strategic intervention to address unemployment and stimulate economic growth (Rooke et al., 2011; Brixiova et al., 2015; Oluwatobi et al., 2015). However, the predominant traditional teaching methods in many higher education institutions, characterised by passive learning approaches, often need to be revised to nurture entrepreneurial competencies and innovative thinking among students (Blanton et al., 2006; Gorghiu et al., 2015).
Education is pivotal in laying the foundation for successful entrepreneurship, with entrepreneurial skills acquisition paramount to business success (Martin, 2015; Adanlawo & Magigaba, 2022; Aminu, 2014). Through teaching and learning, students can acquire management skills, business planning acumen, and financial literacy, which are essential for navigating the complexities of entrepreneurship (Ferreras-Garcia et al., 2021; Martin, 2015). Moreover, integrating mentoring, counselling, and networking opportunities into entrepreneurship programs can further enhance students' readiness for entrepreneurial endeavours (Ferreras-Garcia et al., 2021). Despite the challenges posed by the high cost of education, technological advancements have democratised access to educational resources, enabling aspiring entrepreneurs to acquire valuable knowledge and skills online (Undiyaundeye & Otu, 2015). Entrepreneurship education equips students with practical business skills and cultivates an entrepreneurial and managerial mindset essential for seizing opportunities and navigating uncertainties (Alvarez-Risco et al., 2021). Aspiring entrepreneurs access social, human, and financial capital through education, facilitating innovation and fostering business success (Adanlawo & Magigaba, 2021; Sitharam & Hoque, 2016). Moreover, education empowers entrepreneurs to leverage prior knowledge, personal traits, and social networks to effectively identify and exploit entrepreneurial opportunities (Alvarez-Risco et al., 2021; Greblikaite et al., 2016). By adopting a skills-based pedagogy that emphasises experiential learning and problem-solving, higher education institutions can inspire the development and practice of entrepreneurial skills among students, thereby nurturing a new generation of innovative and resilient entrepreneurs (Alvarez-Risco et al., 2021). Theme 4: Integration of Entrepreneurship Education into learning spaces to promote entrepreneurship skills Entrepreneurship education is widely recognised as a pivotal driver of entrepreneurial development, equipping individuals with the knowledge, skills, and attitudes necessary to initiate and sustain successful business ventures (Martin et al., 2013; Liñán et al., 2011). However, while entrepreneurship education holds promise for cultivating a new generation of entrepreneurs, several studies have highlighted discrepancies between the outcomes of such education and graduates' expectations (Oosterbeek et al., 2010; Smith et al., 2006; Solomon & Matlay, 2008; Ajani et al., 2021). To address this gap, it is imperative to delineate the objectives and approaches of entrepreneurship education within the higher education landscape. Firstly, entrepreneurship education should create awareness and impart theoretical knowledge about the various facets of starting and managing a business (Ismail et al., 2021). This entails incorporating modules on business management and skills enhancement to provide students with a foundational understanding of business principles. Additionally, entrepreneurship education should prepare students for self-employment by imparting practical skills for initiating and managing small businesses (Ismail et al., 2021). Practical components such as business plan preparation and project development can instil entrepreneurial confidence and readiness in students (Ferreras-Garcia et al., 2021).
Moreover, entrepreneurship education's teaching and learning process should go beyond imparting knowledge to fostering an entrepreneurial mindset and cultivating essential entrepreneurial attributes (Clark et al., 2021).This involves nurturing risk-taking, creativity, and innovativeness through experiential learning opportunities, role modelling, and exposure to real-world entrepreneurial experiences (Bischoff et al., 2018).Additionally, integrating learning strategies emphasising experiential learning and problem-solving can enhance students' ability to apply theoretical knowledge in practical contexts (Deon, 2017).By prioritising the development of entrepreneurial skills and attributes, entrepreneurship education can empower students to navigate the complexities of the entrepreneurial journey and contribute meaningfully to economic development. Furthermore, the effectiveness of entrepreneurship education hinges on the alignment between teaching methodologies and learning outcomes (Ferreras-Garcia et al., 2021).Emphasising active learning techniques, such as case studies, group discussions, and experiential exercises, can enhance students' engagement and comprehension of entrepreneurial concepts (Clark et al., 2021).Moreover, fostering an entrepreneurial mindset requires shifting from traditional teaching to learning approaches, where students actively engage in problem-solving and critical thinking (Ferreras-Garcia et al., 2021).By adopting innovative pedagogical strategies prioritising experiential learning and skill acquisition, entrepreneurship education can empower students to realise their entrepreneurial aspirations and contribute to economic growth and innovation. Theme 5: Challenges of Entrepreneurship Education Entrepreneurship education originated in the US and Europe and has spread to other regions aiming to develop entrepreneurial attitudes, skills, intentions, and knowledge; however, it has faced several challenges.Matlay (2009), Oosterbeek et al. (2010) and Smith et al. 
(2006) have asserted that entrepreneurship education does not always lead to the intended outcomes and does not develop the knowledge, skills and attitudes required to engender entrepreneurial intention in students. Further, interest and activity in entrepreneurship education among higher-education students is low (Heinonen & Poikkijoki, 2006). Studies have also found that some students have a negative attitude toward entrepreneurship education (Hannon, 2005; Ajani et al., 2021). Some limitations of entrepreneurship education were mentioned in the studies reviewed. According to Honing (2004), many researchers have indicated that more studies are needed to determine EE effectiveness and outcomes. Entrepreneurship education research has received much attention recently based on the benefits that it can bring to individuals and countries; however, Fayolle (2013) and Rideout and Gray (2013) have stated that entrepreneurship education suffers from the lack of a coherent conceptual research framework that is comprehensively grounded in the knowledge of general education and the philosophy of entrepreneurship education. Concerning this point, very few sample studies were based on a mixed-methodology approach. Based on the research reviewed, the topics the researchers have discussed relate to different contexts, with most focusing on entrepreneurship education development and far less on entrepreneurship education policies or the entrepreneurship education gender perspective. Notably, a study conducted by Alakaleek (2019) regarding the level of entrepreneurship education in Jordanian universities stated that, although Jordan had reformed its education system, there remained a gap between the real-world implementation and the formal policies for improving higher education.
Discussion The role of universities in fostering entrepreneurship is paramount in driving economic growth, innovation, and human capital development.As knowledge-intensive institutions, universities contribute significantly to advancing research, education, and knowledge networking, thereby nurturing entrepreneurial culture and business thinking among students (Jessop, 2017;Adanlawo & Chaka, 2022).By actively participating in entrepreneurial discovery processes and serving as connecting institutions within entrepreneurial ecosystems, universities facilitate collaboration and cooperation among various stakeholders, including public and private entities (Wang, 2021).Through collaborations and networks, universities harness resources and expertise to support entrepreneurship education, promoting the growth of entrepreneurial behaviour and fostering innovation clusters (Fuster et al., 2019;Ajani et al., 2023).Moreover, as change agents, universities play a pivotal role in cultivating entrepreneurial attitudes, knowledge, and competencies among students, equipping them with essential skills such as teamwork, leadership, and creativity (Jessop, 2017).Despite the potential of entrepreneurship education to drive economic development and innovation, several challenges hinder its effectiveness in realising intended outcomes.Studies have highlighted discrepancies between entrepreneurship education outcomes and graduates' expectations, with some students exhibiting negative attitudes toward entrepreneurship education (Oosterbeek et al., 2010;Smith et al., 2006;Ajani et al., 2021).Furthermore, the lack of comprehensive research frameworks and mixed-methodology approaches limits our understanding of entrepreneurship education's effectiveness and outcomes (Fayolle, 2013;Rideout & Gray, 2013).Additionally, entrepreneurship education faces challenges related to policy implementation, gender perspectives, and the alignment between formal policies and real-world practices in higher education institutions (Alakaleek, 2019). 
Addressing these challenges requires a multi-faceted approach that emphasises integrating entrepreneurship education into learning spaces, promotes stakeholder collaboration, and fosters an entrepreneurial mindset among students.Universities can enhance students' engagement and comprehension of entrepreneurial concepts by adopting active learning techniques like case studies and experiential exercises (Clark et al., 2021).Moreover, shifting from traditional teaching to learning approaches, where students actively engage in problemsolving and critical thinking, can nurture an entrepreneurial mindset and cultivate essential entrepreneurial attributes (Ferreras-Garcia et al., 2021).Furthermore, fostering collaborations and networks with external stakeholders can provide students with valuable connections, knowledge, and entrepreneurial experiences, enriching entrepreneurship education programs and promoting the growth of entrepreneurial behaviour (Ferreras-Garcia et al., 2021;Webb, 2021).In conclusion, universities are pivotal in promoting entrepreneurship education and driving economic development.However, challenges such as discrepancies in outcomes, limited research frameworks, and policy implementation gaps hinder the effectiveness of entrepreneurship education.Addressing these challenges requires a concerted effort to integrate entrepreneurship education into learning spaces, promote stakeholder collaboration, and foster an entrepreneurial mindset among students.By adopting innovative pedagogical strategies and fostering collaborations with external stakeholders, universities can empower students to realise their entrepreneurial aspirations and contribute meaningfully to economic growth and innovation. Conclusion The significance of entrepreneurship education cannot be ruled out in the recent scenario of the highly competitive market.Moreover, entrepreneurs are reportedly contributing positively to the economic development of the economy by increasing gross production, generating wealth, providing employment opportunities, etc. Universities play a crucial role in the development of entrepreneurial abilities.Universities are crucial in developing the innovative concepts of new enterprises.They provide teaching in entrepreneurship that fosters and develops the attitudes, skills, and competencies necessary for students to engage in an entrepreneurial manner (Peschl et al., 2021).Universities serve as significant institutional determinants for knowledge creation because they provide training and the ability for academics to produce "knowledge innovations" that can be formally or informally passed on to students through teaching and learning.Universities should encourage entrepreneurial behaviour, from raising awareness to inspiring and implementing ideas.They should run the Supporting Entrepreneurship programme, which provides training, teaching, learning, and infrastructural assistance to support and encourage knowledge-based entrepreneurship while preparing students for careers as entrepreneurs after their studies.We conclude that entrepreneurial growth in teaching and learning, regarded as a critical tool for supporting effective entrepreneurial activity, requires specific attention. 
Recommendations As good as entrepreneurship education is for South African students, various challenges have been identified.However, with serious commitment and focus, stakeholders can capacitate students at various higher education institutions with entrepreneurship knowledge and skills.Thus, based on the findings from various reviewed literature sources, this study recommends involving higher education institutions to provide entrepreneurship education to more students across South Africa.Furthermore, entrepreneurship education should be strengthened with practical training, especially hard skills, through collaboration with companies/industries for student internship programmes.In addition, it is also recommended that the teaching material and curriculum be regularly updated to keep students abreast of dynamic information in the economy.Moreover, professional development in entrepreneurship education should be regularly, adequately, and appropriately provided to lecturers/facilitators to enhance their personal skills and provide students with quality and necessary knowledge and skills.Adequate provision of necessary technological gadgets should be provided and used.Starting funds/capital should be provided for students upon their studies to kick off their entrepreneurship activities. Limitations and future research directions Despite the valuable insights gained from the systematic literature review on entrepreneurship education, several limitations and areas for future research warrant consideration.Firstly, the review primarily focused on literature published between 1997 and 2024, potentially overlooking recent developments and emerging trends in entrepreneurship education.Additionally, the review predominantly drew from studies conducted in specific geographic regions, limiting the generalizability of findings to a broader global context.Furthermore, the need for standardised metrics for assessing the effectiveness of entrepreneurship education programmes poses challenges for comparative analysis and evaluation.Future research should address these limitations by incorporating more diverse literature sources, including recent publications and studies from a broader range of geographical contexts.Moreover, there is a need for longitudinal studies to evaluate the long-term impact of entrepreneurship education on graduates' entrepreneurial outcomes and economic development.Additionally, efforts to develop standardised assessment tools and metrics for evaluating entrepreneurship education programmes could facilitate comparative analysis and contribute to evidence-based policy development in this field.Finally, exploring the role of technology and digital platforms in enhancing entrepreneurship education delivery and outcomes represents a promising avenue for future research.
7,569.8
2024-03-23T00:00:00.000
[ "Education", "Business", "Economics" ]
Quantum stochastic resonance in parallel A study of (aperiodic) quantum stochastic resonance (QSR) in parallel is put forward. By doing so, a generally stochastic input signal is fed into an array of parallel dissipative quantum two-level systems (TLS) and its integral response is studied against increasing temperature. The response is quantified by means of an information-theoretic measure provided by the rate of mutual information per element and, in addition, by the cross-correlation between the information-carrying input signal and the output response. For ohmic-like quantum dissipation, both measures exhibit QSR for biased two-level systems. Our prime focus here, however, is on the case with zero asymmetry between the two localized stable states. We then find that the mutual information measure exhibits QSR only for sufficiently strong dissipation (α > 3/2), as measured by the dimensionless ohmic friction strength α. Moreover, the mutual information measure relates QSR within quantum linear response theory to the signal-to-noise ratio (SNR), being independent of the input driving frequencies in this limit. In contrast, the cross-correlation measure connects QSR to a genuine synchronization phenomenon. For a single symmetric TLS, aperiodic QSR is exhibited in the cross-correlation measure for a Gaussian exponentially correlated input signal for α > 1 already. Upon feeding the aperiodic input signal into a parallel array of unbiased TLS's, QSR successively emerges above the critical ohmic dissipation strength α > 1/2 with increasing number n of parallel units. Thus, QSR can occur in parallel despite the fact that it does not occur in each individual, unbiased TLS for α < 1. This paradoxical phenomenon - which can be tested with an array of bistable superconducting quantum interference devices - constitutes a true quantum effect: it is due to the power-law dependence on temperature of the tunnelling rate and the stochastic linearization of quantum fluctuations with increasing number of parallel units.

Introduction The phenomenon of stochastic resonance (SR) constitutes a nonlinear noise-mediated cooperative phenomenon wherein feeble information of a deterministic signal can be enhanced in the presence of an optimal dose of noise. Since its inception in 1981, SR has been demonstrated in numerous systems including bistable elements such as tunnel diodes, superconducting quantum interference devices (SQUIDs), autocatalytic chemical reaction schemes, sensory neurons, or communication devices, to name only a few (the interested reader is referred to the popular reviews [1]-[3], or the comprehensive survey in [4]). Although the basic SR mechanism is by now well understood, there remain a number of challenging unsolved problems. In particular, most of the research thus far predominantly focused on classical stochastic systems. The borderline between the classical world and the quantum domain has been crossed only recently, in order to account for genuine, tunnelling-induced quantum mechanical SR-effects [5]-[9]. Moreover, these few prior studies of quantum stochastic resonance (QSR) have all been restricted to the conventional definition of SR, i.e. to stochastic resonance with a periodic input signal. The subject of aperiodic SR, i.e.
stochastic resonance in the presence of a wide-band random input signal, experiences a flurry of activity in the context of classical neuronal systems. The corresponding response has been quantified either by information-theoretic or by spectral cross-correlation measures [10]-[18].

In this work, our basic challenge is to move from the classical situation and to study the quantum mechanical version of parallel information transfer of an aperiodic input signal [10,14] through a parallel array of bistable quantum systems, being typified by quantum two-level systems (TLS), see figure 1. As such, this study involves an interplay among (i) quantum dissipative dynamics, (ii) information theory aspects and (iii) nonequilibrium statistical mechanics. A certain amount of interdisciplinary knowledge is thus required which will be provided in subsequent units: after having set up our model (section 2) we derive in section 3 the result for the rate of mutual information in arrays of uncoupled TLS's. Aperiodic QSR with respect to the input-output cross-correlation measure is analyzed in section 4. An outlook together with our conclusions is presented in section 5.

Set up of model dynamics In the following we review prominent results of the theory of quantum dissipation [19]-[22] as needed to set up the model dynamics. We consider an array of uncoupled quantum two-level systems which are subjected to a common, generally random classical signal, f(t), of vanishing statistical average. Moreover, each individual TLS is bilinearly coupled to a separate heat bath at a common temperature. The total Hamiltonian for a single TLS element coupled to the bath reads, within the tunnelling or localized representation [19], as given in eq. (1), where the bath operators $b^{+}_{\lambda}$, $b_{\lambda}$ correspond to normal mode oscillators of the thermal bath with frequencies $\omega_{\lambda}$. The operators $\sigma_{z}$, $\sigma_{x}$ denote the usual Pauli matrices. The tunnelling dynamics itself can be characterized by the time-dependent position operator $x(t)=x_{0}\sigma_{z}(t)$. Furthermore, $\hbar\Delta$ in (1) is the tunnelling matrix element between the two lowest-lying energy levels. The effect of the thermal bath is captured by an operator random force $\xi(t)=\sum_{\lambda}\kappa_{\lambda}(b^{+}_{\lambda}e^{i\omega_{\lambda}t}+b_{\lambda}e^{-i\omega_{\lambda}t})$. Due to the inherent Gaussian statistics of the harmonic bath, its statistical properties are determined by a complex-valued autocorrelation function [19]. Here, the spectral density $J(\omega)=(\pi/\hbar)\sum_{\lambda}\kappa^{2}_{\lambda}\,\delta(\omega-\omega_{\lambda})$ of the thermal bath has been introduced, $\langle\ldots\rangle_{\beta}$ denotes the thermal average, wherein $\beta=1/k_{B}T$ denotes the inverse temperature. We assume that $J(\omega)$ acquires an ohmic form, i.e. $J(\omega)=(2\pi\hbar/4x_{0}^{2})\,\alpha\,\omega\,e^{-\omega/\omega_{c}}$. The dissipation parameter α quantifies the dimensionless viscous friction strength and $\omega_{c}$ characterizes the physically relevant exponential cut-off of the spectral density. The driving force f(t) plays the role of an information-carrying input signal. For instance, in the case of SQUIDs the input signal corresponds to an applied magnetic flux variation whilst the output relates to the total magnetic flux [23].
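For readers without access to eq. (1), the following display gives a standard driven ohmic spin-boson form that is consistent with the quantities defined above (tunnelling element $\hbar\Delta$, static bias $\epsilon_{0}$ introduced below, signal coupling $2x_{0}f(t)$, and bath force $\xi$); the precise normalization of the bilinear coupling term depends on conventions that cannot be recovered from this excerpt and is therefore only indicative:
$$H(t)=-\frac{\hbar\Delta}{2}\,\sigma_{x}+\frac{1}{2}\bigl[\epsilon_{0}+2x_{0}f(t)\bigr]\,\sigma_{z}-\frac{1}{2}\,x_{0}\,\sigma_{z}\sum_{\lambda}\kappa_{\lambda}\bigl(b^{+}_{\lambda}+b_{\lambda}\bigr)+\sum_{\lambda}\hbar\omega_{\lambda}\,b^{+}_{\lambda}b_{\lambda}\ .$$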
This two-level approximation for the tunnelling dynamics is well justified at low temperatures $k_{B}T\ll\hbar\omega_{g}$ and for a time-dependent bias $|\epsilon_{0}+2x_{0}f(t)|\ll\hbar\omega_{g}$, where $\hbar\omega_{g}$ measures the energy splitting between the lowest tunnel doublet and the closest higher-lying excited state in the full bistable double well. With SR generically operating in the overdamped regime, we consider the TLS quantum dynamics in the incoherent regime where the population dynamics of the localized states obey a nonstationary Markovian dynamics. This description holds true for ohmic friction at arbitrary temperature if the tunnelling coupling is small, i.e. $\Delta\ll\omega_{c}$, and the coupling to the heat bath is sufficiently strong, α > 1/2 [22]. The approximation in addition covers the regime at smaller dissipation strengths α < 1/2, if only the temperature is sufficiently high, i.e. $k_{B}T\gg\hbar\Delta$ [19,21,22]. As a consequence, the populations of the localized states obey a rate equation with the time-dependent relaxation rates $W_{\pm}(t)$ governed by the golden rule result (4). The functions $Q'(t)$ and $Q''(t)$ in (4) denote the real and imaginary parts of the bath correlation function, respectively [21], where $\hbar\lambda=4x_{0}^{2}\int_{0}^{\infty}d\omega\,J(\omega)/\pi\omega$ denotes the bath reorganization energy [25]. For the situation considered herein, the function Q(t) can be evaluated in closed analytical form to yield (5) [21,27]. In (5), Γ(z) denotes the complex gamma-function, $\omega_{\beta}=k_{B}T/\hbar$, and $\kappa=\omega_{\beta}/\omega_{c}$. Note that in the limit of adiabatic driving varying on a time-scale $\tau_{f}$ such that both $\omega_{c}\tau_{f}\gg 1$ and $\alpha k_{B}T\tau_{f}/\hbar\gg 1$, the time-dependent transition rates $W_{\pm}(t)$ follow the instantaneous value of the bias $\epsilon(t)$. In this case, the relaxation rates $W_{\pm}(t)$ obey the detailed balance condition with respect to the instantaneous bias $\epsilon(t)$. Moreover, at extremely low temperatures, $\pi k_{B}T\ll\hbar\omega_{c}$, and a small bias, $\epsilon_{0}\ll\hbar\omega_{c}$, one arrives from (4), (5) at the well known [19]-[21], [28] analytical approximation (6) for the static relaxation rates $W_{\pm}(\epsilon_{0})$, where Γ(z) is the complex gamma-function. This result is applicable for any value of the viscous friction strength α. It is worth noting that the considered incoherent limit for the tunnel dynamics of driven, dissipative TLS allows for an effective quasiclassical interpretation in terms of a classical random telegraph process. Put differently, the position operator x(t) assumes effectively a classical two-state process $x(t)\to x(t)=\pm x_{0}$. Its transition rates, however, are governed by the quantum expressions in (4); see also appendix 1. As such, the model presents the quantum analogue of the classical aperiodic SR-investigation in [14].

Mutual information We next consider the transfer of information from a random aperiodic classical input signal f(t) through a parallel array consisting of quantum two-level systems as depicted in figure 1. In doing so, we consider the observable $q(t)=\sum_{i=1}^{n}x_{i}(t)$ for the sum of individual TLS responses. Within the considered quasiclassical approximation, any quantum coherence can safely be neglected (incoherent quantum dynamics), and consequently the individual outputs $x_{i}(t)$ and their sum q(t) become classical objects.
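Since each element thus reduces to a classical telegraph process whose flip rates retain the quantum expressions (4)-(6), it is instructive to evaluate those rates numerically. The short Python sketch below uses a commonly quoted form of the static golden-rule rate $W(\epsilon_{0})=W_{+}(\epsilon_{0})+W_{-}(\epsilon_{0})$ valid for $\pi k_{B}T\ll\hbar\omega_{c}$; the prefactor conventions follow standard spin-boson usage and may differ from eq. (6) by numerical factors, so it should be read as illustrative rather than as a transcription of eq. (6). At zero bias it reproduces the quantum power law $W\propto T^{2\alpha-1}$ that underlies the QSR thresholds discussed below.

```python
import numpy as np
from scipy.special import gamma, loggamma

hbar = kB = 1.0  # units with hbar = k_B = 1

def incoherent_rate(T, eps0, alpha, Delta=1e-3, omega_c=1.0):
    """Illustrative static rate W = W_+ + W_-  (assumed form, cf. eq. (6)):
    W_-/+ ~ (Delta**2/(4*omega_c)) * (hbar*beta*omega_c/(2*pi))**(1 - 2*alpha)
            * exp(+-beta*eps0/2) * |Gamma(alpha + i*beta*eps0/(2*pi))|**2 / Gamma(2*alpha)."""
    beta = 1.0 / (kB * T)
    z = alpha + 1j * beta * eps0 / (2.0 * np.pi)
    abs_gamma_sq = np.exp(2.0 * np.real(loggamma(z)))          # |Gamma(z)|**2
    prefac = (Delta**2 / (4.0 * omega_c)) * (hbar * beta * omega_c / (2.0 * np.pi))**(1.0 - 2.0 * alpha)
    return 2.0 * prefac * np.cosh(beta * eps0 / 2.0) * abs_gamma_sq / gamma(2.0 * alpha)

# zero-bias case: the rate grows with temperature as T**(2*alpha - 1)
for alpha in (0.75, 1.0, 1.44, 2.0):
    T1, T2 = 0.01, 0.02
    slope = np.log(incoherent_rate(T2, 0.0, alpha) / incoherent_rate(T1, 0.0, alpha)) / np.log(T2 / T1)
    print(f"alpha = {alpha:4.2f}:  d ln W / d ln T = {slope:5.2f}   (expected {2*alpha - 1:4.2f})")
```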
Our focus here concerns the rate of mutual information between the summed output q(t) and the aperiodic input signal f(t). The average amount of mutual information per unit time [29] (or the transinformation rate) between two continuous-time random processes f(t) and q(t), with $t\in[0,T]$, is defined by the double functional integral (8) [30], where $P[f(t),q(t)]$ denotes the joint probability density functional for the random processes q(t) and f(t); $P[f(t)]$ is the a priori given probability density functional of the signal, and $P[q(t)]=\int Df(t)\,P[f(t),q(t)]$ is the probability density functional of the output. Moreover, depending on the base a of the logarithm in (8) the transinformation rate $\bar{I}$ is measured in binary units, bits/s (a = 2), natural units, nats/s (a = e), or digits/s (a = 10).

In the absence of any external driving, the integral output signal q(t) can be described as a sum of identical independent random telegraph noises with signal-dependent transition rates (4). Note that due to the common signal f(t), an array of initially uncoupled TLS's becomes statistically dependent through the common input information f(t). In view of the subadditivity of the mutual information one finds that $\bar{I}(q:f)/n\leq\bar{I}_{0}(x:f)$, where $\bar{I}_{0}(x:f)$ is the rate of mutual information for a single TLS element. Thus, the average amount of mutual information per element cannot exceed the one for a single element $\bar{I}_{0}$. The focus of this work is on the situation with many elements. Then, in the absence of the information signal f(t) we can invoke the central limit theorem to treat the output signal q(t) approximately as a Gaussian process. The information-carrying signal f(t) is assumed to be approximately a Gaussian process as well. Then, by addressing mainly the case of weak signals f(t), it follows that the integral output q(t) is approximated by Gaussian statistics as well. Consequently, the rate of mutual information between q(t) and f(t) is governed by a nontrivial result due to Pinsker [31] for the transinformation rate between two stationary Gaussian processes [32], reading (10), where the so-termed coherence function is expressed in terms of $S_{qf}(\omega)$ and $S_{qq}(\omega)$, the cross-spectral power density and the output spectral power density, respectively. For weak adiabatic driving f(t), one obtains - in close analogy to the classical case [14] - from quantum linear response theory, cf appendix, the result (12), where χ(ω) is the linear susceptibility of a single TLS. Equation (12) allows one to recast (10) into the more familiar form (13) [33]. Upon close inspection, (13) just coincides with the celebrated Shannon formula for the rate of transinformation across a Gaussian, memory-free channel [29]. Here, Shannon's formula is applied to the filtered Gaussian signal $s(t)=n\int_{-\infty}^{t}\chi(t-\tau)f(\tau)\,d\tau$. By use of the rms amplitude $A_{0}$, i.e. $A_{0}^{2}=\int_{-\infty}^{\infty}S_{ff}(\omega)\,d\omega/2\pi$, one can recast (13) by use of (A.7) into the appealing form (14), where SNR(ω) is the signal-to-noise ratio (SNR) for a single TLS driven by a weak periodic input perturbation at angular frequency ω. In the considered situation of weak adiabatic signals SNR(ω) does not depend on ω, cf (A.8). Therefore, by use of the inequality $\log_{a}(1+x)\leq x/\ln a$, we obtain the upper bound $C_{n}$ for the transinformation rate. As a result, we find that the maximal achievable amount of information being transmitted by the parallel array is approximately determined by the conventional signal-to-noise ratio, independently of the
spectrum of the information-carrying input signal. To gain further insight, we next model the stationary Gaussian input signal f(t) by an exponentially correlated process (a so-called Ornstein-Uhlenbeck process) with zero average and exponentially decaying autocorrelation. With the decay rate γ obeying both $\gamma\ll\omega_{c}$ and $\gamma\ll\alpha k_{B}T/\hbar$, we stay within the regime of adiabatic driving. The signal's power spectrum is clearly of Lorentzian shape, cf. (17). Upon combining (17) and (A.8) in (14) we arrive at the main result (18). Note that for input signals of small bandwidth, $\gamma\ll n\,\mathrm{SNR}$, the transinformation rate becomes proportional to both the square root of SNR and to the square root of the signal bandwidth γ. The maximal (versus temperature) transinformation rate consequently coincides with the maximum of the signal-to-noise ratio measure. The position of this maximum $T_{\bar{I}_{\max}}$ depends neither on the signal bandwidth γ, nor on the number n of elements in the parallel array. We also observe that no saturation in the temperature dependence of the information flow - the so-called 'stochastic resonance without tuning' [10] - occurs at n → ∞. Moreover, with increasing bandwidth γ the rate of mutual information (18) increases monotonically and achieves the upper boundary $C_{n}$ at $\gamma\gg n\,\mathrm{SNR}$. Thus, the quantity $C_{n}$ provides the informational capacity [29] of the whole array. As a consequence of this analysis, the conditions for occurrence of aperiodic QSR - being quantified by the mutual information transmission - are essentially identical to those for conventional QSR, being quantified by the SNR-measure [1]-[4]. By use of the rate expression in (6) and the result for SNR in (A.8), its temperature dependence is readily determined. Thus, the mutual information per unit time does not exhibit QSR in unbiased systems (i.e. for $\epsilon_{0}=0$) within the low-temperature approximation (6); in this regime, QSR requires a finite bias $\epsilon_{0}\neq 0$. However, QSR does also occur for unbiased systems if α > 3/2. In this case, it is necessary to go beyond the low temperature approximation in (6) by using the full result for the incoherent quantum rates in (4) and (5). This is in agreement with conventional QSR, as shown previously in [35]. The results for the transinformation rate per element are depicted in figure 2 for a large ensemble (n = 10^3) of unbiased parallel TLS as a function of differing bandwidths γ for a viscous friction strength of α = 2. The solid line shows the result for the averaged informational capacity $C_{n}/n$: this limit is approached rather quickly (γ/ω_c > 10^{-8}) as the bandwidth parameter increases. The maximal value assumed by $\bar{I}/n$ increases monotonically with increasing γ, cf (18), and saturates at the value of SNR/(2π ln 2) for a single bistable element. Most importantly, the transinformation rate for aperiodic, parallel QSR connects this phenomenon with conventional QSR for a single unit as characterized by the SNR measure. Because the maximum position at $T_{\bar{I}_{\max}}$ does not depend on the bandwidth parameter γ, the transinformation measure does not characterize parallel (aperiodic) QSR as a synchronization phenomenon.
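The scaling statements above can be checked with a small numerical sketch. The closed form used below is a hedged reconstruction of eq. (18): it assumes that the argument of Shannon's formula (14) is $n\,\mathrm{SNR}\,S_{ff}(\omega)/(\pi A_{0}^{2})$ with the Lorentzian spectrum (17), which reproduces both quoted limits, namely the $\sqrt{\gamma\,n\,\mathrm{SNR}}$ behaviour at small bandwidth and the saturation of $\bar{I}/n$ at $\mathrm{SNR}/(2\pi\ln 2)$; the exact prefactors of eq. (18) should nevertheless be checked against the original.

```python
import numpy as np

def transinfo_rate(n, snr, gamma):
    """Assumed closed form of the transinformation rate (bits per unit time), obtained by
    integrating log2[1 + n*SNR*S_ff(w)/(pi*A0**2)] over frequency for the Lorentzian
    spectrum S_ff(w) = 2*A0**2*gamma/(gamma**2 + w**2)."""
    return (np.sqrt(gamma**2 + 2.0 * gamma * n * snr / np.pi) - gamma) / (2.0 * np.log(2.0))

def transinfo_rate_numeric(n, snr, gamma, cutoff=1e5, npts=400_001):
    """Direct numerical evaluation of the same frequency integral, as a consistency check."""
    w = np.linspace(-cutoff * gamma, cutoff * gamma, npts)
    integrand = np.log2(1.0 + (2.0 * gamma * n * snr / np.pi) / (gamma**2 + w**2))
    return np.sum(integrand) * (w[1] - w[0]) / (4.0 * np.pi)

n, snr = 1_000, 0.05            # array size and (frequency-independent) single-TLS SNR
print("numeric check:", transinfo_rate_numeric(n, snr, 1.0), "vs", transinfo_rate(n, snr, 1.0))
for gamma in (1e-4, 1e-2, 1.0, 1e2):
    print(f"gamma = {gamma:8.1e}:  I/n = {transinfo_rate(n, snr, gamma)/n:9.3e}"
          f"   [capacity bound SNR/(2*pi*ln 2) = {snr/(2*np.pi*np.log(2)):.3e}]")
# narrow band (gamma << n*SNR): I ~ sqrt(gamma*n*SNR); broad band: I/n -> SNR/(2*pi*ln 2)
```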
Aperiodic QSR as synchronization phenomenon In search of a quantification of aperiodic QSR as a synchronization phenomenon, we consider the cross-correlation coefficient (20) [10,14,17,18]. It is worth recalling that conventional aperiodic classical SR was originally introduced for the FitzHugh-Nagumo model of neuronal dynamics [10]. For this model, it was shown that the cross-correlation coefficient ρ and the rate of mutual information Ī provide equivalent measures. As we show below, however, for the case of aperiodic QSR in parallel these two measures no longer provide the same information, but behave instead rather distinctly. Within quantum linear response theory, the application of equations (12), (17), (A.5), and (20) yields for the cross-correlation coefficient the result (21), where c(T) = x₀A₀/k_B T cosh(ε₀/2k_B T) and W(ε₀) := W₊(ε₀) + W₋(ε₀).

Aperiodic QSR for a single element Note that (21) is valid also for the case n = 1, ρ₁ := ρ, i.e. for QSR in a single element. The corresponding result can be simplified upon noting that c(T) ≪ 1, yielding (22). With the focus on unbiased TLS's, the analysis of (22) shows that aperiodic QSR for the cross-correlation measure already occurs for α > 1. Therefore, with 1 < α < 3/2 the input-output cross-correlations can be optimized by applying an appropriate dose of thermal noise, whilst for the mutual information measure QSR only occurs for α > 3/2. The maximal value for ρ is assumed at a temperature fixed by (23). The substitution of (23) into (6) yields the relation (24). This result expresses the condition of an approximate matching between the time scales of (incoherent) tunnelling events and the autocorrelation time of the input signal at maximal cross-correlation. Thus, we indeed find that the cross-correlation coefficient ρ characterizes aperiodic QSR as a genuine synchronization phenomenon! The corresponding bell-shaped aperiodic QSR behaviour is depicted in figure 3 for a dissipative strength of α = 1.44; this specific value is of relevance for the experimentally observed SQUID dynamics investigated in [36] in the absence of driving. Naturally, it is expected that this novel aperiodic QSR phenomenon can be verified experimentally as well. Note that for this value of ohmic dissipative strength no maximum for the mutual information rate occurs. Moreover, the cross-correlation measure for synchronization is an increasing function of decreasing bandwidth strength γ; see figure 3. This latter result is in accordance with conventional (periodic) SR, where the maximum of spectral amplification increases with decreasing driving frequency for a periodic input signal [37].
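For orientation, the cross-correlation coefficient invoked in (20) is, in the classical aperiodic-SR literature cited above, the normalized zero-lag covariance of input and output; the generic form below is a reconstruction of that definition only and does not reproduce the prefactors entering the specific results (21) and (22):

\rho_n \;=\; \frac{\langle \delta q(t)\,\delta f(t)\rangle}{\sqrt{\langle \delta q^{2}(t)\rangle\,\langle \delta f^{2}(t)\rangle}},
\qquad
\delta q = q-\langle q\rangle,\quad \delta f = f-\langle f\rangle,

with the array output q(t) = Σ_{i=1}^{n} x_i(t) and the averages taken over the stationary joint statistics.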
Parallel aperiodic QSR The case of a large ensemble of parallel units, cf. figure 1, with n ≫ 1 is even more striking. Then, upon combining (6) with (21), we find that QSR in the cross-correlation measure emerges already for α > 1/2. Put differently, a large ensemble of identical, independent, unbiased TLS's is able to exhibit QSR whilst a single element does not. This paradoxical result is depicted in figure 4 for the case with α = 0.9. The bottom curve in the figure depicts the result for a single symmetric TLS, for which, in agreement with the previous analysis, no QSR occurs. The QSR phenomenon successively emerges with an increasing number of parallel units. This surprising phenomenon is rooted in the diminishing role of the internal, individual fluctuations of x_i(t) in a large ensemble of parallel elements, cf. (12). The phenomenon is due to a combination of this fact together with the power-law dependence on temperature of the incoherent quantum rates in (6); as such, the effect is of genuine quantum origin. Next we consider the limit n → ∞ in (21), given by (25). Using the result for the quantum rate in (6), we find that in this limit the cross-correlation ρ increases monotonically with increasing temperature for α > 1/2, until ρ reaches its maximal value ρ ≈ 1 at W(ε₀) ≫ γ. This behaviour of growing cross-correlation with increasing temperature, which saturates at a large noise dose, has been termed in the literature 'SR without tuning' [10]; it can be explained in terms of a 'stochastic linearization' [14,17] as n → ∞.

Summary and conclusions The main aim of this work has been the investigation of quantum stochastic resonance through an array of independent, parallel quantum two-level systems. Before concluding, it may be useful to recapitulate our main ideas, the assumptions involved, and the main findings. Our idea has been to investigate the transduction of information for a generally aperiodic (stochastic) input signal through an array of parallel bistable quantum systems, all being in contact with an identical thermal environment, which we modelled in terms of ohmic-like, dissipative two-level systems. We then proceeded by applying the Shannon-type expression (13) for the rate of transinformation. The main assumptions used in this work, valid in many practical situations, are: (i) use of incoherent quantum dynamics for the individual TLS's, (ii) weak adiabatic signals f(t) with Gaussian statistics, and, for the case of QSR in parallel, (iii) a large number n of elements in the array.
Explicit findings have been obtained for stochastic signals from an exponentially correlated Gaussian process, such as the insightful result (18) for the rate of mutual information. This result demonstrates unambiguously that the rate of mutual information is determined by the conventional SNR for a single element with external cosinusoidal driving. This statement is also valid for classical systems. Hence, we have established a universal connection between SR in parallel and the SNR characterization of conventional SR. Because the maximum position of the rate of transinformation does not depend on the characteristic time scale of the input signal, this measure does not quantify QSR as a synchronization effect. Moreover, the averaged amount of transinformation per element per unit time, Ī/n, is generally less than that of a single element, Ī₀. The main reason for this behaviour is that an array of TLS's which are independent in the absence of the input signal becomes statistically dependent when a common signal is present; the theoretical maximum Ī_max/n ≈ Ī₀ is assumed only when the elements in the array become completely uncorrelated. Therefore, the introduction of additional mutual coupling among the TLS's will only result in a further deterioration of the mutual information between input and output.

In contrast, the cross-correlation measure ρ_n indeed characterizes QSR as a synchronization phenomenon. For weak adiabatic Ornstein-Uhlenbeck signals, it was demonstrated that the input-output cross-correlation can be optimized by a corresponding dose of thermal noise in a single symmetric TLS if α > 1. The study of the cross-correlation for QSR in a parallel array revealed a new, paradoxical phenomenon: the appearance of QSR in ensembles of independent elements which individually do not display QSR, cf. figure 4. This surprising behaviour is the result of a synergetic interplay between classical stochastic linearization [17,14,16] and the inherent power-law dependence on temperature of the quantum rates. By generalizing the experimental setup used in [38,39] for detecting SR in a single SQUID element, this result can possibly be examined by use of a parallel array of SQUIDs of the type put forward recently by Wernsdorfer et al [40].

In conclusion, we can assert that, within the theme of aperiodic (quantum) stochastic resonance, the measure of mutual information takes over the role of the SNR, whilst the cross-correlation coefficient takes over the role of the spectral amplification measure [37]. Both the cross-correlation coefficient and the spectral amplification characterize QSR as a genuine noise-optimized, averaged synchronization measure. Moreover, our novel findings for aperiodic QSR in single elements and in parallel arrays are expected to become experimentally observable in mesoscopic bistable quantum systems such as the tunnelling of magnetic flux in rf-driven SQUIDs [36,38,39], the tunnelling of impurities in mesoscopic bismuth wires [41], or proton-transferring molecular complexes, as well as in parallel arrangements of such systems. Likewise, the results herein may also be of importance when nature optimizes electron transfer reactions due to nonequilibrium noise influences in biological complexes.
Figure 1. Quantum stochastic resonance in parallel: a generally stochastic input information signal is fed into a parallel array of uncoupled bistable, dissipative quantum systems, modelled by two-level systems. The output information q(t) corresponds to the combined sum of the individual TLS responses. Note that the two-level systems become mutually dependent via their common input signal f(t).

Figure 2. Average rate of mutual information Ī/n plotted against the scaled temperature in an array of n = 10³ symmetric TLS's. The different curves correspond to differing signal bandwidths γ of a Gaussian, exponentially correlated input signal f(t) (Ornstein-Uhlenbeck process). The dimensionless parameter values used are: friction strength α = 2.0, tunnelling coupling Δ = 10⁻⁴ ω_c, and strength of the input signal variance x₀A₀ = 10⁻² ℏω_c. The solid curve compares these findings against the channel informational capacity per element, C_n/n; see text.

Figure 3. Aperiodic quantum stochastic resonance as a synchronization phenomenon: the cross-correlation measure ρ for a single unbiased TLS unit is depicted versus the scaled temperature for differing bandwidth parameters γ of an Ornstein-Uhlenbeck input signal at an ohmic friction strength of α = 1.44; its maximal value now exhibits a distinct dependence on the chosen value of the inverse noise correlation time γ. The remaining parameter values are: Δ = 10⁻⁴ ω_c, x₀A₀ = 10⁻² ℏω_c.

Figure 4. Parallel aperiodic quantum stochastic resonance: the cross-correlation measure between a stochastic Ornstein-Uhlenbeck input signal and the integral output in arrays containing a differing number of elements n is depicted versus the scaled temperature. The corresponding parallel arrays are composed of symmetric dissipative TLS's at an ohmic friction strength α = 0.9, with Δ = 10⁻⁴ ω_c, γ = 10⁻⁹ ω_c, and x₀A₀ = 10⁻³ ℏω_c. While no QSR occurs for a single TLS unit, an increasing number of elements n in the parallel array provides the stochastic resonance effect.
5,940.2
1999-08-27T00:00:00.000
[ "Physics" ]
Targeting myeloid-derived suppressor cells for cancer therapy The emergence and clinical application of immunotherapy is considered a promising breakthrough in cancer treatment. According to the literature, immune checkpoint blockade (ICB) has achieved positive clinical responses in different cancer types, although its clinical efficacy remains limited in some patients. The main obstacle to inducing effective antitumor immune responses with ICB is the development of an immunosuppressive tumor microenvironment. Myeloid-derived suppressor cells (MDSCs), as major immune cells that mediate tumor immunosuppression, are intimately involved in regulating the resistance of cancer patients to ICB therapy and to clinical cancer staging and prognosis. Therefore, a combined treatment strategy using MDSC inhibitors and ICB has been proposed and continually improved. This article discusses the immunosuppressive mechanism, clinical significance, and visualization methods of MDSCs. More importantly, it describes current research progress on compounds targeting MDSCs to enhance the antitumor efficacy of ICB. Introduction As cancer progresses, a complex tumor microenvironment (TME) is gradually formed through the interaction of immune cells infiltrating the tumor tissue with cancer cells 1 . To cope with the pathological changes in the body, the immune system activates effectors that exert antitumor effects 2 . However, myeloid-derived suppressor cells (MDSCs), regulatory T cells (Tregs), and other immunosuppressive cells are induced by tumors and antagonize the effector function of cytotoxic T lymphocytes (CTLs), thereby preventing their infiltration into tumor tissues 3,4 . Moreover, the inhibitory immune checkpoint molecules programmed death 1 (PD1) and cytotoxic T lymphocyte-associated antigen 4 (CTLA4), among other molecules expressed on T cells, bind their ligands and subsequently participate in mediating T cell inhibition 5 . These inhibitory reactions impair T cell activation and are considered the main mechanisms promoting tumor progression and immune escape 6 . Because the immune system has plasticity and can be reprogramed to exert antitumor effects, several immunotherapies emerged and quickly received widespread attention 7 . Immunotherapies, including monoclonal antibody (mAb) therapy and immune cell immunotherapy, have shown potentially beneficial results in clinical applications 8,9 . Phase III clinical trials with immune checkpoint blockade (ICB) therapies, including anti-PD1/PD-L1 or anti-CTLA4 mAbs, have shown positive antitumor activity and prolonged overall survival in the treatment of various solid and hematological malignancies 10 . However, some cancer patients show little or no response to ICB therapy 11,12 . The efficacy of immunotherapy depends on enhancing effective CTL responses to tumorassociated antigens, and patients with low immunogenic tumors may lack sufficient preexisting tumor-infiltrating lymphocytes (TILs), thus resulting in a limited response to ICB therapy 13,14 . Therefore, an urgent need remains to improve the responses of patients with various types of cancer to ICB therapy and to enhance its antitumor efficacy. Relieving immunosuppression, a typical feature of the TME 15 , is a potent strategy to enhance the efficacy of ICB therapy. 
Among the immunosuppressive cells infiltrating into tumors, MDSCs are significantly associated with intratumoral immunosuppression through multiple mechanisms, and their inhibitory activity is closely involved in disease progression and poor prognosis in cancer patients 16,17 . Vigorously amplified and activated MDSCs in cancer patients lead to T cell suppression and impair the antitumor immune response 18 . In addition, cancer tissues with high MDSC infiltration have been shown to be associated with patient resistance to various immunotherapies 15 . The aim of ICB in cancer patients is to reverse the immunosuppressive signals in the TME 19 , and the levels of MDSCs in cancer patients may be a prerequisite for initiating ICB therapy 20 . Therefore, the development of combined immunotherapies that target MDSC-mediated immunosuppressive pathways to improve the antitumor efficacy of ICB therapy has very broad prospects 21 . Given the gradually increasing clinical application of combined treatments, comprehensive and updated literature reviews on the current combination of MDSC inhibition and ICB are lacking. In this review, we describe the immunosuppressive properties, clinical value, and visualization methods of MDSCs. Importantly, we focus on strategies for targeting MDSCs and compounds combined with ICB therapy. In tumor tissues, M-MDSCs dominate the MDSC classification, whereas in peripheral lymphoid organs, the proportion of PMN-MDSCs is much greater than that of M-MDSCs 25 . M-MDSCs mainly upregulate the expression of arginase 1 (ARG1), inducible nitric oxide synthase (iNOS) and transforming growth factor β (TGFβ), thus causing nonspecific T cell inactivation 26 . PMN-MDSCs produce excessive reactive oxygen species (ROS) and reactive nitrogen species (RNS), which cause effector T cells to lose their response to antigen-specific stimulation but retain their ability to respond to nonspecific stimulation 27 . Therefore, the inhibitory characteristics of the MDSCs present in tumors and peripheral lymphoid organs differ. In addition, owing to the biochemical and functional heterogeneity of MDSCs, differences exist in the MDSC phenotypes present in different cancer types, and these phenotypes change with the cancer environmental conditions 28 . In most types of cancer, PMN-MDSCs account for more than 80% of all MDSCs 23 . Immunosuppressive function of MDSCs Under physiological conditions, only a small number of MDSCs exist in the circulation and participate in regulating tissue repair and immune responses 15 . In the tumor-driven microenvironment, the population of MDSCs is greatly expanded by the induction of tumor-derived growth factors and proinflammatory cytokines derived from the tumor stroma 29 . Activated MDSCs then induce anergy in effector T cells, thus impairing the innate and adaptive immune responses through multiple mechanisms (Figure 1). Enrichment in MDSCs results in excessive consumption of L-arginine and L-cysteine-amino acids necessary for T cell proliferation and activation 30,31 . After stimulation with cytokines including interferon γ (IFN-γ), interleukin 10 (IL-10), and tumor necrosis factor β (TNF-β), MDSCs overexpress ARG1 and consequently consume L-arginine 32 . MDSCs express cystine-glutamate transporters (Xc − ) and compete with antigen-presenting cells for the uptake of extracellular L-cysteine, thereby preventing T cells from importing L-cysteine 31 . 
In addition, the elimination of these necessary amino acids decreases T cell-CD3ζ, interferes with T cell Janus kinase/signal transduction and transcription activator (JAK/ STAT) signaling proteins and inhibits MHC class II molecules, thereby suppressing T cells 32,33 . L-Tryptophan is also an important amino acid for T cell function 34 . MDSCs express indoleamine-2,3-dioxygenase (IDO), which consumes L-tryptophan, and the resultant metabolite kynurenine diminishes the activity of T cells and natural killer (NK) cells and increases Treg production 35,36 . IDO helps tumors evade immune surveillance by depleting tryptophan in the TME and induces immunosuppressive responses by inhibiting the functions of CTLs 37 . These disordered T cells secrete IFN-γ, thus further increasing the expression of IDO and perpetuating the immunosuppressive cycle 38 . Oxidative stress mediated by ROS and RNS is a crucial mechanism in MDSC-mediated immunosuppression 33 . The NADPH oxidase (NOX) complex, composed of S100A8, S100A9, and gp91phox, regulates ROS generation by MDSCs. High ROS levels and their interaction with nitric oxide (NO) contribute to the formation of strong biological RNS such as peroxynitrite (ONOO − ) through the regulation of ARG1, NOX, and iNOS2 27,33 . Tumor-associated myeloid cells release ROS such as hydrogen peroxide (H 2 O 2 ), which mediates the loss of the TCR ζ-chain and consequently inhibits T cell activation 33 . RNS cause the nitration or nitrosylation of CC-chemokine ligand 2 (CCL2), thus inhibiting the infiltration of TILs into the tumor core 39 . More importantly, the generation of ROS and RNS by MDSCs eliminates the ability of CD8 + T cells to bind peptide-MHC complexes and weakens the antigen-specific responses of peripheral CD8 + T cells by modifying TCR and CD8 molecules 27 . In addition to the main mechanisms described above, MDSCs inhibit T cells through other means. MDSCs decrease the L-selectin levels on effector T cells through the plasma membrane expression of a disintegrin and metalloproteinase 17 (ADAM17), thus preventing T cells from homing to tumors or lymph nodes 40 . MDSCs accumulating at tumor sites are exposed to a hypoxic and inflammatory microenvironment 25 . Figure 1 Mechanisms of MDSC-mediated immunosuppression. In healthy individuals, hematopoietic stem cells (HSCs) differentiate into immature myeloid cells (IMCs); IMCs can differentiate into granulocytes or monocytes, or further differentiate into mature macrophages or dendritic cells (DCs). In cancer patients, the maturation process of IMCs is disrupted, thus resulting in a dramatic expansion of the MDSC population. The activated MDSCs in the TME (i) express ARG1 and Xc − , thus depriving T cells of L-arginine and L-cysteine, which are essential for proliferation and activation; (ii) consume L-tryptophan by expressing IDO, thus inhibiting the activity of T cells and NK cells and increasing Treg production; (iii) release ROS and RNS, which mediate loss of the TCR ζ-chain and the nitration or nitrosylation of TCR signaling complex components and CCL2; (iv) express ADAM17, which cleaves CD62L, thereby preventing naive T cells from migrating to tumors or lymph nodes and subsequently forming effector T cells; (v) express PD-L1, which binds PD1 on T cells; and (vi) secrete IL-10 and TGF-β, which stimulate Treg activation and expansion. 
Hypoxia-inducible factor-1α (HIF-1α) has been reported to participate in the regulation of high levels of PD-L1 expressed by MDSCs; PD-L1 binds PD1 and subsequently induces T cell failure 41 . Furthermore, MDSCs stimulate the activation and expansion of Tregs by secreting IL-10 and TGF-β 42 . Excessive Treg infiltration in the TME promotes tumor progression and is associated with poor prognosis in cancer patients 43 . Clinical importance and visualization of MDSCs The prognostic value of MDSCs Compared with those in healthy individuals, circulating MDSCs in cancer patients in all stages are significantly elevated, and MDSC levels are strongly associated with the clinical cancer stage and metastatic tumor burden 44 . MDSCs promote tumor angiogenesis 45,46 and can induce epithelial-mesenchymal transition and cancer cell stemness or cancer stem cell expansion, thereby promoting metastasis 47 . Before tumor cells reach premetastatic sites, MDSCs significantly decrease IFN-γ levels; increase proinflammatory cytokine and matrix metalloproteinase 9 (MMP9) production; and promote vascular remodeling, thereby forming an inflammatory and immunosuppressive environment 48 . MDSCs have also been reported to interfere with the senescence-related secretory phenotypes of tumors through secreting interleukin-1 receptor antagonists (IL-1Rα), thereby antagonizing tumor cell senescence 49 . On the basis of this evidence, high levels of MDSCs in cancer patients predict poor prognosis. The influence of MDSCs on treatment effects In cancer patients, expanded MDSC populations and immunosuppressive states arise by the time at which precancerous lesions are present and gradually become aggravated with tumor progression 50 . However, few effector T cells are found in preinvasive lesions, owing to MDSC infiltration into tumors in a mutually exclusive manner 51 . Therefore, blocking MDSCs early in the course of immunotherapy is important. Patients diagnosed with non-small cell lung cancer who cannot be treated surgically have lower overall survival if they have high M-MDSC levels in the peripheral blood before receiving chemotherapy 52 . In addition, among patients with pancreatic adenocarcinoma undergoing chemotherapy, those with progressive disease have clearly higher MDSC levels in the peripheral blood than those with stable disease 53 . Studies have shown that low baseline percentages of peripheral MDSCs before ICB therapy or their decrease during treatment indicates positive outcomes 54 . Thus, effective detection of the dynamic distribution of MDSCs in vivo would provide favorable information for evaluating cancer patients' responses to various therapies, as well as their prognoses. Methods for visualizing MDSCs Traditional MDSC detection is mainly dependent on measurements in vitro or invasive methods 50 . Because the imaging of S100A8/A9 released by MDSCs reflects the abundance of MDSCs in premetastatic sites and the establishment of an immunosuppressive environment, antibody-based single-photon emission computed tomography has been applied to detect S100A8/A9 in vivo and has made substantial progress 55 . A premodified CD11b-specific mAb has been used to radiolabel PMN-and M-MDSCs, and positron emission tomography (PET) imaging has subsequently been used to noninvasively and quantitatively monitor the migration of MDSCs in multiple cancer types 56 . Moreover, some RNA aptamers that specifically recognize tumor-infiltrating MDSCs have been identified. 
These aptamers can be used not only to detect MDSCs but also to be conjugated to chemotherapeutic drugs, thereby improving antitumor efficacy and the targeted delivery of drugs to the TME 57 . Near-infrared II (NIR-II) fluorescence imaging overcomes the barriers of penetration/contrast in the field of visible imaging 58 ; on the basis of this technology, NIR-IIa and NIR-IIb PbS/CdS quantum dot-based nanoprobes conjugated with 2 MDSC-specific antibodies have been created to target MDSCs in vivo 59 . These nanoprobes can clearly reveal the real-time dynamic distribution of MDSCs in cancer patients in a non-traumatic manner through the colocalization of two-color fluorescence; therefore, they have important clinical value for the evaluation of MDSC-targeted immunotherapy. Strategies and compounds for targeting MDSCs MDSCs promote tumor progression and metastasis and contribute to tumor immune escape through a variety of mechanisms. The accumulation of MDSCs with substantial immunosuppressive activity in tumor tissues is associated with the resistance of cancer patients to multiple immunotherapies and with poor prognosis 47 . Strategies for targeting MDSCs to improve the antitumor effects of immunotherapies have sparked widespread interest and have made positive progress. As shown in Figure 2, these strategies comprise those that (1) prevent the recruitment of MDSCs; (2) promote the differentiation of MDSCs into mature cells; (3) deplete MDSCs in the circulation and the tumor; (4) inhibit the elimination of L-arginine mediated by MDSCs; (5) inhibit the activation of IDO in MDSCs; and (6) decrease the levels of ROS and RNS in MDSCs. We have selected several compounds reported in recent years that affect MDSCs through these pathways. The results are divided into 6 categories according to the strategies used to target MDSCs (Table 1). Preventing the recruitment of MDSCs The factors that control the recruitment of MDSCs to tumors are essentially the same as those that regulate the migration of monocytes and neutrophils 26 . The absence of peripheral proliferation by MDSC subgroups suggests that targeting the chemokine system to inhibit the migration of MDSCs to tumors is a potential therapeutic method for eliminating MDSCs from tumor tissues 60 . MDSCs are recruited to tumor sites through GM-CSF; monocyte chemoattractant protein 1 (MCP1); CXCL1, 2, 5, and 12; IL-8; CSF1; prokineticin 2 (Prok2); CCL2; S100A8/9; and other factors derived from the TME 47,61,62 . In addition, the hypoxic environment at the primary tumor site induces the production of growth factors and cytokines that recruit MDSCs and decrease the cytotoxic effector functions of NK cell populations, thereby creating a premetastatic niche 63 . In pancreatic ductal adenocarcinoma treated with CXCR2 signaling inhibitors, the migration of MDSCs and the formation of a niche at the metastatic site are significantly suppressed 64 . The phosphatidylinositol 3-kinase (PI3K) isoform p110γ is required for the integrin α4β1-mediated adhesion of myeloid cells. After activation, p110γ-dependent α4β1 recruits myeloid cells to tumor sites. In mice with breast carcinoma treated with the p110γ inhibitor TG100-115, MDSC recruitment is significantly decreased 65 .
Promoting the differentiation of MDSCs into mature cells In the tumor-driven microenvironment, proinflammatory cytokines such as prostaglandin-E2 (PG-E2), macrophage colony-stimulating factor (M-CSF), granulocyte colony-stimulating factor (G-CSF), GM-CSF, stem cell factor (SCF), and vascular endothelial growth factor (VEGF) are involved in the induction of chronic inflammation in cancer 26 . The signaling pathways in MDSCs triggered by cytokines converge on members of the JAK protein family and STAT3, thereby participating in the regulation of cell proliferation and differentiation 22 . STAT3 suppresses dendritic cell (DC) differentiation and increases MDSC accumulation and activation by upregulating the expression of S100A8 and S100A9, the transcription factor CCAAT-enhancer-binding protein β (C/EBPβ), interferon regulatory factor 8 (IRF8), and other important proteins 33,66 . STAT3 also upregulates NOX2 components and increases ROS levels, thereby enhancing MDSC inhibition 67 . In addition, the interaction of T cell immunoglobulin 3 (TIM3) on T cells and galectin 9 (Gal9) on MDSC precursors promotes MDSC proliferation and inhibits T cell responses 68 . Therefore, targeting proinflammatory cytokines to promote the maturation and differentiation of MDSCs is a promising strategy to decrease their population and abolish their immunosuppressive function. When using the compound ICT, a derivative of icariin, to process MDSCs in vitro, the expression of S100A8 and S100A9 and the activation of STAT3 and protein kinase B (AKT) in MDSCs are significantly inhibited, thus leading to a lower percentage of MDSCs and their differentiation into DCs and macrophages. Moreover, the conversion of this cell type is accompanied by the downregulation of IL-10, IL-6, and TNF-α production 69 . All-trans-retinoic acid (ATRA) is a retinoid receptor agonist that inhibits retinoic acid signaling. By mediating the accumulation of glutathione (GSH) and neutralizing ROS, ATRA promotes the differentiation of MDSCs into mature myeloid cells 70 . The carotenoid astaxanthin (ATX) has considerable antioxidant activity. In ATX-treated tumorbearing mice, the Nrf2 signaling pathway in MDSCs is activated and induces the synthesis of GSH, which in turn promotes further differentiation of MDSCs into macrophages or DCs 71 . These drugs not only weaken the immunosuppressive effects of MDSCs but also enhance the body's innate immunity. Depleting MDSCs in the circulation and in tumors IL-4, IL-13, and IFN-γ, ligands for Toll-like receptors (TLRs) and TGF-β produced by activated T cells and tumor stromal cells, initiate multiple signaling pathways, such as STAT1, STAT6, and nuclear factor κB (NF-κB), thereby regulating MDSC activity 72 . The formation of a positive feedback loop between PG-E2 and cyclooxygenase 2 (COX2) contributes to stabilizing the phenotype of MDSCs and regulating their inhibitory function 73 . In addition to inhibiting the activation of MDSCs, inducing their apoptosis is an effective strategy to deplete MDSCs in the circulation and in tumors. Preclinical studies have used anti-Ly6C or anti-Ly6G antibodies to treat multiple tumor-bearing mouse models or to systematically eliminate MDSCs in mice, and enhanced antitumor effects have been observed 61 . In addition, the fully humanized, Fc-modified monoclonal antibody BI 836858 for CD33 has been found to consume MDSCs through antibody-dependent cell-mediated cytotoxicity 74 . 
According to the evidence that the differentiation of MDSCs relies on the signal transduction of cellular tyrosine kinases, sunitinib treatment in cancer patients significantly decreases the effects of c-Kit and vascular endothelial growth factor receptor on MDSCs, thereby diminishing MDSC levels 15 . Furthermore, low-dose chemotherapy effectively eliminates MDSC populations in tumor-bearing mice 26 . Multiple chemotherapeutics, such as doxorubicin 75 , gemcitabine 76 , and 5-fluorouracil (5-FU) 77 , have been reported to induce MDSC apoptosis, thus enhancing antitumor immune activity. Inhibiting the immunosuppressive function of MDSCs Eliminating MDSC-mediated immunosuppression and reversing the suppressed states of T cells are the main objectives of immunotherapy. In this process, effective inhibition of the key molecules in MDSC suppressive pathways is crucial. These strategies can be further divided into inhibiting the elimination of L-arginine mediated by MDSCs, inhibiting the activation of IDO, and decreasing the levels of ROS and RNS in MDSCs. The phosphorylation of STAT3 is required to induce MDSCs to upregulate the expression of IDO. Therefore, IDO-induced MDSC immunosuppressive activity can be blocked by the STAT3 antagonist JSI-124 or the IDO inhibitor 1-methyl-L-tryptophan 34 . The anti-inflammatory triterpenoid CDDO-Me stimulates the nuclear factor-erythroid 2-related factor 2 (Nrf2) pathway and consequently upregulates multiple antioxidant genes 80 . The application of CDDO-Me to target MDSCs effectively decreases ROS production, thus abolishing the immunosuppressive effect of MDSCs 81 . Compounds targeting MDSCs applied in combination with ICB As discussed above, activated MDSCs significantly inhibit T cell infiltration into lesion sites and antitumor activity through specific and nonspecific mechanisms. MDSCs are strongly associated with tumor progression and metastasis. Therefore, expanded MDSCs in the TME are considered crucial in ICB therapy resistance among cancer patients 106 . With the rise of combination immunotherapies, reports on combining MDSC-targeting strategies with ICB to treat cancer continue to accumulate 87,107 . Here, we elaborate on several representative reports on combining ICB with compounds that target MDSCs through multiple mechanisms (Figure 2). CXCR2 is a G protein-coupled receptor for the human CXC chemokines CXCL1, 2, 3, 5, 6, 7, and 8 64 . CXCL1/2 attracts CD11b + Gr-1 + myeloid cells to tumors, thus enhancing the survival of cancer cells through the generation of chemokines, including S100A8/9 108 . In various tumor models, the CXCR2 signaling pathway has been observed to recruit PMN-MDSCs to the TME and drive tumor invasion and metastasis 109,110 . CXCR2 blockade promotes T cell infiltration into tumors and improves sensitivity to immunotherapy. The combination of a CXCR2 inhibitor (CXCR2 SM) with an anti-PD1 antibody to treat mice bearing pancreatic ductal adenocarcinoma clearly inhibits metastasis and enhances antitumor efficacy 64 . In addition, PD-L1 is highly expressed in murine rhabdomyosarcoma, but treatment with PD1 blockade alone has limited persistent effects. When combined with anti-CXCR2, anti-PD1 elicits enhanced antitumor effects 111 . Accumulation of CD8 + TILs and increased PD-L1 expression on tumor cells have been observed in tumor-bearing mice treated with SX-682 85 . Combining SX-682 with an anti-PD1 antibody to treat mice significantly inhibits tumor growth and increases mouse survival rates 85,112 .
Importantly, combined treatment with SX-682 and the anti-PD1 antibody pembrolizumab has been tested in phase I clinical trials for metastatic melanoma 26 . Semaphorin 4D mAb The interactions of Semaphorin 4D (Sema4D) with its receptor Plexin-B1 regulate angiogenesis and tumor invasive growth 113 . Sema4D, derived from cancer cells, induces the differentiation of peripheral blood mononuclear cells into MDSCs in vitro. Additionally, the function of Sema4D in promoting tumor progression has been confirmed in various malignant tumors in human and animal models, and is associated with a poor prognosis 114 . A decrease in PMN-MDSC recruitment has been observed after the use of a Sema4D mAb to treat murine oral cancer 1 (MOC1); this phenomenon is associated with decreased expression of MAPK-dependent chemokines such as CXCL1, 2, and 5 in tumor cells 86 . The decrease in ERK- and STAT3-dependent arginase production downstream of Plexin-B1 also weakens PMN-MDSC-induced T cell inhibition. Moreover, IFN-γ production increases in the TME. These changes increase the infiltration of CD8 + TILs into tumors and enhance the activation of T lymphocytes in draining lymph nodes. The combination of the Sema4D mAb and anti-PD1 in MOC1 or Lewis lung carcinoma mouse models suppresses tumor growth and significantly improves survival in both models. The decrease in PMN-MDSC recruitment and the inhibition of immunosuppressive functions caused by the Sema4D mAb enhance the specific responses of T cells to tumor antigens, thus potentially explaining the increased immune response to PD1 blockade 86 . Similar combined effects have been observed in models of colon carcinoma 115 . Valproic acid When GM-CSF-stimulated murine bone marrow cells are exposed to histone deacetylases (HDACs), M-MDSCs substantially expand, and the proliferation of allogeneic T cells is inhibited 116 . The antiepileptic drug valproic acid (VPA) has been shown to be a strong class I HDAC inhibitor that may inhibit HDAC activity by binding the catalytic center 117 . VPA effectively alleviates tumor burden by decreasing the number of M-MDSCs infiltrating into tumors, and the antitumor immune response induced by anti-PD1 has been found to be improved by combination treatment with VPA 87 . In anti-PD1-sensitive EL4 and anti-PD1-resistant B16-F10 tumor-bearing mouse models, the combined application of VPA and anti-PD1 clearly increases CD8 + T cell infiltration into tumors and suppresses tumor progression 87 . The overexpression of the chemokine CCL2 and its receptor CCR2 has been observed in multiple cancer types 118,119 . The CCL2/CCR2 pathway is strongly associated with MDSC migration into tumors, and a lack of these cytokines can impair the tumor-promoting effects of MDSCs 120 . In both mouse models, the application of VPA inhibits the activity of HDACs and decreases CCR2 expression on M-MDSCs, thereby inhibiting the recruitment of M-MDSCs to tumors. The decrease in histone acetylation in MDSCs caused by VPA may explain the observed downregulation of CCR2 expression. VPA also promotes CD8 + T cell and NK cell expansion and reactivation in the TME. In addition, the decrease in ARG1 and prostaglandin E synthase levels in the PMN-MDSCs of VPA-treated mice indicates an ability to avoid the immunosuppression caused by PMN-MDSCs 87,121 . Histamine dihydrochloride Histamine dihydrochloride (HDC) can be decomposed into histamine in solution 92 .
Controlled by the STAT3 transcription factor, the upregulation of NOX2 activity increases the ROS levels in MDSCs and the suppression of T cell function. In the absence of NOX2 activity, MDSCs cannot effectively inhibit T cell responses and they will rapidly differentiate into mature DCs and macrophages 67 . Histamine, an inhibitor of myeloid NOX2, promotes the maturation of myeloid cells that produce ROS. By decreasing the production and extracellular release of ROS, histamine also helps to retain the function of NK cells and promote the NK cell-mediated removal of malignant cells 122 . In a tumor-bearing mouse model, MDSC accumulation and tumor progression are promoted in histamine-deficient mice compared with wild-type mice 123 . In 3 murine cancer models, EL4 lymphoma, MC38 colorectal carcinoma, and 4T1 mammary carcinoma, which exhibit MDSC accumulation, HDC treatment delays tumor growth 92 . In the EL4 and 4T1 models, HDC decreases MDSC accumulation and the level of NOX2-derived ROS. The negative correlation between the percentage of MDSCs in the tumor and tumor-infiltrating CD8 + T cells is clear in both models, thereby suggesting that HDC relieves MDSC-induced immunosuppression 92 . However, HDC has not been found to affect tumor progression in Nox2-KO mice or mice lacking Gr-1, thus indicating that its antitumor efficacy requires NOX2 + Gr-1 + cells 96 . More importantly, the combination of HDC and anti-PD1/anti-PD-L1 to treat mice with MC38 or EL4, respectively, has been found to be superior to any monotherapy in inhibiting tumor development. This finding might be associated with the increase in the proportion of CD8 + T cells showing an effector phenotype caused by anti-PD1/anti-PD-L1 treatment 92 . TRAIL-R2 agonistic antibody TNF-related apoptosis-inducing ligand (TRAIL) is an effective stimulator of apoptosis, and the TRAIL pathway is a promising target for promoting MDSC elimination. TRAIL ligates 2 receptor types: the death receptors TRAILR1 and TRAILR2 (also known as DR5) and the decoy receptors TRAILR3 and TRAILR4 124 . On the basis of the function of TRAIL receptors (TRAILRs) to selectively inhibit MDSCs, the efficacy of an agonistic DR5 antibody has been verified 125 . In cancer patients with high MDSC levels, DS-8273a, an agonistic antibody to TRAILR2, rapidly eliminates MDSCs without affecting neutrophils, monocytes, or other myeloid and lymphoid cells 97 . In tumor-free mice, the endoplasmic reticulum stress response causes changes in the expression of TRAILRs in MDSCs, thus resulting in a shorter lifespan for MDSCs than their counterparts (PMNs and monocytes). In tumor-bearing mice treated with the agonistic DR5 antibody (MD5-1 mAb), MDSCs are selectively inhibited. Combining the MD5-1 mAb and anti-CTLA4 to treat mice increases the sensitivity of tumors to anti-CTLA4 and significantly delays tumor progression 125 . Phenformin Previous reports have suggested that biguanides, such as phenformin, exhibit antitumor activity both in vivo and in vitro 126,127 . After treatment with phenformin, the numbers of PMN-MDSCs but not M-MDSCs significantly decrease in the spleens of mice with melanoma. The effects of phenformin on PMN-MDSCs are dependent on AMP-activated protein kinase, a major mediator of its antitumor activity 98 . Phenformin diminishes the PMN-MDSC population by inhibiting proliferation and promoting apoptosis 128,129 . These results suggest that phenformin has a selective inhibitory effect on PMN-MDSC-driven immunosuppression. 
The combination of phenformin and an anti-PD1 antibody to treat mouse models of BRAF/PTEN melanoma effectively inhibits tumor growth 98 . In these models, phenformin significantly decreases the levels of proteins such as ARG1, S100A8, and S100A9, which are critical to the immunosuppressive activity of MDSCs. In addition, this combination clearly decreases the ratio of PMN-MDSCs in the tumor and spleen and synergistically promotes CD8 + T cell infiltration, thus further indicating its positive prospects 98 . Dimethylbiguanide The metabolism of MDSCs in tumors and inflammatory tissues is greatly diminished, and this response may be associated with the accumulation of methylglyoxal in MDSCs 100 . Methylglyoxal can be administered to CD8 + T cells by cellto-cell transfer; it then causes the consumption of L-arginine inside T cells and the deactivation of L-arginine-containing proteins through glycosylation, thereby inhibiting their effector functions. Dimethylbiguanide (DMBG)-containing guanidine groups neutralize the glycosylation function of methylglyoxal and release the suppressed states of CD8 + T cells conferred by MDSCs 100 . After isolation from hepatocellular carcinoma patients, methylglyoxal has been detected in M-MDSCs but not in PMN-MDSCs, thus indicating one difference between human and mouse models 100 . Strong and lasting tumor regression has been observed in mice with melanoma specifically expressing ovalbumin after the combined application of DMBG and an anti-PD1 antibody. Tumor cells grown after treatment with this combined therapy lose ovalbumin expression 100 . These results clearly indicate that DMBG relieves MDSC inhibition of CD8 + T cells against tumor-specific antigens, and its combination with an anti-PD1 antibody synergistically increases tumor-specific immune responses. IDO pathway inhibitors IDO is activated by MDSCs in many human cancers, and its overexpression tends to be associated with poor prognosis 130 . Because IDO is regarded as an important target for cancer treatment, IDO pathway inhibitors have been applied in various types of cancer models and clinical trials 37,101 . In addition, IDO expression is associated with some immune checkpoints, such as PD-L1 and CTLA4, thus supporting a combined targeting strategy 131 . In non-Hodgkin lymphoma mouse models, the application of the IDO inhibitor indoximod decreases the number of Tregs in tumor-draining lymph nodes and effectively inhibits tumor growth 132 . In patients with melanoma, the combination of a PD1 antibody and indoximod has achieved positive disease control rates 101 . In addition, other IDO inhibitors, such as epacadostat and navoximod, when combined with PD1 blockade to treat head and neck squamous cell carcinoma, melanoma, and other solid tumors, have shown enhanced antitumor activity compared with treatment with PD1 blockade alone 133 . Entinostat Entinostat, a class I-specific HDAC inhibitor, impairs the dynamic interactions between host immune surveillance and the TME 134 . Entinostat enhances tumor cell immunogenicity in animals bearing tumors or in cancer patients by activating tumor antigen expression, antigen presentation, and costimulatory molecules 135,136 . Furthermore, entinostat inhibits the immunosuppressive functions of MDSCs infiltrating into tumors by significantly decreasing the levels of ARG1, iNOS, and COX2 104 . In one report, 2 mouse models bearing Lewis lung carcinoma or renal cell (RENCA) carcinoma have been used to assess the combined efficacy of entinostat and PD1 blockade. 
Enhanced antitumor effects have been observed in both models. In the entinostat and anti-PD1 combination group, compared with the control, entinostat-alone, and anti-PD1-alone groups, MDSC function was suppressed, the FoxP3 protein level in CD4 + FoxP3 + cells was strongly decreased, and CD8 + T cell infiltration into the TME was increased. With clear changes in cytokine/chemokine release in vivo, the microenvironment changed from immunosuppressive to tumor suppressive, thus indicating that entinostat promotes the antitumor response to anti-PD1 104 . PI3Kδ and γ inhibitors PI3Ks are part of a family of signal transducing enzymes that mediate critical cellular functions in immunity and cancer 137 . p110δ and p110γ, class I PI3K isoforms, activate MDSCs, and both are associated with MDSC-mediated immunosuppression in solid tumors 65,138,139 . In mice with oral cancer, MDSCs significantly accumulate in the periphery and TME, thus resulting in the inhibition of T lymphocyte function 105 . The expression of the PI3Kδ and γ isoforms is higher in PMN-MDSCs than in MOC cells. The inhibition of PI3Kδ and γ with IPI-145 in vitro partially reverses the immunosuppressive phenotype of peripheral and tumor-infiltrating PMN-MDSCs by altering the expression of ARG1 and NOS2. The combination of IPI-145 and anti-PD-L1 to treat tumor-bearing mice enhances the sensitivity of the mice to anti-PD-L1 and significantly improves the antigen-specific T-lymphocyte response 105 . Other drugs In addition to the drugs listed above, other MDSC-targeting compounds have been reported to have potential for combination with ICB. For instance, Prim-O-glucosylcimifugin impedes the proliferation and activity of PMN-MDSCs, mainly by suppressing the metabolism of arginine and proline and the citric acid cycle 140 . Cabozantinib and BEZ235 inhibit PI3K-AKT-mTOR signaling in tumors and decrease CCL5, CCL12, CD40, and hepatocyte growth factor levels, thus inhibiting the recruitment and activity of MDSCs 112 . The combination of these compounds with ICB relieves the suppressed state of effector T cells and increases their infiltration into tumors, thereby improving the responses of cancer patients to ICB and enhancing antitumor efficacy 112,140 . Conclusions The mechanisms of MDSC-mediated immunosuppression in the TME have been described. Additionally, the combined application of MDSC-targeting compounds and ICB to enhance antitumor effects has been shown to have broad prospects, and substantial progress has been made in this field. However, some challenges remain to be overcome in future clinical applications. The distributions of MDSCs and the dominant MDSC phenotypes in various cancer types are known to differ 28 . In addition, differences exist in the baseline percentages of MDSCs among individuals 52,54 . These differences can lead to failure of MDSC-targeted therapies in patients. Therefore, precise detection of the phenotype and the dynamic distribution of MDSCs in cancer patients is beneficial and necessary for personalized cancer therapy. Timely and accurate MDSC visualization methods will provide important reference values for evaluating the effectiveness and durability of immunotherapy, and more effective and less invasive tools also must be developed. Molecular imaging is a tracking method that reflects specific molecular events in disease progression, thus providing an important foundation for personalized cancer therapy.
The use of specific molecules expressed on MDSCs as targets for real-time imaging can provide guidance for the combination of molecular imaging with molecular therapy. The targeting specificity of mAbs combined with the excellent resolution and sensitivity of PET has allowed immunoPET imaging to become an emerging molecular imaging technology with far-reaching value in the development of cancer diagnosis and personalized medicine 141,142 . Several immunoPET probes based on anti-PD1 or anti-PD-L1 antibodies have been developed to track the dynamic distribution of these antibodies in the body through precise imaging [143][144][145] . In future research, the exploration of immunoPET probes targeting MDSCs may be a meaningful strategy to select patients who are sensitive to MDSC-targeted therapy for further treatment, and to direct patients with little or no response to therapy toward multidisciplinary treatment. In addition, molecular imaging technologies based on ultrasound microbubbles 146 or MRI 147 are being applied in cancer treatment strategies, and these technologies are expected to be further developed for MDSC-targeted therapies. Another challenge is the limited number of clinical trials targeting MDSCs, particularly those testing the combination of targeted MDSC therapy and ICB therapy. Most combination strategies have been evaluated only in the preclinical stage or in small numbers of cancer patients. Furthermore, other effects of related drugs targeting MDSCs on the TME are not clear. Some MDSC-targeting compounds such as HDC have limitations in triggering the influx of CD8 + T cells into tumors, thus indicating that related combination therapies must be properly adjusted 92 . In the next phase, more clinical trials testing combination therapies are expected to be performed, and the patient cohorts must be expanded to further evaluate the effects of these therapies on patient prognosis and accompanying adverse effects. Theranostics is a new type of biomedical technology that effectively combines the diagnosis and treatment of diseases, and the emergence of nanoscale agents provides a new opportunity for its further development 148 . Because of their unique biological properties 149 , nanomaterials can be considered for integration into MDSC-targeting compounds with molecular markers for imaging to construct specific nanotheranostics. However, how to improve the specific uptake of MDSC-targeting nanotheranostics by tumor tissues and promote their penetration into the tumor core remains a challenge requiring further investigation. MDSCs intimately participate in mediating immunosuppression in the TME and cancer patients' resistance to ICB therapy. Therefore, exploration of clinical diagnostics, MDSC-targeted therapies, and visualization methods has far-reaching value. More importantly, with the in-depth study of MDSC expansion, recruitment and immunosuppressive mechanisms, research dedicated to combining MDSC-targeting strategies with ICB therapies has gradually emerged and made positive progress. Given that many MDSC-targeted compounds have been approved by the FDA or are in different stages of clinical trials, more effective compounds and ICB combination strategies must be further explored to evaluate their antitumor efficacy. Conflict of interest statement No potential conflicts of interest are disclosed. Author contributions Conceived and designed the analysis: Hongchao Tang, Hao Li, Zhijun Sun. Collected the data and performed the analysis: Hongchao Tang, Hao Li.
8,496.6
2021-08-17T00:00:00.000
[ "Biology", "Medicine" ]
STUDY INTO EFFECTS OF A MICROWAVE FIELD ON THE PLANT TISSUE

The results of an experimental study into the effect of a microwave electromagnetic field on plant tissue are presented. The effects of microwave heating of seeds, grain, and moistened straw are examined for the corresponding technologies of bio-stimulation, drying, and sterilization. The influence of the structure of plant tissue and of its moisture content on the structural changes under microwave heating is shown. A method is proposed for estimating the amount of microwave field energy converted into the internal energy of the body. Keywords: microwave energy, heating, plant tissue, bio-stimulation, drying, efficiency coefficient.

Introduction Thermal treatment of materials of plant origin is a determining step in the majority of technological processes, specifically drying, sterilization and bio-stimulation. The energy crisis and the increasing demand for products of enhanced quality have necessitated the improvement of traditional technologies and the development of new ones. In this direction, methods that utilize the energy of a microwave electromagnetic field (MW EMF) have long proved to be highly effective [1]. Application of microwave heating is considered advisable for the modernization of a range of technological schemes of production [2-5]. However, incomplete knowledge about the effects of a microwave field on plant materials does not make it possible to adopt efficient microwave technologies. Studying the processes of interaction between a microwave electromagnetic field and materials of plant origin, as well as determining treatment conditions, are important objectives for the development of performance-effective and energy-rational technologies.

2. Literature review and problem statement It was established in [3,4] that the effectiveness of application of each of the considered methods is related to a change in the structure of the plant material in the course of MW treatment. While the method of seed bio-stimulation implied, as one of its tasks, the exclusion of regimes that violate the integrity of the cell walls [6], the process of preparing a substrate based on straw involves a desirable destruction of the plant structure. Study [7] showed that the surface of the original straw material is smooth and relatively uniform in height, while the surface of the treated sample is characterized by considerable roughness. The results obtained suggest the expansion of capillaries due to micro-explosions. At present, a basis has been created for the practical application in agriculture of a microwave pre-sowing technology of seed treatment [2,3]. An assessment of the impact of the time of microwave exposure on the stimulation of germination [8] demonstrates the presence of an optimum; germination peaked at 10 s of treatment. However, the authors fail to determine specific energy costs (per kilogram of seeds) taking into consideration the performance efficiency of the chamber.
It was unambiguously determined that treatment in a MW field manifests itself more strongly on seeds with initially low germination than on seeds whose laboratory germination exceeded 95 %. The effects of microwaves significantly increased the germination energy and the overall germination of 8-year-old carrot seeds [9]. Maximum carrot seed germination was established at a frequency of 9.3 GHz with treatment over 5 min. The action of a microwave electromagnetic field on seeds can lead to a significant bio-stimulating effect, which manifests itself at all stages of plant vegetation [10,11]. The manifestations of bio-stimulation include an increase in germination energy and germination and, when plants are grown from the treated seeds, a strengthening of the root system and a shortening of the vegetative phases. This effect was observed both during seed treatment and while treating potato tubers [12]. Upon exposure for 20 minutes at 38 GHz, 46 GHz and 54 GHz, the researchers did not observe any influence of microwave radiation on the harvested tuber weight of the Felka Bona potato variety. Radiation at a frequency of 2.45 GHz, for 10 seconds and with a microwave generator power of 100 W, resulted in the largest increase in the biomass of seed potato sprouts and an increase in the Felka Bona tuber weight. The experiments in [13] revealed a significant increase in biomass (up to 66 %) when the exposure time was increased from 12 minutes to 20 minutes compared to the control. In some cases, bio-stimulation proceeds simultaneously with disinfection [14,15]. Seed treatment with MW EMF is an environmentally friendly and effective method [16]. However, there are no definitive data on the treatment regimes for various seeds in a microwave field, which makes it impossible to accurately predict the result. This reduces the effectiveness of the applied method.
Studying the drying at microwave heating shows that it is possible to significantly reduce energy consumption [17]. Microwave heating under drying regimes demonstrates a considerable intensification of the process [18]. At an increase in the output power of a magnetron by 4 times, the duration of drying reduces by about 5 times. However, the paper does not report an analysis of the impact of the type of material and the amount of loading on such required characteristics as treatment duration and power output of the magnetron. Research into microwave drying of fruits and plants is still under way [18,19]; however, the drying of raw materials with a high moisture content is not feasible. This leads to loss of quality and high energy costs, since the primary substance that absorbs electromagnetic energy is water. The microwave drying of grain crops whose moisture level is 20-22 % appears to be promising. Using the study into the kinetics of the drying of buckwheat groats in a microwave electromagnetic field as an example, it was shown that drying curves contain the periods observed when exploiting other ways to supply heat [20]. Special attention is paid to studying the temperature field in a material in order to establish rational modes [21,22]. Of great importance is the analysis of the heterogeneity of heating, caused by the shape of the material and its composition, as well as the uneven distribution of an electromagnetic field in a microwave chamber [22]. An analysis of data from the scientific literature [9,12,17] allows us to draw the following conclusion. Essential limiting factors in the use of methods of microwave heating in different technologies are the insufficient completeness of theoretical and experimental studies. The lack of data does not make it possible to predict the effects that occur in the material under the action of a microwave field. The aim and objectives of the study The aim of the present research was to study the effects of a microwave field on the plant tissue. This would enable the creation of new energy-saving and highly efficient technologies, which would make it possible to exploit the features of microwave energy conversion into the internal energy of the material. To achieve the set aim, the following tasks have to be solved: - to investigate the effect of microwave treatment on seeds, to determine conditions for obtaining an optimal bio-stimulation effect, and to estimate a threshold time for the exposure to a MW field; - to explore the sterilizing effect of microwave field exposure on damp straw and to determine the optimal treatment mode; - to explore the features of the process of drying a layer of grain under different conditions for the removal of evaporated moisture; - to estimate the energy efficiency of converting microwave energy into the internal energy of the material.
1. Characteristics of materials We exposed to microwave treatment the grain intended for drying, seeds as a sowing material, and straw. The selection of straw is explained by its extensive use as a substrate for growing wood-destroying fungi. The research into the phenomena of bio-stimulation was conducted using the seeds of wheat, the variety of Odessa-267; ordinary soy, the variety of Hadzhibey; and corn, the variety of Odessa-10. When studying the drying in a microwave field, we used grains of buckwheat and wheat. The initial moisture content of grain changed from 20 % to 22 %; the initial temperature ranged from 17 to 26 °C; the weight - from 0.05 up to 1.2 kg; the layer thickness ranged from 0.008 to 0.07 m; the surface area of the sample open to remove moisture - from 8·10⁻³ to 94·10⁻³ m². The power of the magnetron ranged from 80 to 800 W. 2. Schematic of experimental installation and research procedure The schematic of the experimental installation designed to study the effects of a microwave field on the plant tissue is shown in Fig. 1. Microwave energy arrived at the working chamber with a rectangular cross section through the waveguide from the magnetron with a generation frequency of 2.45 GHz. The design of the microwave chamber enabled, simultaneously with feeding MW energy, blowing air above the layer. At simultaneous microwave and convective heat feed, the air along an inlet duct was forced into chamber 2 by fan 6. To control air heating, there is heater 7 with measuring kit 8 and voltage regulator 9. The research procedure of grain drying implied the following. We placed the examined material in the experimental cell and switched on the magnetron. At specific intervals, we determined by a weight method the amount of evaporated moisture and calculated the humidity (when studying the drying process). The research procedure into the influence of the MW field on straw implied the following. Wheat straw was preliminarily poured with hot water and pressed out to a moisture content of 73-75 %. We formed packages weighing 0.4 kg, which were then placed into the MW cell. The magnetron was turned on for a specified period. Upon treatment, the package was taken out and we measured the temperature of the straw. When examining bio-stimulation, the seeds were put into paper packets of 0.1 kg. The output power of the magnetron was 800 W. We also placed a glass of water (200 ml) in the microwave chamber in order to reduce the power received by the seeds. Experimental study into the effect of a microwave field on the plant tissue The scope of research of the present work covers the following applications of microwave heating of materials of plant origin: pre-sowing seed treatment (bio-stimulation), sterilization of plant substrate for the production of wood-destroying fungi, and grain drying. Despite their diversity, plant materials have a common specificity, which involves the structure of plant cells, anisotropy, and the presence of substances with the properties of polar dielectrics. The basic distinction between the treated plant materials in the given techniques is a significant difference in the moisture content. Thus, at pre-sowing seed treatment, the moisture content corresponded to the equilibrium; during preparation of the substrate - to the level of humidity 73 %; at drying, the moisture content was within the range of 20-22 %. Accordingly, we set different objectives in microwave treatment, which specifies a number of particular tasks for each of the developed methods.
1. Study into effects of a microwave electromagnetic field on the properties of seeds An example of the effect of a microwave field on seeds is shown in Table 1. The data are acquired in line with the procedure described above. As the data reveal, the value of germination energy and the laboratory germination rate change depending on the exposure. We made the following assumption about the mechanism of occurrence of a bio-stimulation effect. Under sufficiently mild modes of MW exposure, the plant tissue integrity is not disrupted. At the same time, the transport properties of the capillary system (intercellular structures, pores of plasma membranes, etc.) improve due to the development of large pressure gradients in the closed microvolumes. The action of a microwave field leads to active heat release in a cell. The temperature rises, as a result of which the liquid in cellular tissue tends to expand. The volume of a cell can somewhat grow due to the air-bearing intercellular spaces. However, increasing the time of MW exposure leads to the state when all reserved space is taken up by the increased volume, while the pores in the plasma membrane are not intended for a massive outflow of fluid. That is why the volume of a cell can be considered fixed, with the process of temperature rise in a cell proceeding along an isochore. Estimates show that an increase in the temperature by 10 °C causes an increase in the pressure under conditions of constant volume of up to 60 bar; such a pressure is definitely unacceptable for the living cell. The decisive role of the relation between the growth in the internal pressure and the rigidity of the cell wall is indicated by the fact that, for example, the destruction of cellulose has an explosive character. Large hydraulic resistance leads to the event when the resulting mass flow cannot escape to the outside, the pressure increases rapidly, and this process ends up destroying the tissue. Such regimes are not acceptable. In a general form, the calculation of the boundary temperature is proposed to be performed employing the following algorithm: 1. For each estimated unit, determine the maximum permissible volume V_K, that is, the volume that can be achieved at thermal expansion of the protoplasm. 2. The temperature change in the process of volume increase from V_0 to V_K is determined from the thermal expansion relation, ΔT = (V_K − V_0)/(β·V_0), where β is the coefficient of thermal expansion of water. 3. The amount of heat absorbed in this process is found from the heat balance. For further calculations, in order to improve accuracy, it is recommended that the dependence Q = c·M·ΔT be used, which makes it possible to determine the amount of heat spent on heating the total grain mass M up to the design temperature. The period over which this heat is received is τ = Q/(q·V), where V is the volume of the material and q is the specific power absorbed by the material. Over this time, the increase in the pressure will not be critical, with the seeds undergoing a stage of bio-stimulation. The cell walls cannot further increase their volume. Inside the object, at subsequent energy supply of the MW field, the pressure begins to rise sharply. In general, heat is used in order to increase the body temperature and to change the pressure, according to the first law of thermodynamics. The relationship between temperature and pressure can be assigned only approximately, because the equations of state of such complex mixtures as the protoplasm are lacking. In the first approximation, the calculation is recommended to be performed by adopting the properties of the protoplasm equal to the water properties.
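The algorithm above can be condensed into a short numerical estimate. The sketch below is a minimal illustration, assuming water-like protoplasm properties: the expansion coefficient and the heating rate are representative assumed values, while the cell volumes are those quoted for a wheat grain germ in the next paragraph.

```python
# Minimal sketch of the threshold-exposure estimate for bio-stimulation.
# beta and heating_rate are assumed representative values, not taken verbatim
# from the paper; V0 and VK are the cell volumes quoted in the text.
V0 = 3.35e-14        # initial cell volume, m^3
VK = 3.366e-14       # maximum permissible cell volume, m^3
beta = 3.0e-4        # thermal expansion coefficient of water, 1/K (assumed mid-range value)
heating_rate = 0.15  # K/s, within the reported 0.14-0.17 K/s range

# Temperature rise that exhausts the reserved volume (the isochoric state is reached)
delta_T = (VK - V0) / (beta * V0)

# Exposure time after which further heating drives a sharp pressure rise
tau_threshold = delta_T / heating_rate

print(f"allowed temperature rise ~ {delta_T:.0f} K")
print(f"threshold exposure time ~ {tau_threshold:.0f} s")
```

With these inputs the estimate gives a threshold of the order of 100 s, which is consistent with the 50-130 s exposures reported below for the bio-stimulation regimes.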
Applying these dependences, we performed calculations aimed at determining the pressure and temperature at the end of the treatment process, depending on the moisture content and exposure time. When estimating the pressure, it was assumed that the body shells are rigidly fixed by the cell walls, that is, the case most characteristic of spores and sclerotia. The calculation is of an estimated character, but it allows us to predict the reaction of a biological object to a MW field. It was accepted that the volume of a plant cell of a grain germ was V_0 = 3.35·10⁻¹⁴ m³. The maximally permissible cell volume is V_K = 3.366·10⁻¹⁴ m³. Calculations were performed for grain of mass 0.1 kg. The heat capacity of wheat grain depends on the humidity W: thus, at W = 33 %, c = 2,398 J/(kg·K); at W = 14 %, c = 1,852 J/(kg·K). Fig. 2 shows the results of the calculation of boundary curves for the wheat seed treatment time in the microwave chamber at an output power of the magnetron P = 1 kW, depending on seed humidity and at different values of the initial temperature. The initial temperature exerts a significant impact on the value of the permissible treatment time, which is related to the dependence of the dielectric characteristics and the coefficient of volumetric expansion on temperature. In the calculation, it was assumed that the initial temperature of the grain and the ambient temperature were the same. An analysis of the results of research into the effect of a microwave field on seeds revealed the following. At a heating rate of seeds in a microwave chamber Δt/τ = 0.14-0.17 K/s, with a moisture content of u_0 = 8-12 %, at a duration τ = 50-130 s (depending on the variety of seeds), there occurs a bio-stimulation effect that manifests itself in an increase in the germination energy and germination rate. At lower specific power, a bio-stimulation effect is achieved by increasing the duration of treatment: when treating wheat seeds at a specific power q_v = 3.7·10⁴ W/m³, the exposition is τ = 180 s. 2. Thermal treatment in a microwave field of plant material as the base of substrate for the wood-destroying fungi The main purpose of thermal treatment is to destroy or suppress microflora which is competitive to the cultivated fungus. The efficiency of MW treatment under modes at which Trichoderma was destroyed or suppressed was verified according to the following procedure. The crushed straw, pre-soaked for 48 hours, was pressed out to a humidity of W = 73 %. The treatment was carried out at a power P = 800 W. We formed samples weighing 0.4 kg on an analytical scale, followed by the mycelium inoculation after cooling; next, we placed them into plastic bags. The main results of the observation of sample fouling and fungi yield are shown below. The results of the study into the properties of the substrate for growing fungi under different modes of treatment and at different initial humidity are given in Table 2.
The optimal treatment mode corresponds to an exposure of 140 s. For this mode, the chamber performance efficiency was 80 %. The energy consumption Q was evaluated taking into consideration the magnetron's performance efficiency η_M = 75 %. It was determined that the heat treatment of wet plant material in a microwave field with a heating rate of Δt/τ > 0.33 K/s leads to an improvement of its nutritional value; the material is sterilized, and the treatment time is reduced by 45 times compared to traditional sterilization. The fungi harvest increased by 40 % compared to the yield obtained when using traditional technologies, at the following operating parameters: specific power q_v = 9.6·10⁵ W/m³, duration of treatment τ = 140 s. Under the considered modes, the development of competing fungi, specifically Trichoderma, is suppressed. 3. The drying of grain materials using a microwave electromagnetic field Fig. 3 shows typical experimental dependences of the moisture content and temperature on the drying duration in a microwave field at different loading mass. The drying process can be divided into periods characteristic of colloidal capillary-porous solids at other heat supply techniques: warm-up (zero), constant (first) and falling (second) drying rate. The nature of the temperature change in the first period varied depending on the mass of the load (Fig. 3) and the supplied power. At values of specific power up to q = 450 W/kg, the temperature remained largely unchanged. With an increase in the specific power, the temperature grew; at values q > 600 W/kg, its change was essential. The period of the falling drying rate was determined by a change in the moisture content curve: the curve became flat. The temperature always increases over this period. The described pattern was typical for all materials. The research results showed that the rate of MW drying without overheating the grain is (0.8-6.2)·10⁻⁴ s⁻¹, which is much higher than the values obtained using other techniques of heat supply. Thus, for grain, the conductive drying rate was 0.3·10⁻⁵ s⁻¹, and that of the conductive-convective technique - up to 0.2·10⁻⁴ s⁻¹. The possibility to significantly intensify the process testifies to the prospects of applying a microwave field in order to dry grain crops. Experiments have shown that under the optimal mode the rate of microwave drying with simultaneous blowing of the grain layer with air was 12.7·10⁻⁴ s⁻¹; in this case, the specific energy cost per kilogram of evaporated moisture amounted to 5.65 MJ/kg. In the process of MW heating of damp grain, the overpressure inside the layer starts to rise. The excess pressure at the layer's surface is zero, and it is maximal in the center. A total pressure gradient arises in the layer, which is the driving force of filtration transfer. In order to detect the effect of increasing pressure in the grain layer, we devised the following technique. We placed a container with a layer of buckwheat of height 11 cm in a microwave chamber and measured the excess pressure in the center using a U-shaped kerosene manometer. The choice of kerosene was due to the fact that it does not absorb microwave energy. An exponential increase in the pressure occurred when the temperature exceeded 70 °C. In this case, the layer's thickness was 0.1 m. The maximally possible overpressure inside the layer was equal to 640 Pa. When this magnitude was reached, we observed a spontaneous instantaneous pressure relief.
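For orientation, the reported maximum overpressure can be related to the reading of the U-shaped kerosene manometer. The sketch below is a back-of-the-envelope check; the kerosene density is an assumed typical value, not a figure from the paper.

```python
# Relates the 640 Pa overpressure quoted above to the height difference of the
# kerosene columns in the U-tube manometer. rho_kerosene is an assumed value.
rho_kerosene = 800.0   # kg/m^3 (assumed typical density of kerosene)
g = 9.81               # m/s^2
delta_p = 640.0        # Pa, maximum overpressure reported in the text

column_height = delta_p / (rho_kerosene * g)
print(f"640 Pa corresponds to a kerosene column difference of ~{column_height * 100:.0f} cm")
```

A pressure of 640 Pa thus corresponds to a column difference of roughly 8 cm, an easily readable deflection.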
The specificity of microwave heating is in the volumetric character of the material's absorption of microwave energy. The microwave energy flux density is maximum in the surface layers, while, advancing deep into the material, the flow is weakened in line with the exponential law. Therefore, of particular interest for the evaluation of the non-uniformity of temperature and moisture content was the kinetics of the layer-wise drying. To this end, we fabricated an experimental cell, which consisted of three layers separated by radiotransparent nets. The weight of each layer was 0.1 kg, the thickness was 0.009 m, the diameter was 0.135 m, and the surface area open to remove steam was 14.3·10⁻³ m². In the course of the experiment we determined the change in the moisture content and temperature of buckwheat over the drying process along the height of the layer. Only the upper surface of the sample was open to absorb MW energy and remove the steam, with the side and lower surfaces being heat- and moisture-insulated. The curves of the layer-wise drying kinetics are shown in Fig. 4, which demonstrate that the drying proceeded most intensively in the middle layer. In this experiment, the layer's mass was m = 0.1 kg, the thickness l = 0.009 m, and P = 160 W. The moisture release rate in the upper layer was somewhat weaker (Fig. 4, a). This is related to the fact that the temperature of the upper layer was slightly lower than that of the second (Fig. 4, b). Another feature was discovered: the third layer's moisture content increased over time, reaching 0.215 kg/kg (the initial moisture content is 0.2 kg/kg). Therefore, the moisture from the upper layers of the material penetrated down, apparently due to the thermal diffusion mechanism and the force of gravity. Noteworthy is the following feature: despite the growing moisture content in the bottom layer, that is, an increase in the share of the polar dielectric (the "receiver" of microwave radiation) in this volume of the material, its temperature remains below the temperature of the first and second layers. Experiments on a cell consisting of four layers showed that the lower (fourth) layer also had the lowest temperature and the highest moisture content, which, as is the case for the previous experiment with three layers, increased over time. Thus, the moisture content of the lower layer increases regardless of the thickness of the sample. The reason for the lowering of the temperature of the sample at the boundary between the layer and the base of the chamber is the transfer of heat by the conductivity of the experimental cell. This irregularity was not observed when using a netted cell that freely passed the steam in all directions. In this case, the temperature of the material's layers differed by no more than 4 °C, and the humidity - by 0.007 g/kg. Comparing the data on the drying kinetics derived with the cell with a solid and with a radiotransparent netted bottom confirmed the importance of the rational organization of steam removal. Thus, in the first case the average moisture content of the sample decreased from 0.2 kg/kg to 0.17 kg/kg in 14 minutes, and in the second - in 7.5 min. Uneven distribution of temperature and moisture content occurs under conditions when steam removal through the bottom and side surfaces is difficult.
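The exponential weakening of the flux mentioned at the start of this subsection can be illustrated numerically. The penetration depth used below is a placeholder assumption, since the text gives no value for the grain layer; the depths correspond to the boundaries of the three 9 mm sub-layers of the experimental cell.

```python
# Illustrative attenuation of the microwave flux with depth, I(z) = I0 * exp(-z / delta).
# delta is an assumed placeholder; the paper does not report a penetration depth.
import numpy as np

delta = 0.02                                   # assumed penetration depth, m
depths = np.array([0.0, 0.009, 0.018, 0.027])  # boundaries of the three 9 mm layers, m
relative_flux = np.exp(-depths / delta)

for z, f in zip(depths, relative_flux):
    print(f"depth {z * 1000:4.0f} mm: relative flux {f:.2f}")
```

Even with this crude model the flux reaching the bottom of the cell is well below half of the surface value, which motivates the layer-wise measurements described above.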
4. Energy efficiency of thermal effect of interaction between dielectric materials and a microwave field One of the key tasks was to determine the conditions for the greatest possible conversion of microwave electromagnetic field energy into the internal energy of the examined material. In order to analyze the specifics of microwave energy absorption by the material, it was of great importance to study the dependence of the magnitude of absorbed microwave energy on the chamber's loading. The expression for the overall performance efficiency is represented as η = η_M·η_c, where η_M is the magnetron's performance efficiency and η_c is the microwave chamber's performance efficiency. The value of η_M shows with which efficiency the magnetron converts the energy of the electric field of industrial frequency (50 Hz) into energy at the microwave frequency. This magnitude is technically specified. The value of η_c depends on the conditions of aligning the magnetron with the waveguide and the loaded material; due to the complexity of its prediction, there is a need to conduct large-scale experiments. First of all, we chose water for the research as a substance whose electrophysical properties are well studied. The capability of water to absorb microwave energy approaches the maximum due to the high polarity of its molecules. The chamber's performance efficiency is calculated as the ratio of the heat converted by the material, Q_Σ, to the magnetron's power output P, and includes the value of the usable heat flow Q_u, the environmental losses as a result of natural convection Q_c, and the radiant heat exchange Q_r between the sample and the chamber's walls (Q_Σ = Q_u + Q_c + Q_r). To study the dependence of the performance efficiency on the chamber's loading, we used water at an initial temperature of 20 °C; the mass varied from 0.05 up to 1.1 kg. The output power of the microwave source was 800 W. One can see (Fig. 5) that with an increase in the mass of water the performance efficiency continuously grows, reaching its maximum value of η_c = 90 % at m = 1.1 kg, which allows us to argue about achieving the optimum loading of the chamber. Materials of plant origin have their own specifics that include structural characteristics and chemical composition but, in order to assess the power released in the form of heat, the most important is the moisture content of these materials. The capacity of plant materials to absorb MW energy is significantly lower than that of water, which is why the chamber's performance efficiency when treating grain materials is less. We obtained an empirical dependence (7) for certain grain materials (oats, wheat, barley, buckwheat), which takes into consideration the loading of the chamber through the introduction of the simplex V_M/V_K, where V_M is the volume taken up by the material, V_K is the chamber's volume, u is the current moisture content, and u_0 = 0.2 is the initial moisture content. At a change in the relative volume V_M/V_K from 0.0015 to 0.03, the error in determining the chamber's performance efficiency is ±17 %. The data on the performance efficiency are proposed to be used to measure the heat converted in a material when interacting with a microwave field, in accordance with dependence (8), where V is the volume taken up by the material and P is the output power of the magnetron. Dependence (8) allows us to determine the energy efficiency of converting microwave energy into the internal energy of the material.
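Since dependences (7) and (8) are not reproduced in this text, the sketch below only illustrates how a measured chamber efficiency could be used to estimate the heat converted in the material. The relations Q = η_c·P·τ and q_v = η_c·P/V_M are this editor's reading of the description above, not formulas taken from the paper, and the treatment time and material volume are illustrative values.

```python
# Sketch of an absorbed-heat estimate from the chamber efficiency.
# The relations used here are an assumed reading of dependences (7)-(8),
# not formulas reproduced from the paper; tau and V_material are illustrative.
eta_c = 0.67          # chamber efficiency for grain at u0 = 0.2 (value given in the conclusions)
P = 800.0             # magnetron output power, W
tau = 600.0           # treatment time, s (illustrative)
V_material = 1.5e-3   # volume taken up by the material, m^3 (illustrative)

Q_absorbed = eta_c * P * tau        # heat converted in the material, J
q_v = eta_c * P / V_material        # specific absorbed power, W/m^3

print(f"absorbed heat ~ {Q_absorbed / 1e3:.0f} kJ")
print(f"specific absorbed power ~ {q_v:.2e} W/m^3")
```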
Discussion of results of the study into the effect of a microwave field on the plant tissue Due to the obtained data on heating effects in a MW field, there is a possibility to predict changes in the plant tissue. The considerable influence of the moisture content on the heating rate is confirmed. There is also certainty in terms of calculating a threshold duration of heating the seeds at bio-stimulation. The procedure for the calculation of a threshold time was compiled on the basis of the hypothesis on the emergence of a bio-stimulating effect. The identified features of heating and drying of wet grain are advisable to apply when designing microwave dryers. It was established that at an increase in the layer's temperature above 70 °C, an exponential rise in pressure occurs. It was also found that, during microwave drying, conditions could arise at which the moisture content of the lower layer of grain would increase. In order to enable uniform and intense drying, a layer of grain must be provided with a free outlet for evaporated moisture. The benefits of the present study include determining the conditions for effective heat treatment in a microwave field, obtained in the course of a comprehensive examination of the effects of interaction between a MW field and plant materials. Of particular importance is the proposed approach to estimating the heat generated by converting MW energy into internal energy. In addition, we obtained experimental data on the effect of a cascade pressure rise in a layer of wet grain material and on the layered drying. Understanding these phenomena makes it possible to better comprehend the processes of heating in a microwave field. The shortcomings of the present research include the absence of confirmed data on the penetration depth of electromagnetic energy into a layer of a material, as well as of the substantiation of a rational thickness of the layer. A constraint on the application of the results is the lack of specialized microwave equipment; in this case, in order to implement each of the described applications of MW treatment, an individual design is required. Otherwise, the effectiveness of using microwave energy will not be optimal. In the future, we should determine a rational thickness of the layer taking into consideration the depth of penetration of electromagnetic energy into specific plant materials. This will help facilitate the choice of a rational regime for microwave heat treatment in different technologies. The main challenge in this direction is the correct estimation of the microwave chamber's performance efficiency. To resolve this task, it is necessary to continue the experiments and obtain empirical dependences of the chamber's performance efficiency for different kinds of treated raw materials.
Conclusions 1. The study of seeds in a microwave field revealed the occurrence of a bio-stimulation effect. It is shown that the conditions for obtaining an optimal bio-stimulation effect depend on the type of seeds and the duration of treatment. Thus, with a 90-second treatment we observed improvement in the laboratory germination rate and germination energy of wheat, soya, and corn. In this case, the rate of heating under optimal regimes is different: for wheat, 0.15 K/s; for corn, 0.14 K/s; for soya, 0.17 K/s. To calculate a threshold time for seed exposure to a microwave field, we propose a procedure that determines the heating period until an isochoric process is reached in the plant cell. Exceeding this time would result in the destruction of cell walls and suppression of seed growth. 2. An effective sterilizing effect of microwave heat treatment of damp straw was found, which involves destruction of competitive spores and improvement of conditions for the germination of cultivated wood-destroying fungi. We established an optimal mode of microwave treatment of damp straw for the preparation of substrates. For the mass of 0.4 kg, the optimal treatment duration is 140 s at an output power of the magnetron of 800 W. 3. The study into grain drying in a microwave field, which we conducted, revealed the occurrence of effects of cascade pressure rise in the layer. Under these conditions, the temperature exceeds 70 °C with a layer thickness of 0.1 m; in this case, steam release from the side surface and bottom is difficult. Under the same conditions the drying becomes extremely uneven; the moisture content of the lower layer may exceed the initial value. The drying of the middle layer proceeds most intensively. 4. We investigated the energy efficiency of conversion of microwave energy into the internal energy of a material depending on its type, the loading volume, and the moisture content. It was established that in order to dry grain crops with an initial moisture content of 20 %, the microwave chamber's performance efficiency does not exceed 67 %. When heating water, the microwave chamber's performance efficiency can reach 90 %. A dependence is proposed to calculate the value of microwave energy absorbed by the assigned volume of the treated material. In this case, it is necessary to have data on the microwave chamber's performance efficiency, which are derived experimentally. Fig. 3. Kinetics of wheat grain drying in a microwave field at different loading mass: a - change in the moisture content; b - temperature change. Fig. 4. Kinetics of the layer-wise drying of buckwheat in a microwave field: a - change in the moisture content; b - temperature change; 1 - upper layer; 2 - middle layer; 3 - bottom layer. Fig. 5. Dependence of the microwave chamber's performance efficiency on the mass of water at P = 800 W. Table 1. Effect of MW EMF on seed germination: 1 - wheat, the variety of Odessa-267; 2 - ordinary soy, the variety of Hadzhibey; 3 - corn, the variety of Odessa-10. Table 2. The nature of the substrate fouling with mycelium at different exposure to a microwave field. Initial humidity is W = 73 %.
7,624.4
2017-11-14T00:00:00.000
[ "Physics" ]
A convolutional neural network-based system to prevent patient misidentification in FDG-PET examinations Patient misidentification in imaging examinations has become a serious problem in clinical settings. Such misidentification could be prevented if patient characteristics such as sex, age, and body weight could be predicted based on an image of the patient, with an alert issued when a mismatch between the predicted and actual patient characteristic is detected. Here, we tested a simple convolutional neural network (CNN)-based system that predicts patient sex from FDG PET-CT images. This retrospective study included 6,462 consecutive patients who underwent whole-body FDG PET-CT at our institute. The CNN system was used for classifying these patients by sex. Seventy percent of the randomly selected images were used to train and validate the system; the remaining 30% were used for testing. The training process was repeated five times to calculate the system’s accuracy. When images for the testing were given to the learned CNN model, the sex of 99% of the patients was correctly categorized. We then performed an image-masking simulation to investigate the body parts that are significant for patient classification. The image-masking simulation indicated the pelvic region as the most important feature for classification. Finally, we showed that the system was also able to predict age and body weight. Our findings demonstrate that a CNN-based system would be effective to predict the sex of patients, with or without age and body weight prediction, and thereby prevent patient misidentification in clinical settings. Patients are sometimes misidentified during imaging examinations. For example, the wrong patient is scanned and/or the wrong images are registered on a picture archiving and communication system (PACS), sometimes leading to severe consequences 1,2 . In some clinical settings, no formal check is conducted to determine whether the obtained images are matched to the correct patient. Various efforts have been made to prevent patient misidentification; for example, many hospitals require that patients wear a wristband with identifying information. This method has significantly reduced the rate of misidentification accidents, but it cannot be applied to outpatients or emergency situations 3 . There remains a demand for a low-cost automated system that can correctly match patients and their images. Image analyses using a convolutional neural network (CNN), a type of machine-learning algorithm, are gaining attention as an important application of artificial intelligence (AI) to medical imaging [4][5][6][7] . CNNs are a class of deep learning techniques that is considered applicable to image analyses because they recognize complex visual patterns in a manner similar to the processes of human perception 8 . Thus, in a study using a CNN, tuberculosis was automatically detected on chest radiographs 9 . In another report, a CNN enabled brain tumor segmentation from magnetic resonance images 10 . Deep learning with a CNN showed high diagnostic performance in the differentiation of liver masses by dynamic contrast agent-enhanced computed tomography 11 . CNNs have also been successfully applied for the detection of lesions and prediction of treatment response by PET [12][13][14] . 
The rate of misidentification accidents could be significantly reduced if AI could predict patient characteristics (e.g., sex, age, and body weight) automatically from a PET-CT or other image alone; this output could then be compared with the registered patient information, with an alert issued when a mismatch between the predicted and actual characteristics is detected. Image acquisition and reconstruction. All clinical PET-CT studies were performed with either Scanner 1 or Scanner 2. All patients fasted for ≥6 hours before the injection of FDG (approx. 4 MBq/kg), and the emission scanning was initiated 60 min post-injection. For Scanner 1, the transaxial and axial fields of view were 68.4 cm and 21.6 cm, respectively. For Scanner 2, the transaxial and axial fields of view were 57.6 cm and 18.0 cm. A 3-min emission scan in 3D mode was performed for each bed position. Attenuation was corrected with X-CT images acquired without contrast media. Images were reconstructed with an iterative method integrated with (Scanner 1) or without (Scanner 2) a point spread function. The reconstructed images had a matrix size of 168 × 168 with a voxel size of 4.1 × 4.1 × 2.0 mm for Scanner 1, and a matrix size of 144 × 144 with a voxel size of 4.0 × 4.0 × 4.0 mm for Scanner 2. Maximum intensity projection (MIP) images (matrix size 64 × 64) were generated by linear interpolation. In this study, CT images were used only for attenuation correction, not for classification. Convolutional neural network (CNN). A neural network is a computational system that simulates the neurons of the brain. Every neural network has input, hidden, and output layers. Each layer has a structure in which multiple nodes are connected by edges. A "deep neural network" is a network in which multiple layers are used for the hidden layer. Machine learning using a deep neural network is called "deep learning." A CNN is a type of deep neural network that has been proven to be highly efficient in image recognition. A CNN does not require predefined image features. In this study, we proposed the use of a CNN to predict the sex of patients from an FDG PET-CT image. Architectures. In this study, we designed a CNN architecture to predict patient sex from FDG PET-CT images. Here we provide details on the CNN architecture and associated techniques used in this study. The detailed architecture is shown in Fig. 1a. Each neuron in a layer is connected to the corresponding neurons in the previous layer. The architecture of the CNN used in the present study contained four convolutional layers. This network also applied a ReLU function, local response normalization, and softmax layers. The softmax function is defined as softmax(x_i) = exp(x_i) / Σ_j exp(x_j), where x_i is the output of neuron i (i = 1, 2, …, n; n being the number of neurons in the layer). An input image is presented to the first layer, i.e., "Conv1" of Fig. 1a. The number of neurons in the first layer is equal to the number of pixels in the input gray-scaled image. There are two types of information processing that are applied iteratively: convolution and pooling. The convolution process works as a filter that extracts features from images or data in the previous layer. There are many filters in the convolution that are applied simultaneously. The parameters of the filters, which define the feature to be extracted, are adjusted by learning algorithms. The size of the filters is smaller than that of the layer, so the filter is repeatedly applied within a layer. The pooling process selects the strongest activated value for a feature in a local area that is extracted in the convolution.
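A compact sketch of a classifier of this kind is given below. It is a minimal PyTorch illustration, not the authors' exact configuration: the channel counts, kernel sizes, and pooling choices are assumptions, and only the overall structure (four convolutional layers, ReLU, local response normalization, and a softmax-style output over two classes) follows the description above.

```python
# Minimal sketch of a four-convolutional-layer sex classifier for 64x64 gray MIP images.
# Channel counts, kernel sizes, and pooling are illustrative assumptions.
import torch
import torch.nn as nn

class SexClassifierCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # Conv1 on a 64x64 MIP image
            nn.ReLU(),
            nn.LocalResponseNorm(size=5),
            nn.MaxPool2d(2),                               # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # Conv2
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),   # Conv3
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16 -> 8
            nn.Conv2d(64, 64, kernel_size=3, padding=1),   # Conv4
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                               # dropout, as used by the authors
            nn.Linear(64 * 4 * 4, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SexClassifierCNN()
logits = model(torch.randn(8, 1, 64, 64))        # a batch of 8 MIP images
probabilities = torch.softmax(logits, dim=1)     # softmax turns outputs into class probabilities
```

During training, the softmax is typically folded into a cross-entropy loss (e.g., nn.CrossEntropyLoss), and for the age and body-weight regressions mentioned above the final layer would be replaced by a single linear output.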
Through pooling, even if the image is shifted slightly, the classification results are not affected, and the size of the layer is reduced by one-quarter. For the final layer, all of the neurons in one layer are connected to all of the neurons in the previous layer. The number of neurons in the final layer is equivalent to the number of class labels to be recognized. By using the softmax function, the output of the CNN can be represented as a probability. In the regression models to predict age and weight, a linear function was used for the final layer. Data augmentation. In this research, the number of images used for learning processes was increased 5-fold by image augmentation processing such as rotation, enlargement/reduction, parallel movement, and noise addition. Note that the original images used for the test were not subjected to such an augmentation process. Model training and testing. In the model training phase, we used "early stopping" and "dropout" to prevent overfitting. Early stopping is a function used to monitor the loss function of training and validation and to stop the learning before it falls into excessive learning [15][16][17] . Early stopping and dropout have been adopted in various machine-learning methods 18 . In the model test phase, we tested both an image-based method and a patient-based method. Each patient-based diagnosis was determined with the use of MIP images based on majority rule (19 MIP images for Scanner 1, and 36 MIP images for Scanner 2). Experiment 1 (Overall). All patients examined with the two scanners were mixed, and 4,462 (70%) randomly selected cases were extracted and learned as training data. After that, the remaining 2,000 cases (30%) were extracted and tested as test data. The process is shown in Fig. 1b. We repeated the process five times to calculate the accuracy. Experiment 2 (Scanner 1 for training and Scanner 2 for testing). We used all of the cases of Scanner 1 for training and all of the cases of Scanner 2 for testing. Experiment 3 (Scanner 2 for training and Scanner 1 for testing). Inversely, we used all of the cases of Scanner 2 for training and all of the cases of Scanner 1 for testing. Experiment 4 (Masking). To specify the important part of the image, we conducted a "mask" experiment. As shown in Fig. 2, images (mask images) were created to cover a part of the original image. After masking, training and testing were performed under the same conditions as in Experiment 1. Six different masks were employed, respectively covering the (1) head, (2) chest, (3) abdomen, (4) pelvis, (5) upper body (=(1) +(2)), and (6) lower body (=(3) +(4)). The average value of the entire image was used as the pixel value to fill each region. Each mask location was determined based on a typical image of a patient with average height and weight, and then applied to all of the other patients' images. The upper body was set as the head and chest, and the lower body was set as the abdomen and pelvis. Results We retrospectively analyzed the cases of 6,462 patients (3,623 males [56%] and 2,839 females [44%]) who underwent FDG PET-CT imaging between January 2015 and August 2017 for diagnosis of various cancers at our institution. A total of 137,500 MIP images were used, and the male and female datasets consisted of 77,000 and 60,500 images, respectively. The results of Experiments 1 to 3 are summarized in Fig. 3. Experiment 1 (Overall). The model was trained for 5 to 10 epochs by the early stopping algorithm. The CNN process spent approx. 
5 min for training each fold dataset and <10 seconds per patient for prediction. The accuracy reached 98.9 ± 0.002% for the training dataset. When images for the testing (which had not been used for the training) were given to the trained model, the accuracy was 98.2 ± 1.3% for the image-based classification. For the patient-based classification, the patient sex was predicted from MIP images based on majority rule. The overall accuracy was 99.6 ± 0.5% for the patient-based classification. The accuracy values for the identification of "male" and "female" in the image-based classification were almost the same: 98.2 ± 1.2% for male, and 98.3 ± 1.4% for female prediction. Figure 4 provides representative images for which the patient sex was incorrectly predicted. Experiment 2 (Scanner 1 for training and Scanner 2 for testing). When the dataset of Scanner 1 was used for training (i.e., Scanner 2 was not used for training), a total of 5,641 cases were provided. The model was trained for 8 epochs by the early stopping algorithm. The CNN process spent 2 min for training each fold dataset and <10 seconds per patient for prediction. The accuracy reached 99.4% for the training dataset. When images of Scanner 2 for testing (which were not used in the training) were given to the learned model, the accuracy was 93.0% for the image-based classification. For the patient-based classification, the patient sex was predicted from MIP images based on the majority rule. The overall accuracy was 95.3% for the patient-based classification. The accuracy values for the male and female image-based classification were 95.9% for male and 90.2% for female. Experiment 3 (Scanner 2 for training and Scanner 1 for testing). When the dataset of Scanner 2 was used for training, a total of 821 cases were provided. The model was trained for 7 epochs by the early stopping algorithm. The CNN process spent approx. 2 min for training each fold dataset and <10 seconds per patient for prediction. The accuracy reached 99.3% for the training dataset. When images of Scanner 1 for testing (which had not been used for the training) were given to the learned model, the accuracy was 93.2% for the image-based classification. For the patient-based classification, the patient sex was predicted from the MIP images based on majority rule. The overall accuracy was 94.6% for the patient-based classification. The accuracy values for the male and female image-based classification were 94.1% for male and 92.4% for female. Experiment 4 (Masking). To identify the part of the image by which the CNN predicted sex, we performed masking experiments. The results are summarized in Table 1. When the lower body (especially the pelvic area) was masked, the accuracy was significantly degraded. Female patients were more frequently misidentified. When the pelvic area was masked, the accuracies for male and female patients were approx. 86% and approx. 57%, respectively. When other body parts were masked, the accuracy was less degraded. Experiment 5 (Grad-CAM). We further employed Grad-CAM to identify the part of the image the CNN paid attention to. Typical examples are shown in Fig. 5. For most cases, we observed that the chest of men and the pelvic region of women were highlighted.
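A minimal sketch of the masking operation used in Experiment 4 is shown below. The rectangular region is a placeholder: the actual mask locations were defined on a typical reference patient, as described in the Methods, and are not reproduced here.

```python
# Sketch of region masking for the occlusion experiment: the selected rows of a
# 64x64 MIP image are replaced by the mean pixel value of the whole image, as
# described in the Methods. The row range standing in for the "pelvis" region is
# an illustrative assumption.
import numpy as np

def mask_region(mip: np.ndarray, row_start: int, row_stop: int) -> np.ndarray:
    """Return a copy of `mip` with rows [row_start, row_stop) filled by the image mean."""
    masked = mip.copy()
    masked[row_start:row_stop, :] = mip.mean()
    return masked

mip_image = np.random.rand(64, 64)               # placeholder for a real MIP image
pelvis_masked = mask_region(mip_image, 40, 52)   # assumed row range for the pelvic area
```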
Experiment 6 (ResNet). To test a more complicated neural network, we employed ResNet. The model was trained for 25 epochs. The CNN process spent approx. 8 hours on the training dataset and <10 seconds per patient for prediction. The accuracy reached 99.8% for the training dataset. When images for the testing (which had not been used for the training) were given to the trained model, the accuracy was 99.8% for the image-based classification. For the patient-based classification, the patient sex was predicted from MIP images based on the majority rule. The overall accuracy was 99.9% for the patient-based classification. The accuracies for the identification of male and female in the image-based classification were 99.7% and 99.9%, respectively. Experiment 7 (Age prediction). Age was predicted using a regression model. The model was trained for 50 epochs. When images for the testing (which were not used for the training) were given to the trained model, 83.2% of patients were accurately predicted with the absolute error being smaller than 5 years. Also, 97% were predicted within ±10 years. Figure 6a,b are mixing matrices with age being divided into 18 steps (0-10, 11-15 … 86-90, >91 years old). Experiment 8 (Body weight prediction). Body weight was predicted using a regression model as well. The model was trained for 50 epochs. When images for the testing (which had not been used for the training) were given to the trained model, 96.1% of patients were accurately predicted, with the absolute error being smaller than 5 kg. Also, 98% were predicted within ±6 kg. Figures 4d and 6c are mixing matrices with body weight being divided into 14 steps (0-30, 31-35, 36-40 … 86-90, >91 kg). Discussion In this research using a CNN, approx. 98.2% of MIP images of FDG PET were correctly categorized by sex, and the sexes of approx. 99.6% of the patients were correctly categorized. These data suggest that a CNN can predict the patient sex from MIP images of FDG PET. An additional alert system that reveals patient sex mismatches and informs medical staff would help prevent misidentification accidents. To further reveal the characteristics of the current CNN, we conducted two additional experiments (Experiments 2 and 3) to evaluate scanner effects. In both experiments, images from different scanners were used for training and testing, respectively. The accuracy was slightly lower in Experiments 2 and 3 compared to that in Experiment 1. These results suggest that scanner-dependent image quality (e.g., spatial resolution, noise level, matrix size), in addition to the different numbers of patients scanned by each scanner, may affect the performance of the CNN. The accuracy was sufficiently high when two types of PET scanners were used in combination (Experiment 1). These results indicate that the CNN developed in this study could acquire versatility by learning versatile images, suggesting that it is important to use as many scanners as possible to commercialize a CNN for practical applications. On what part of the image did the CNN focus to distinguish between males and females? To address this question, we conducted additional simulations, Experiments 4 and 5. In Experiment 4, comparing image masks of several regions, we found that the accuracy was lowest when the pelvic area was masked, indicating, not surprisingly, that the most important features to discriminate patients by sex lie in the pelvic area. This result is intuitive, of course, although various other parts of the body also show sex differences.
For example, brain metabolism may differ by sex 19 , though in our model the CNN was unable to use this information, since the images of the brain were all saturated-i.e., blackened-due to the SUV window setting of 0-10. The breasts/chest may be an important locus of sex information, but there is a large variation in the degree of FDG accumulation in this region among women, depending on age and estrogenic status. In Experiment 5, Grad-CAM typically highlighted the chest region for men and the pelvic region for women. These results, together with the results of Experiment 4, suggested that both the chest and pelvic regions can have important information for identifying men. In contrast, for women, the pelvic region is important, but the chest region is less important, possibly because women have more inter-individual variability in the chest region (e.g., size and metabolism of the breasts) than in the pelvic region. We investigated some cases in which the patient's sex was incorrectly predicted in Experiment 1. One patient (Fig. 4a) was male but so slender that he might have been confused for a female. Another patient (Fig. 4b) was female; she was relatively obese, and she had head-and-neck cancer, which is a male-dominant disease. These factors might have led to the misprediction by the CNN. Experiment 6 was carried out to determine whether the accuracy might be improved by using a more complicated network such as GoogLeNet 20 or ResNet 21 . The results showed that these alternative networks took more time to train compared with the simple network used in Experiments 1 to 3, but they required roughly the same amount of time as our network to make their diagnoses. Thus, while the training process required approximately 100 times more time in Experiment 6 than in Experiment 1, the predictions in Experiment 1 and Experiment 6 both took less than 10 seconds per patient. The accuracy improved from 98.2% to 99.8%, suggesting that an improvement in learning accuracy can be expected by applying a complex network. We considered that patient misidentification accidents could be further prevented if not only patient sex, but also patient age and weight could be predicted from images and compared against known data. In Experiments 7 and 8, we tested the ability of our regression model to predict age and weight. The results showed that both age and body weight were appropriately estimated. Although the system was not always able to deliver precise predictions within 1 year of age or 1 kg of weight, the prediction system using the combination of sex, age, and body weight would contribute to a reduction in misidentification accidents. There are two problems associated with machine learning or deep learning: underfitting and overfitting. Both can be represented by a loss curve against epochs 15 . When underfitting occurs, the loss curve continues to decline for both training and validation. In overfitting, the loss curve of training approaches 100%, whereas the loss curve of validation moves away from 100%. We detected no evidence of underfitting or overfitting in the present study, as the loss curves at training and validation (Fig. 7) shifted in the same way. The computational complexity becomes enormous when a CNN directly learns with 3D images [22][23][24] . Another approach is to let a CNN learn slice images instead of MIP images. However, there are many slices that do not contain sex information. 
In contrast, MIP seems to be advantageous for a CNN because all MIP images may contain sex information somewhere in the image. For example, in a typical male case, of all 545 slices from the head to the thigh, only 13 (2%) slices covered the testes, although all the MIP images showed the testes. In a typical female case, of all 478 slices from the head to the thigh, only 36 (8%) slices covered the breasts, although all the MIP images showed the breasts. MIP images can also be directly generated from PET, in contrast to CT, for which bed removal is necessary before MIP generation. In addition, if a single MIP image from a single angle is fed to the network, the patient sex cannot always be predicted accurately. In the current study, however, MIP images generated from various angles were given to the neural network. This might have improved diagnostic accuracy. Deep learning has the potential to automate various tasks. Indeed, this technology has been used not only for diagnosis but also for image generation, and especially for image quality improvement and reduction of the radiation dose [25][26][27]. Combining our current network and others will contribute to safe medical imaging by reducing misidentification incidents and radiation exposure and by preventing misdiagnosis. This study has some limitations. First, because our model was generated for whole-body images, spot images such as pelvic areas could not be recognized. Further studies will be needed, such as investigations of the potential improvement to the training data by cropping different areas. Second, while we investigated the use of images from 2 different scanners, there are many more scanners currently used in the world. In order to cope with various multicenter scanners, the training dataset should include images from as many scanners as possible. Finally, it is not yet known how the accuracy of a CNN system changes if a tumor exists in the patient's pelvis. Conclusion Our findings indicate that the CNN-based prediction system successfully classified patients by sex and by age and body weight categories. Such a system may be useful to prevent patient misidentification accidents in clinical settings.
5,205.2
2019-05-10T00:00:00.000
[ "Medicine", "Computer Science" ]
Breakdown of QCD factorization in hard diffraction Factorization of short- and long-distance interactions is severely broken in hard diffractive hadronic collisions. Interaction with the spectator partons leads to an interplay between soft and hard scales, which results in a leading twist behavior of the cross section, contrary to the higher twist predicted by factorization. This feature is explicitly demonstrated for diffractive radiation of abelian (Drell-Yan, gauge bosons, Higgs) and non-abelian (heavy flavors) particles. 1 QCD factorization in diffraction QCD factorization in inclusive processes is nowadays one of the most powerful and frequently used theoretical tools [1]. In spite of the lack of understanding of the soft interaction dynamics, the contributions of the soft long-distance and hard short-distance interactions factorise. Making a plausible (not proven) assumption about universality of the former, one can measure it with electro-weak hard probes (DIS, Drell-Yan process) and apply it to hard hadronic processes. Although it is tempting to extend this factorization scheme to diffractive, large rapidity gap processes, it turns out to be heavily broken [2,3], as is demonstrated below. Ingelman-Schlein picture of diffraction [4]. It looks natural that, in analogy with DIS on a hadronic target, DIS on the Pomeron probes its PDF (parton distribution function), as is illustrated in Fig. 1 (Figure 1. DIS on a hadron target (left) and on the Pomeron, treated as a target (right)). Once the parton densities in the Pomeron were known, one could predict any hard diffractive hadronic reaction. The Good-Walker mechanism of diffraction [6-8]. According to this quantum mechanical treatment of diffraction, the diffractive amplitude is given by the difference between the elastic amplitudes of different Fock components in the projectile particle. In the dipole representation hard diffraction of a hadron comes from the difference between elastic amplitudes of hadronic states with and without a hard fluctuation, $A_{\mathrm{diff}} \propto \sigma_{\bar qq}(R+r) - \sigma_{\bar qq}(R) \propto rR \sim 1/Q$, (1) where $\sigma_{\bar qq}(R)$ is the total dipole-nucleon cross section [9]; R characterises the hadronic size, while the small $r \sim 1/Q \ll R$ is related to the hard process [10,11]. Apparently such a mild Q-dependence contradicts the factorization prediction, based on the DIS relation, $A_{\mathrm{diff}} \propto \sigma_{\bar qq}(r) \propto r^2 \sim 1/Q^2$, (2) which is a higher twist effect. 2 Drell-Yan reaction: annihilation or bremsstrahlung? The parton model is not Lorentz invariant, and the interpretation of hard reactions varies with the reference frame. E.g. DIS is treated as a probe for the proton structure in the Bjorken frame, but looks differently in the target rest frame, as interaction of hadronic components of the photon. Only observables are Lorentz invariant. The Drell-Yan reaction in the target rest frame looks like radiation of a heavy photon (or Z, W), rather than $q\bar q$ annihilation [12,13], as is illustrated in Fig. 2 (Figure 2. Radiation of a heavy photon, or gauge bosons, in the target rest frame corresponds to $\bar qq$ annihilation in the boson rest frame). The cross section, expressed via the dipoles [12,13], looks similar to DIS, $\frac{d\sigma^{DY}_{inc}(qp\to\gamma^* X)}{d\alpha\, dM^2} = \int d^2 r\, \left|\Psi_{q\gamma^*}(\vec r,\alpha)\right|^2 \sigma(\alpha r, x_2)$, (3) where $\Psi_{q\gamma^*}(\vec r,\alpha)$ is the distribution function for the $|\gamma^* q\rangle$ Fock component of the quark; $\alpha = p^+_{\gamma^*}/p^+_q$ is the fractional light-cone momentum of the heavy photon. In DY diffraction the Ingelman-Schlein factorization is broken.
Indeed, diffractive radiation of an abelian particle vanishes in the forward direction [13], due to the cancellation of the graphs a, b and c depicted in Fig. 3, $\left.\frac{d\sigma^{DY}_{inc}(qp\to\gamma^* q p)}{d\alpha\, dM^2\, d^2 p_T}\right|_{p_T=0} = 0$. (4) In both Fock components of the quark, $|q\rangle$ and $|q\gamma^*\rangle$, only the quark interacts, so they interact equally and, according to the Good-Walker picture, cancel in the forward diffractive amplitude. This conclusion holds for any abelian diffractive radiation of $\gamma^*$, W, Z bosons, or Higgs. Figure 3. Feynman graphs for diffractive radiation of a heavy photon by a quark. Diffractive DIS is dominated by soft interactions [2,14]. On the contrary, diffractive Drell-Yan gets the main contribution from the interplay of soft and hard scales [10,11] (see Eq. (2)). The saturated shape of the dipole cross section, $\sigma(R) \propto 1 - \exp(-R^2/R_0^2)$, leads to the unusual features of the diffractive Drell-Yan cross section (compare with (2)), $\frac{\sigma^{DY}_{sd}}{\sigma^{DY}_{incl}} \propto \left[\sigma(R+r)-\sigma(R)\right]^2 \propto \frac{\exp(-2R^2/R_0^2)}{R_0}$. (5) As a result, the fractional diffractive Drell-Yan cross section is steeply falling with energy, but rises with the scale, because of saturation, as is shown in Fig. 4.
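The contrast between Eq. (1) and Eq. (2) can be made concrete with a toy numerical model. The sketch below is purely illustrative and not from the paper: it assumes a saturated dipole cross section of the form used above with arbitrary units, and compares how the hadron-level Good-Walker difference and the isolated small-dipole cross section shrink as the hard size r ~ 1/Q decreases.

```python
# Toy illustration of leading-twist (~r) vs higher-twist (~r^2) scaling.
# sigma0, R0 and R are arbitrary assumed values, not taken from the paper.
import numpy as np

sigma0, R0, R = 30.0, 1.0, 0.8   # arbitrary units; R plays the role of the hadronic size

def sigma(x):
    """Saturated dipole cross section, sigma(x) = sigma0 * (1 - exp(-x^2/R0^2))."""
    return sigma0 * (1.0 - np.exp(-(x / R0) ** 2))

for r in (0.2, 0.1, 0.05):                    # r ~ 1/Q shrinks as the hard scale grows
    good_walker = sigma(R + r) - sigma(R)     # hadron-level difference, cf. Eq. (1)
    small_dipole = sigma(r)                   # isolated small dipole, cf. Eq. (2)
    print(f"r = {r:4.2f}:  difference = {good_walker:.3f}   sigma(r) = {small_dipole:.4f}")

# Halving r roughly halves the Good-Walker difference (leading twist, ~1/Q),
# while sigma(r) drops by about a factor of four (higher twist, ~1/Q^2).
```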
3 Diffractive Z and W production

Abelian diffractive radiation of any particle is described by the same Feynman graphs; only the couplings and spin structure are different [15]. In Fig. 4 (right panel) we present the results for the single diffractive cross sections of $Z^0$, $\gamma^*$ (diffractive DY) and $W^\pm$ boson production, differential in the di-lepton mass squared, $d\sigma_{sd}/dM^2$. The single diffractive process $pp \to Xp$ at large Feynman $x_F \to 1$ of the recoil proton is described by the triple-Regge graphs, as is illustrated in Fig. 5. The results for the fractional diffractive cross sections of Z and W production are compared with the CDF measurements in Fig. 6 (left panel).

Figure 6. Left: The diffractive-to-inclusive ratio vs dilepton invariant mass squared in comparison with the CDF measurements (Tevatron, 1.96 TeV, CDF data on W and Z production). Right: Cross section of diffractive production of heavy flavors in comparison with the CDF data for charm and beauty (see details in [16]).

4 Diffractive heavy flavor production

The dynamics of inclusive heavy flavor production can be classified as (i) bremsstrahlung (like in DY) and (ii) production mechanisms [16], in accordance with the Feynman graphs presented in Fig. 7. The bremsstrahlung and production amplitudes are expressed via the amplitudes $A_i$ corresponding to the graph numbering in Fig. 7 (the explicit expressions are given in [16]).

Figure 7. Feynman graphs contributing to inclusive production of a heavy quark pair.

For diffractive production one has to provide a colorless two-gluon exchange. Diffractive excitation of a quark turns out to be a higher twist effect, as is depicted in Fig. 8 (left), similarly to Drell-Yan. In this case interaction with the spectators again plays a crucial role, providing a leading twist contribution, as is shown in Fig. 8 (right) [16]. Numerically, the leading twist production mechanism is much larger than the bremsstrahlung mechanism. The leading twist behavior $1/m_Q^2$ of the diffractive cross section is confirmed by the CDF data, as is demonstrated in Fig. 6, right panel.

5 Diffractive Higgsstrahlung

Diffractive Higgsstrahlung is similar to diffractive radiation of DY pairs, Z and W, since in all cases the radiated particle does not participate in the interaction.
However, the Higgs decouples from light quarks due to the smallness of the coupling, so the cross section of Higgsstrahlung by light hadrons is small. Although light quarks do not radiate the Higgs directly, they can do it via production of heavy flavors. Therefore the mechanism is closely related to the non-abelian diffractive quark production presented in the previous section. The rapidity-dependent cross section of diffractive Higgs production, evaluated in [17], is plotted in Fig. 9, left. The cross section is rather small, below 1 fb.

Figure 9. Left: The differential cross section of single diffractive Higgs boson production in association with a heavy quark pair vs Higgs rapidity (see details in [17]). Right: The cross section of diffractive exclusive Higgs production vs its mass, for different intrinsic heavy flavors [18].

The Higgs boson can also be diffractively produced due to intrinsic heavy flavors in light hadrons. Exclusive Higgs production, $pp \to Hpp$, via coalescence of heavy quarks was evaluated in [18, 19]. The cross section of Higgs production was calculated fixing 1% of intrinsic charm in the proton, and assuming that heavier flavors scale as $1/m_Q^2$ [20]. At the Higgs mass of 125 GeV intrinsic bottom and top give comparable contributions, as is demonstrated in Fig. 9, right.

Summary

Factorization of short- and long-distance interactions is heavily broken in hard diffractive hadronic collisions. In particular, forward diffractive radiation of direct photons, Drell-Yan dileptons, and gauge bosons Z, W by a parton is forbidden. Nevertheless, a hadron can diffractively radiate in the forward direction due to the possibility of soft interaction with the spectators. This property of abelian radiation breaks down diffractive factorization, resulting in a leading twist dependence on the boson mass, $1/M^2$.

Non-abelian forward diffractive radiation of heavy flavors is permitted even for an isolated parton. However, interaction with the spectators provides the dominant contribution to the cross section. It comes from the interplay between large and small distances. The data confirm the leading twist behavior well.

Diffractive Higgsstrahlung is possible via a double-step process, through heavy quark production. Therefore, the main contribution comes from Higgs production in association with a heavy quark pair. Another important contribution to diffractive Higgs production comes from coalescence of intrinsic heavy quarks in the proton. For $M_H = 125$ GeV dominance of intrinsic bottom and top is expected.
2,885.4
2016-01-31T00:00:00.000
[ "Physics" ]
Impacts of Monetary Policy on Stock Market through Survey from Investors

Analyzing the impacts of monetary policy on the stock market is very important to investors. There are many papers studying this relationship, but studies based on investors are still limited. This paper was conducted by interviewing experts and stock investors in Vietnam. After obtaining the interview results, the authors applied multivariate methods (EFA, regression analysis) and reached the following outcomes: according to investors, the interest rate, required reserve ratio and exchange rate policies have impacts on the Vietnamese stock market, while the money supply policy has no influence on the market. At the same time, the interest rate has the strongest impact on the stock market, followed by the required reserve ratio and the exchange rate.

Introduction

The Vietnamese stock market was launched in 2000, first in Ho Chi Minh City, and in 2005 a stock trading center was established in Hanoi. Despite being a young market, the Vietnamese stock market has experienced periods of both strong development and severe recession. From 2005 to 2008 the stock index reached a record high of 1,137.69 points on March 12, 2007, then fell sharply to 245.74 points on February 24, 2009. Since that period, the market has grown slowly, fluctuating around 500 points (Vietstock, 2016).

Recently, there have been transactions worth tens of billions on the Stock Exchange daily in Vietnam. Profits from stocks are the main income of many young people and the accumulated pension of the old (Maskay, 2007). Changes (especially downturns) in the stock market create turmoil in investors' lives because the market directly relates to their main income. These changes are due to the impact of international market factors (Maskay, 2007) or the monetary (macroeconomic) policy of the state bank. Therefore, the study of the impacts of policies, especially monetary policy and information shocks, is considered an important key that helps investors make the right decisions.

There have been many studies on fluctuations of stock prices and monetary policy. Most of them confirmed that stock indices react sensitively to changes of monetary policy (Zare, Azali, & Habibullah, 2013). Stock investors always keep their eyes on the market's changes in general and the monetary policy of the state bank in particular, so that they can make right decisions which will bring benefits. Hence, studying the impacts of these factors on stock prices becomes vital in helping investors make investment decisions.

In Vietnam, nowadays there are some studies on macroeconomics or monetary policy (Ton & Nguyen, 2015); however, most of them focused on studying and analyzing macro data. There is no study which really researches the impact of monetary policy on the stock market through analyzing investor survey results. Therefore, this report proceeds to consider this relationship.
Research Design

The procedure for interviewing experts, forming the factors, and deriving the observed variables which have an impact on stock market prices in Vietnam is as follows. The first step is interviewing experts directly; these are stock investors with at least one year of experience. The second is making a list of about 20 investors to serve the interview process. During the interview process, the authors collected the views of the experts on the interview aspects. They conducted the interviews and determined the final information based on information saturation theory (Figure 1). By interviewing each person, the authors obtain some general information compared to other professionals. Interviews continue until three consecutive people give no new information compared to the previous ones; this is considered the point of information saturation. At that point, the authors stopped interviewing and filtered the information obtained from the interviewees.

Figure 1. Interviewing expert methods

Once the interviews finished, the authors proceeded to filter the information obtained from the experts and to form the factors and their aspects (the observed variables of each factor). They designed the preliminary questionnaire and performed a pilot investigation with about 100 questionnaires to review and edit the questionnaires appropriately, to find out the interviewees' ability in the survey, as well as to analyze the preliminary survey data. After designing the official questionnaires, the authors delivered about 200-500 survey forms to Vietnamese stock investors (the research object).

Sample Size

The studied population consists of stock investors in Vietnam. However, within this study, surveying the whole population is impossible. Therefore, the sample size is selected according to minimum-size rules to ensure reliability. For this research, the authors use a sample of 200 according to the principle of Comrey & Lee (1992).

Data Collection Method

The authors conducted the survey directly with stock investors at the Vietnam Stock Exchange. After recovering the questionnaires, the authors encoded the data and put it into SPSS for analysis.

Assessment of Scale's Reliability

The factors are formed from three or more different questions to ensure the initial conditions for creating the initially assumed factors. To check the reliability of the scale of these factors, the authors used Cronbach's alpha coefficients to measure the composite reliability (Saunders et al., 2007) and corrected item-total correlations to examine the relationship among the indicators in each factor. The criteria for assessing a reliable scale in research are a minimum Cronbach's alpha of 0.6 (Hoang & Chu, 2008) and a corrected item-total correlation coefficient of at least 0.3 (Nunnally & Burstein, 1994). The results of the scale reliability test from the research data are shown below.
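As a sketch of the reliability check described above, Cronbach's alpha and the corrected item-total correlations can be computed directly; the item names and the simulated responses below are hypothetical, standing in for the actual survey data:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-scale items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    return pd.Series({
        col: items[col].corr(items.drop(columns=col).sum(axis=1))
        for col in items.columns
    })

# Hypothetical interest-rate-policy items IR1..IR4 for 200 respondents.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))
ir = pd.DataFrame(
    np.clip(base + rng.integers(-1, 2, size=(200, 4)), 1, 5),
    columns=["IR1", "IR2", "IR3", "IR4"],
)
print(f"alpha = {cronbach_alpha(ir):.3f}")    # accept the scale if >= 0.6
print(corrected_item_total(ir).round(3))      # accept each item if >= 0.3
```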
Factor Analysis

After the factors are checked with Cronbach's alpha, the authors continue to put them into the EFA test. According to Hair et al. (2006), factor analysis helps researchers extract significant latent factors from a larger set of observed variables. Some standards applied when performing EFA are as follows: if the KMO value is greater than 0.5, factor analysis is appropriate, whereas if the KMO value is less than 0.5, factor analysis is not suitable for the data. The variance explained needs to be greater than 50% to ensure the explanatory power over the observations. For the scale to achieve convergent validity, according to Anderson & Gerbing (1988), the correlation coefficient and factor loading need to be greater than or equal to 0.5 on one factor. The method of principal components with component rotation is used to ensure the smallest number of factors, following Hoang & Chu (2008).

Regression Analysis

With the factors which emerged from the EFA, the authors conducted regression analysis to find out which monetary policy tools, among the many available, indeed affect the stock market according to the investors. The authors also examine which factors have a stronger impact on the stock market through the standardized beta coefficients.

Assessment of Scale's Reliability

Basically, all preliminary assessment results indicate the reliability of the scale factors: the Cronbach's alpha coefficients are greater than 0.6 and the corrected item-total correlation coefficients are greater than 0.3. These results clearly show that the factors are designed with internally consistent scales (Table 1).

Analysis of Factors for Independent Variables

The results of the factor analysis with 4 independent variables, after removing observed variables with factor loadings of less than 0.5, are as follows. The analysis results in Table 2 show that the KMO coefficient is 0.795, more than 0.5; Bartlett's test has a p-value of 0.000, less than 0.05; the total variance explained is 72.14%, more than 50%; the factor loadings are greater than 0.5; and the observed variables formed four factors. Thus, the standards for factor analysis are satisfied by the research data.

Analysis of Factors for Dependent Variables

In Table 3, the analysis results show that the KMO coefficient is 0.702, more than 0.5; Bartlett's test has a p-value of 0.000, less than 0.05; the variance explained is 71.46%, more than 50%; the factor loadings are greater than 0.5; and the observed variables formed one factor. Thus, the standards for factor analysis are satisfied by the research data. After obtaining the explored factors, the authors evaluated the relationships between the factors through correlation analysis and regression analysis.

Correlation Analysis

The correlation table shows that all variables are correlated in the same direction (the correlation coefficients are positive). In terms of the relationship with the dependent variable Y, the IR variable is most strongly correlated with Y (0.697), followed by the variable RR (0.603), and the variable most weakly correlated with Y is M (0.138) (Table 4). However, correlation alone does not assess the (one-way) regression impact of the independent variables on the dependent variable. Therefore, to clarify the impact of monetary policy on the stock market, the authors conducted a regression analysis.
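The EFA criteria above (KMO > 0.5, a significant Bartlett's test, variance explained > 50%, loadings >= 0.5) map directly onto a short script. This is a sketch assuming the third-party factor_analyzer package and a hypothetical input file; varimax is used as one common rotation choice, since the text does not name the rotation method:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo,
)

# Observed items for IR, RR, M, EX (column names hypothetical), one row per respondent.
survey = pd.read_csv("survey_items.csv")

chi2, p_value = calculate_bartlett_sphericity(survey)
kmo_per_item, kmo_model = calculate_kmo(survey)
print(f"Bartlett p = {p_value:.4f} (want < 0.05), KMO = {kmo_model:.3f} (want > 0.5)")

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(survey)

loadings = pd.DataFrame(fa.loadings_, index=survey.columns)
print(loadings[loadings.abs() > 0.5].round(3))   # keep items loading >= 0.5

_, _, cumulative = fa.get_factor_variance()
print(f"Total variance explained = {cumulative[-1]:.1%} (want > 50%)")
```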
Regression Analysis

Regression analysis indicates the impact of the independent variables on the dependent variable. Among the initially hypothesized independent variables, only some truly have an impact on Y (p-value less than 0.05 in the regression analysis). The variables with p-values larger than 0.05 are removed from the model because they have no impact on the dependent variable Y. Initially, the authors ran the regression with all variables under the Enter method (all variables taken into the model at the same time) and obtained the following results:

Discussion

Interest rates have a negative effect on the stock price. This shows that tightening monetary policy (increasing interest rates) makes the stock market decrease. In the case of high inflation, state banks tighten monetary policy by raising interest rates, which in the short term will not affect the stock market, but in the long term will negatively affect businesses, especially companies that use large amounts of bank loans for their business operations. The research result for the interest rate is compatible with previous studies such as Ali (2014).

The required reserve ratio policy also has a negative effect on the stock price. Increasing the required reserve ratio of banks limits the amount of funds exchanged between banks and outside businesses and individuals. In the short term, companies can still invest or obtain liquidity through external borrowing; however, in the longer term, increasing the required reserve ratio will cause many difficulties for enterprises' business activities.

The VND/USD exchange rate has a negative impact on the stock price. This shows that adjusting the exchange rate or devaluing the currency makes the stock market worse. The reason comes from an increase in import prices creating difficulties for enterprises; it causes domestic business activities to deteriorate and stock prices to decrease.

Conclusion and Recommendations for Further Research

Through analyzing the multivariate data, the research results showed which policy factors impact the stock price based on the evaluation of investors (the interest rate, exchange rate and required reserve ratio policies have an impact on Vietnamese stock prices; the money supply has no impact on the stock price). Among them, the interest rate policy has the strongest impact, next is the required reserve ratio, and the exchange rate has the lowest. The money supply factor appears to have no impact on the stock price.

The study is still limited by the number of samples. Therefore, the authors propose the following: future studies should extend the sample size to support more detailed and persuasive results. Besides, the authors would like to develop the research method further (confirmatory factor analysis, CFA) to measure the variables more rigorously.

Table 1. Results of the scale reliability test
Table 2. Results of the factor analysis for the independent variables
Table 3. Results of the factor analysis for the dependent variable
Table 4. Relationships between pairs of variables. * Correlation is significant at the 0.05 level (2-tailed). Notes. Y: Assessing the impact of monetary policy on the Vietnam stock market; IR: Interest rate policy; RR: Required reserve ratio; M: Money supply; EX: Exchange rate.
Table 5. Results of the initial regression
The standardized beta coefficients indicate the impact of the monetary policy factors on the stock market. The results show that the interest rate policy has the strongest impact on the stock market (0.461); next is the required reserve ratio (0.302), and the exchange rate policy has the weakest impact on the stock market.

Figure 2. Result of regression analysis
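A minimal sketch of this regression step in Python rather than SPSS (the variable names and input file are hypothetical): standardizing all variables makes the OLS coefficients directly comparable standardized betas, and the Enter method corresponds to fitting all predictors at once before dropping those with p > 0.05.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical factor scores from the EFA step: Y, IR, RR, M, EX.
data = pd.read_csv("factor_scores.csv")

# Standardize all variables so the coefficients are standardized betas.
z = (data - data.mean()) / data.std()

# Enter method: all independent variables in the model at once.
X = sm.add_constant(z[["IR", "RR", "M", "EX"]])
model = sm.OLS(z["Y"], X).fit()
print(model.summary())

# Drop predictors with p-value > 0.05 (e.g., money supply M) and refit.
keep = [v for v in ["IR", "RR", "M", "EX"] if model.pvalues[v] <= 0.05]
final = sm.OLS(z["Y"], sm.add_constant(z[keep])).fit()
print(final.params.round(3))   # standardized betas, e.g., IR > RR > EX
```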
2,703.4
2016-05-23T00:00:00.000
[ "Economics", "Business" ]
The Effect of Profitability, Leverage, Liquidity, Size, and Company Growth on the Dividend Payout Ratio in the Indonesian Capital Market 2013-2018

The capital market is a place for investors to invest funds in the form of stocks or other valuable assets with the aim of maximizing the wealth obtained through dividends. Dividend policy is the decision whether the profit earned by the company will be distributed to shareholders as dividends or retained to finance future investments (Sartono, 2001). Dividend policy in this study uses the dividend payout ratio (DPR). The dividend payout ratio is the percentage of net profit after tax that investors receive in the form of dividends (Sudana, 2011). Companies that decide to distribute profits to investors as dividends reduce the amount of retained earnings, which in turn reduces the sources of funds that the company can use to develop itself.

Not all of the companies listed on the IDX (Indonesia Stock Exchange) distribute dividends to investors, either in the form of cash dividends or stock dividends. This is due to the companies' considerations in making dividend payment policies toward investors or shareholders, as evidenced by the data obtained from the annual reports of the Indonesia Stock Exchange (IDX) for the period 2013-2018; some companies do not pay dividends at all. Table 1.1 presents the number of companies that distributed dividends across eight sectors on the IDX in 2013-2018. The table shows that in 2013, of all existing sectors, only 43 companies distributed dividends; in 2014 the number increased to 106; in 2015 it continued to increase, reaching 141, and kept increasing until 2017. However, in 2018 the number of companies distributing dividends decreased again to only 32.

This research is also motivated by a research gap between the results of previous studies, summarized briefly in Table 2. The results presented in the table indicate that several factors influence the dividend payout ratio: profitability, leverage, liquidity, size, and company growth. However, these studies still provide different results, so the factors need to be investigated further. Based on the description above, the researchers are interested in re-examining the effect of profitability, leverage, liquidity, size, and company growth on the dividend payout ratio in the Indonesian capital market in 2013-2018. This research is expected to be useful for several parties: for investors, as a consideration in making investments, and for companies, as a consideration in determining dividend payments.

Dividend Policy

An optimal dividend policy is a policy that balances current dividends and future growth so as to maximize the company's share price. If a company increases the percentage of income paid to shareholders as cash dividends (the dividend payout ratio), the share price will increase.
This is because the dividend policy gives investors the impression that the company has good prospects in the future (Sandy & Asyik, 2013). The size of the dividends paid to shareholders depends on the dividend policy of each company; not all companies pay dividends to investors, because some want to retain the company's profits for other purposes.

Profitability is a ratio measuring the company's ability to earn profits in relation to sales, total assets, and its own capital. In this study, profitability uses the return on assets (ROA) proxy. The higher the company's profitability, the greater the profit generated by the company that can be distributed as dividends to investors (Ginting, 2018). Previous research has shown that profitability has a significant positive effect on the dividend payout ratio, but other studies have shown that profitability has no effect on the dividend payout ratio.

Leverage is the company's ability to fulfill its obligations, indicated by how much of its debt payments are covered by its own capital. This study uses the debt to equity ratio (DER) proxy. This variable is a benchmark for the company in distributing dividends to investors. The higher the company's leverage, the lower its ability to pay dividends, because the assets owned by the company are financed by debt, so the net profit earned decreases as it is used to pay the increasing debt (Permana & Hidayati, 2016). Previous studies have shown a significant negative effect on the dividend payout ratio, but other studies have shown that leverage has no effect on the dividend payout ratio.

The liquidity of the company has a big influence on company investment. The liquidity of a company shows its ability to fund operations and pay off short-term obligations. Companies with good liquidity are likely to have better dividend payments (Wijayantini et al., 2019). According to Afiezan et al. (2020), a liquid company has large funds to pay all of its obligations. The more liquid the company, the more internal funds it has to meet its operational needs. In this study, the liquidity variable is measured by the current ratio (CR).

Company size is the value of the size of the company as indicated by total assets, total sales, total profit, tax expense and others, which affects the company's social performance and the achievement of company goals (Brigham & Houston, 2010). The greater the size of the company, the better it can guarantee that it will distribute profits to company owners in the form of dividends or cash.

Company Growth

Company growth describes the percentage growth of company accounts from year to year. This ratio shows the percentage increase in sales this year compared to last year; the higher this ratio, the better (Harahap, 2002). According to Yeniatie & Destriana (2010), company growth is the rate of change in total assets from year to year; meanwhile, according to Steven & Lina (2011), company growth is a description of the company's development. Company growth is measured using changes in total income: the difference between the company's total revenue in the current period and that of the previous period, divided by the total revenue of the previous period, i.e. Growth = (TR_t - TR_{t-1}) / TR_{t-1}.

Hypothesis Test

Based on the theoretical review, the hypotheses of this study are that profitability, leverage, liquidity, size, and company growth each affect the dividend payout ratio.

III. Research Methods

This research is a quantitative study using the multiple linear regression analysis method.
The sample was taken using a purposive sampling technique. The study used secondary data in the form of annual financial reports and the Indonesian Capital Market Directory (ICMD) for eight sectors listed on the Indonesia Stock Exchange for the period 2013-2018, obtained from www.idx.co.id. Systematically, the multiple linear regression model, with the dividend payout ratio as the dependent variable and the five proxies as the independent variables, is as follows:

DPR = α + β1·ROA + β2·DER + β3·CR + β4·SIZE + β5·GROWTH + ε

The probability results are said to be significant if the significance value is below the 5% level. From the corresponding table it can be concluded that there is no heteroscedasticity (homoscedasticity holds), because the significance coefficients are greater than the specified significance level of 5% (0.05). Collinearity statistics for the variables are also reported.

Hypothesis Test

a. Simultaneous Significance Test (F statistic)

The results of the F statistical test can be seen in the corresponding table. Based on the results of the simultaneous test, the calculated F value was 2.144 with a significance level of 0.074. Since the significance value is greater than 0.05, it can be concluded that the variables of profitability, leverage, liquidity, size, and company growth do not simultaneously affect the dividend payout ratio.

b. Partial Significance Test

The results of the t statistical tests can be seen in the corresponding table. Based on Table 8, it can be concluded as follows:

1) The effect of profitability on the dividend payout ratio. Based on the partial test results, the significance value of profitability, proxied by return on assets (ROA), is 0.461 > 0.05. This shows that profitability has no effect on the dividend payout ratio in the Indonesian capital market for the 2013-2018 period.

2) The effect of leverage on the dividend payout ratio. Based on the partial test results, the significance value of leverage, proxied by the debt to equity ratio (DER), is 0.925 > 0.05. This shows that leverage has no effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018.

3) The effect of liquidity on the dividend payout ratio. Based on the partial test results, the significance value of liquidity, proxied by the current ratio (CR), is 0.936 > 0.05, indicating that liquidity has no effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018.

4) The effect of company size on the dividend payout ratio. Based on the partial test results, the significance value of company size, proxied by Size, is 0.092 > 0.05. This shows that company size has no effect on the dividend payout ratio.

5) The effect of company growth on the dividend payout ratio. Based on the partial test results, the significance value of growth is 0.036 < 0.05, which means that company growth has an effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018.

c. Coefficient of Determination (R²)

The results of the coefficient of determination test can be seen in the corresponding table. The calculation of the regression yielded an R Square value of 0.166. This means that the independent variables, consisting of profitability, leverage, liquidity, size, and company growth, explain 16.6% of the variation in the dividend payout ratio; the remaining 83.4% is explained by other factors not included in the research model.
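As a sketch, the F test, partial t tests, R Square, and collinearity statistics reported above can all be reproduced from one OLS fit (the input file and column names below are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical panel of firm-year observations, 2013-2018.
df = pd.read_csv("idx_firms_2013_2018.csv")   # assumed columns as below

X = sm.add_constant(df[["ROA", "DER", "CR", "SIZE", "GROWTH"]])
model = sm.OLS(df["DPR"], X).fit()

# Simultaneous (F) test, partial (t) tests, and R Square.
print(f"F = {model.fvalue:.3f}, p(F) = {model.f_pvalue:.3f}")
print(model.pvalues.round(3))         # partial significance per variable
print(f"R^2 = {model.rsquared:.3f}")  # share of DPR variation explained

# Collinearity check corresponding to the collinearity statistics table.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vif.round(2))                   # VIF < 10 suggests no multicollinearity
```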
IV. Results and Discussion

1. The effect of profitability on the dividend payout ratio. The results of this study indicate that profitability, proxied by return on assets, has a positive but insignificant effect on the dividend payout ratio in the Indonesian capital market for the 2013-2018 period. The results are supported by previous research, namely Deitiana (2009) and Permana & Hidayati (2016), but contrary to the results of Ginting (2018), Rafique (2012), Suroto (2015), and Mehta (2012), which state that profitability affects the dividend payout ratio. This shows that companies with small profitability are also able to pay larger dividends than companies with large profitability, so the level of profitability does not guarantee that the company will pay large dividends. It can be said that the size of profitability does not affect the ups and downs of dividend distribution.

2. The effect of leverage on the dividend payout ratio. The results of this study indicate that leverage, proxied by the debt to equity ratio, has a positive but insignificant effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018. The results are supported by previous research by Ma'rufatin & Purwohandoko (2018), Ginting (2018), and Mehta (2012), but contrary to the results of Suroto (2015), Permana & Hidayati (2016), and Fitriana & Suzan (2018), which state that leverage affects the dividend payout ratio. This proves that a high amount of debt does not prevent the company from distributing dividends, because the company also pays attention to the interests of the owners of capital.

3. The effect of liquidity on the dividend payout ratio. The results of this study indicate that liquidity, proxied by the current ratio, has a negative but insignificant effect on the dividend payout ratio in the Indonesian capital market for the 2013-2018 period. The results are supported by research by Ginting (2018), Deitiana (2009), and Mehta (2012), but contrary to the results of Ingrit et al. (2017), Rehman & Takumi (2012), and Ahmad & Wardani (2014), which state that liquidity affects the dividend payout ratio. This proves that companies with good liquidity do not necessarily make better dividend payments.

4. The effect of company size on the dividend payout ratio. The results of this study indicate that company size, proxied by Size, has a negative but insignificant effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018. The results contradict the research conducted by Ma'rufatin & Purwohandoko (2018), Ahmad & Wardani (2014), Rafique (2012), Nurhayati (2013), Lestari (2018), and Suroto (2015), which state that there is an influence of company size on the dividend payout ratio. However, this research is supported by Sari (2015), who states that company size has no effect on the dividend payout ratio. The results indicate that the large amount of assets owned by large companies is not necessarily a guarantee of dividend payments to investors.

5. The effect of company growth on the dividend payout ratio. The results of this study indicate that company growth has a significant negative effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018.
These results are supported by John & Muthusamy (2010), Permana & Hidayati (2016), and Ressy & Chariri (2013), who state that there is an influence of company growth on the dividend payout ratio. However, this study contradicts the results of Ma'rufatin & Purwohandoko (2018), Samrotun (2015), and Sari (2015), who state that company growth has no effect on the dividend payout ratio. The results of this study prove that company growth affects the dividend payout ratio, because the faster the growth rate of a company, the greater its future need for funds to finance that growth.

V. Conclusion

Simultaneously, profitability, leverage, liquidity, company size and growth have no significant effect on the dividend payout ratio in the Indonesian capital market for the period 2013-2018. Partially, profitability, leverage, liquidity, and company size do not affect the dividend payout ratio, while the company growth variable has a significant negative effect on the dividend payout ratio. The R Square value is 0.166, which means that the independent variables, consisting of profitability, leverage, liquidity, size and company growth, explain 16.6% of the variation in the dependent variable, the dividend payout ratio, while the remaining 83.4% is explained by other factors not included in the research model.

Suggestions

1. Further research is expected to increase the number of samples by extending the observation period, and can use other indicators, such as the dividend yield ratio.

2. It is hoped that future studies can add other variables or proxies that may affect the dividend payout ratio, for example investment opportunities, managerial ownership, capital structure, and others, so as to provide a broader picture of the factors that can affect the dividend payout ratio.
3,560.2
2021-02-22T00:00:00.000
[ "Business", "Economics" ]
Hyperthermia, Radiotherapy

Background: The combination of ionizing irradiation (RT) and brachytherapy, because of the steep dose fall-off of the source, is a relevant technical approach for local dose escalation. The risk of lymph node metastasis may be predicted by the histology of the tumor, by the depth of the lesion, and by the density of the capillary lymphatics at the primary site. Vascular invasion predicts a high rate of metastases. Lymphatic spread increases with recurrent tumors. Extracapsular spread (ECE) is associated with a 50% decrease in survival rates and requires higher doses. One of the greatest advantages of the afterloading technique using a stepping source is the possibility to plan the dose distribution after preparing the implant. Individual 3D implant planning by computer is generally superior to techniques using standard dose reference points. Implementation accuracy increases when the target volume and critical organs, as well as the applicator geometry and dose distribution, are visualized in one image. The usual implant techniques are the use of templates with needles or the plastic tube technique. The use of plastic tubes allows a fractionated treatment schedule with radiobiological advantages, as well as intraoperative implant preparation. In our experience, with adequate nursing, plastic tubes do not cause a higher infection rate than has been seen after surgery without implantation, even at the base of the skull with tubes lying in place for more than 3 weeks. Due to cross-sectional imaging-based 3D treatment planning for the target regions, we can deliver high doses to the target while at the same time keeping the dose well below tolerance for critical organs. Unfortunately, there is little large-scale experience in the literature with brachytherapy treatment of neck metastases, although many institutions apply the method as a local boost. First experiences with intraoperative brachytherapy applications are encouraging.

11.2 Prospects of application of hyperthermia for nonresectable cervical lymph node metastases of head and neck cancer

I.S. Romanov, S.I. Tkachev, E.G. Matyakin, V.S. Alferov, E.M. Zak, Dept. of Surgery, Blochin's Cancer Research Center, Moscow, Russia

The objective of our research was to study the results of treatment of patients with inoperable cervical lymph node metastases using hyperthermia. From 1980 to 1994 we observed 145 patients who had inoperable cervical lymph node metastases of intra-oral squamous cell cancer. In the first stage, 107 cases were treated by thermoradiotherapy (basic group) and 38 cases were treated by irradiation alone (control group). We used local microwave hyperthermia (460 MHz, 41-45 degrees C, each session 60 min). Two to four weeks after ending the first stage of treatment, an appreciable decrease of the metastases in the basic group made it possible to perform radical neck dissection in 23 (21%) patients. In the control group, the radical procedure was performed in 5 (13%) patients after ending the first stage of treatment. The five-year survival rate in the basic group after combined and independent treatment was 24%. The duration of life in the basic group after combined treatment was as follows: more than 1 year was reached by 82%, and more than 3 and 5 years by 42% of the patients. In the control group the five-year survival rate after independent and combined treatment was 9%.
From these results it follows that further study of the combination of irradiation and local microwave hyperthermia is necessary, with the objective of expanding the opportunities for subsequent surgical treatment of advanced cervical lymph node metastases in these patients.

11.4 Computer-assisted 3D-brachytherapy in metastases of the head and neck region

A.R. Gunkel, Dept. of ORL, University of Innsbruck, Austria

Interstitial brachytherapy is one therapeutic option in the spectrum of head and neck cancer therapy. In particular it is used in the treatment of tumor recurrences or inoperable metastases causing, for example, massive pain or pressure on neighboring structures. The efficiency of localized interstitial radiation therapy decisively depends on the accuracy and reproducibility with which the hollow radiation needle(s) can be placed into the tumor volume. Previously these needles were placed under the control of manual palpation or ultrasound only, which obviously led to rather poor targeting accuracy. Based on our substantial experience with intraoperative computer-assisted 3D navigation in paranasal and skull base surgery, we have adapted this technique to the demands of brachytherapy. By means of different 3D navigation systems, the development of a non-invasive head immobilizer, and a targeting device, we were able to place hollow radiation needle(s) with superior accuracy into metastases of the head and neck region. The principle of this new technique is presented and illustrated on exemplary cases.

11.5 Target volume in 3D-conformal radiotherapy of cervical lymph nodes of head and neck cancer: an individual approach according to tumor stage and tumor site

M. Marx, T. Feyerabend, E. Richter, Dept. of Radiation Oncology and Nuclear Medicine, University of Lubeck, Germany

Background and Purpose: Radiotherapy has an important role in the control of subclinical tumor disease of cervical lymph nodes and the primary management of lymph node metastases of head and neck cancer. To reduce acute reactions and late morbidity from radiotherapy, it is mandatory to define the target volume individually according to tumor stage and tumor site.

Methods: We reviewed the major radiooncologic literature and the recommendations on the extent of the target volume concerning the irradiation of the cervical lymph nodes. We analyzed the available data on the frequency of metastatic lymph node involvement depending on tumor stage and tumor site.

Results: The probability of metastatic involvement of cervical lymph nodes in patients with head and neck cancer differs widely depending on initial tumor stage and site. According to these results, an individual radiotherapeutic treatment of the cervical lymph nodes is possible, supporting a risk-adapted definition of the target volume. Clinical experiences with examples of conformal 3D-planned irradiation techniques are presented.

Conclusion: Taking into account the different metastatic spread patterns of head and neck cancers into lymph nodes, a highly individualised 3D-conformal radiotherapy is possible and allows a reduction in acute and late toxicity.

11.6 Conformal 3D-irradiation technique for head and neck cancer

U. Gotz, I.C. Kiricuta, Inst. of Oncology, St. Vincenz Hospital, Limburg, Germany

Purpose: We describe an irradiation technique for head and neck cancer routinely applied in our department, using the example of nasopharyngeal carcinoma, to improve target volume coverage.
Methods: The volume to be irradiated should include the primary tumor and the involved lymph nodes, defined as the gross tumor volume (GTV), as well as the adjacent tissues, the parapharyngeal lymphatics and the cervical lymphatics, defined as the clinical target volume (CTV). The planning target volume (PTV) consists of the CTV and a margin to account for variations in size, shape and position relative to the treatment beams. The PTV is thus a geometrical concept used to ensure that the CTV receives the prescribed dose. The PTV is divided into three well-defined levels. Level one is the nasopharynx, which extends from the body of the sphenoid to the upper border of the first vertebra. Level two extends from there to the epiglottis. The third level includes the low neck lymphatics. All levels are irradiated by two wedged beams from 60° and 300°. The nasopharynx is additionally irradiated by two lateral wedged fields and one ventral field. The second level is encompassed by offset-arc beams to ensure adequate conformal coverage of a concave target outline which includes the deep posterior nodes, the lateral pharyngeal nodes and the jugulodigastric nodes. The lower neck lymphatics are irradiated by an anterior offset field divided by a block for the spinal cord, in combination with the above-mentioned two wedged fields from 300° and 60°. This irradiation technique was verified by film dosimetric measurements on an Alderson phantom.

Results: This three-level technique guarantees conformal coverage of the PTV. A dose of 70 Gy can be delivered to the GTV in all three levels and a minimal dose of 56 Gy to the CTV. The optimisation of the technique was achieved by using dose-volume histograms for the target volumes and organs at risk. The dose to the spinal cord is lower than 60% of the delivered dose.

Conclusion: The one-isocenter three-level technique presented permits conformal coverage of the planning target volume. Delivery of a high dose to the gross tumor with minimal normal tissue reactions and low spinal cord irradiation is guaranteed. This technique is simple to simulate and to control by portal imaging.
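As an illustration of how such plan criteria can be checked from dose-volume data, the sketch below evaluates the constraints quoted above (70 Gy to the GTV, at least 56 Gy to the CTV, spinal cord below 60% of the delivered dose) on simulated per-voxel doses; the dose arrays are synthetic placeholders, not data from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-voxel doses (Gy), standing in for a treatment-planning export.
target_dose = rng.normal(70.0, 1.5, size=50_000)   # target (GTV/CTV) region
cord_dose = rng.uniform(0.0, 40.0, size=20_000)    # spinal cord

PRESCRIBED = 70.0

def volume_at_least(dose: np.ndarray, level_gy: float) -> float:
    """Fraction of the structure volume receiving >= level_gy (a DVH point)."""
    return float(np.mean(dose >= level_gy))

print(f"Target V70Gy = {volume_at_least(target_dose, 70.0):.1%}")
print(f"Minimum target dose >= 56 Gy: {target_dose.min() >= 56.0}")
print(f"Cord max = {cord_dose.max():.1f} Gy; "
      f"below 60% of prescription: {cord_dose.max() < 0.6 * PRESCRIBED}")
```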
1,950.2
1998-01-01T00:00:00.000
[ "Medicine", "Physics" ]
Facile Strategy of Improving Interfacial Strength of Silicone Resin Composites Through Self-Polymerized Polydopamine Followed via the Sol-Gel Growing of Silica Nanoparticles onto Carbon Fiber

In the present research, to enhance the interfacial wettability and adhesion between carbon fibers (CFs) and matrix resin, hydrophilic silica nanoparticles (SiO2) were utilized to graft the surface of CFs. Polydopamine (PDA), as a "bio-glue", was architecturally built between the SiO2 and the CFs to obtain strong adhesion and a homogeneous SiO2 distribution on the surface of the CFs. The facile modification strategy was designed as the self-polymerization of dopamine followed by the hydrolysis of tetraethoxysilane (TEOS) onto the carbon fibers. The surface microstructures and interfacial properties of the CFs, before and after modification, were systematically investigated. The tight and homogeneous coverage of SiO2 layers on the CF surface, with the assistance of a PDA layer formed by the self-polymerization of dopamine, significantly enhanced the fiber surface roughness and wettability, resulting in an obvious improvement of the mechanical interlocking and interfacial interactions between the CFs and the matrix resin. The interlaminar shear strength (ILSS) and the interfacial shear strength (IFSS) of the CF/PDA/SiO2 reinforced composites exhibited 57.28% and 41.84% enhancements compared with those of the untreated composites. In addition, the impact strength and the hydrothermal aging resistance of the resulting composites showed great improvements after modification. The possible reinforcing mechanisms of the modification process are discussed. This novel strategy of SiO2-modified CFs has interesting potential for interfacial improvements in advanced polymer composites.

Introduction

Carbon fiber (CF) reinforced polymer composites, with their high strength, light weight, and environmental stability, have been widely used as structural materials in the aerospace, automotive, and defense industries [1][2][3][4][5]. However, the lack of polar groups and the smooth graphitic fiber surface result in a poor-quality interface between CFs and matrix resin [6,7], limiting wide-ranging applications of the composites. The mechanical properties of composites mainly depend on the interface between the fibers and the matrix [8,9], and interfacial properties are strongly dependent on time, temperature, and fiber orientation. Zhandarov et al. [10] reported a valuable strategy for estimating local fiber/matrix interfacial strength parameters from micro-bond test data. Almeida Jr. et al. [11] studied the effects of fiber orientation on the interlaminar and in-plane shear properties of glass fiber/epoxy composites using four different shear test methods; they also investigated the interfacial and creep characteristics of carbon fiber-reinforced epoxy laminates with different fiber orientations [12]. There are several mechanisms of fiber-matrix bonding, such as chemical bonding, mechanical interlocking, and physical adsorption, among others.

Preparation of CF/MPSR Composites

MPSR composites reinforced by untreated and modified CFs were prepared using the compression molding method according to our former work [4]. First, a metal frame wound with the CFs was immersed in the MPSR solution to make the resin fully saturate the CFs, and thus the unidirectional fiber prepreg was obtained. Next, the prepreg was placed into the given mold and cured using a hot press machine under the controlled conditions described in a previous paper [16].
The fiber contents of the composite laminates were in the range of 60% to 70%, and laminate dimensions of about 2 mm × 20 mm × 6 mm were used for property testing.

Characterization Techniques

X-ray photoelectron spectroscopy (XPS, ESCALAB 220i-XL, Thermo VG Scientific, UK) was performed to study the surface chemical composition and functional groups of the CF surface with and without modification. The XPS measurement was carried out using a monochromatic Al Kα source of 1486.6 eV at a base pressure of 2 × 10⁻⁹ mbar. The XPSPeak version 4.1 program was used for data analysis. Scanning electron microscopy (SEM, Quanta 200FEG, Hitachi Instrument Inc., Japan) was utilized to observe the surface morphologies of the CFs before and after grafting and the fractured microstructures of the different composites. All fiber and composite samples were sputtered with gold before SEM analysis to increase sample conductivity and obtain stable and clear images.

Schematic of binding the SiO2 coating onto the modified fiber surface via self-polymerized polydopamine.

A dynamic contact angle meter (DCAT21, Data Physics Instruments, Stuttgart, Germany) was used to examine dynamic contact angles for evaluating fiber wettability and surface energies. The contact angle was recorded by immersing the CFs into two testing liquids (deionized water and diiodomethane), followed by the calculation of the fiber surface free energy and its dispersive and polar components using the obtained contact angles and the known surface energies of the testing liquids. The ILSS values of the CF/MPSR composites were tested on a universal testing machine (WD-1, Changchun, China) using a three-point short-beam bending mode. Single filament pull-out tests (FA620, Tokyo, Japan) were used to examine the interfacial adhesion between the CFs and MPSR by pulling a single fiber out of cured resin droplets. A carbon fiber monofilament was fixed on a metal holder with adhesive tape, and microdroplets were prepared by depositing small resin droplets with a pin, followed by the controlled curing procedure.
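The surface free energy decomposition reported later in Table 1 can be reproduced from the measured contact angles with the two-liquid Owens-Wendt method. The paper does not state the exact model, so this is a sketch assuming Owens-Wendt with commonly used liquid constants (water: 72.8/21.8/51.0 mN·m⁻¹ total/dispersive/polar; diiodomethane: 50.8/50.8/0 mN·m⁻¹), which does reproduce the untreated-CF values quoted below (35.86, 29.20, and 6.66 mN·m⁻¹):

```python
import numpy as np

# Owens-Wendt: gamma_L * (1 + cos(theta)) = 2*sqrt(gd_S*gd_L) + 2*sqrt(gp_S*gp_L)
# Liquid constants (mN/m): total, dispersive, polar.
WATER = (72.8, 21.8, 51.0)
DIIODOMETHANE = (50.8, 50.8, 0.0)

def surface_energy(theta_water_deg, theta_dim_deg):
    """Return (total, dispersive, polar) fiber surface energy in mN/m."""
    # Diiodomethane is almost purely dispersive, so it yields gd_S directly.
    gL, gdL, _ = DIIODOMETHANE
    lhs = gL * (1 + np.cos(np.radians(theta_dim_deg)))
    gd_s = (lhs / (2 * np.sqrt(gdL))) ** 2
    # The water equation then yields the polar component gp_S.
    gL, gdL, gpL = WATER
    lhs = gL * (1 + np.cos(np.radians(theta_water_deg)))
    gp_s = ((lhs - 2 * np.sqrt(gd_s * gdL)) / (2 * np.sqrt(gpL))) ** 2
    return gd_s + gp_s, gd_s, gp_s

# Untreated CF angles from Table 1: 78.50 (water), 58.91 (diiodomethane).
total, disp, polar = surface_energy(78.50, 58.91)
print(f"total={total:.2f}, dispersive={disp:.2f}, polar={polar:.2f}")
# -> approximately 35.86, 29.20, 6.66 mN/m, matching Table 1
```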
The impact toughness of the different composites was studied via a drop-weight impact test system (9250HV, Instron, Boston, Massachusetts, USA). The impact span, drop weight of the wedge-shaped impactor, and velocity were 40 mm, 3 kg and 1 m s⁻¹, respectively. To investigate the effects of the surface modification on the hydrothermal aging resistance of the composites, the interfacial strength of composites immersed in boiling water for 48 h was characterized, and the changes in ILSS values were traced relative to the ILSS results without aging.

Surface Composition and Microstructure of CFs

The surface elements and groups of the different CFs were analyzed using XPS. Figure 2 shows the wide-scan and high-resolution XPS spectra of the untreated and modified CFs. According to the wide-scan results (Figure 2a), there are C1s and O1s peaks in the wide spectrum of untreated CF. After the introduction of the PDA coating, CF/PDA displays a new N1s peak, indicating the successful deposition of the PDA layer onto the untreated CF surface. After the hydrolysis of TEOS on the CF/PDA surface, the appearance of the Si2p peak implies that SiO2 nanoparticles have been bound onto the surface of CF/PDA. The binding characterizations of the different CFs have also been determined by the high-resolution XPS C1s spectra, as shown in Figure 2b-d. For untreated CF (Figure 2b), there are five peaks, C=C (284.5 eV), C-C (285.2 eV), C-O (286.6 eV), C=O (287.8 eV), and COOH (288.9 eV), in the fiber XPS spectrum [34]. Compared to untreated CF, a C-N peak (285.7 eV) appears in CF/PDA, and the contents of the C-O and C=O peaks increase obviously, owing to the PDA coating formed by the self-polymerization of DA. CF/PDA/SiO2 presents a decrease of the C-N content and an increase of the C-O content, attributed to the sol-gel growing of SiO2 nanoparticles onto the CFs.

The surface microstructures of the untreated and modified CFs are presented in Figure 3. As is known, the surface of untreated CF is typically smooth and neat [7]. After the self-polymerization of PDA, the surface of CF/PDA (Figure 3a,b) becomes rougher compared to that of untreated CF, due to the coverage of the PDA layer on the fiber surface. The formed PDA acts not only as an active platform for SiO2 grafting, but also as a shielding layer against hydrothermal aging. This surface morphology is analogous to that of fibers grafted with a PDA coating [33]. As for CF/PDA/SiO2 (Figure 3c,d), many nodule-like structures are observed on the fiber surface owing to the in situ hydrolysis of TEOS and the introduction of SiO2 nanoparticles. The PDA/SiO2 layer not only augments the surface roughness and improves the capacity for mechanical interlocking, but also creates many curing-reactive areas that introduce chemical bonding at the interface between the CFs and the matrix resin, with the aim of achieving outstanding interfacial properties.
Surface Wettability Analysis of CFs

A high fiber surface energy is critical for enhancing the specific activity and wettability with the matrix resin, and it results in a good quality of the fiber-matrix interface. Hence, dynamic contact angles and surface energies were measured to evaluate the relative wettability of the different CFs with the matrix resin. The dynamic contact angles and surface energies of the untreated and modified CFs are summarized in Table 1. Owing to the non-ideal graphitic basal planes on the surface of untreated CF, the contact angles of untreated CF in water (θwater) and in diiodomethane (θdiiodomethane) are 78.50° and 58.91°, respectively. It is well known that a smaller contact angle indicates a better wettability of the CFs. The surface energy of untreated CF is only 35.86 mN·m⁻¹, with a dispersion component of 29.20 mN·m⁻¹ and a polar component of 6.66 mN·m⁻¹. After the introduction of the PDA coating, θwater of CF/PDA decreases from 78.50° to 51.36°, whereas θdiiodomethane decreases from 58.91° to 49.72°; the surface energy of CF/PDA increases up to 54.17 mN·m⁻¹. This also confirms that the PDA layer containing hydrophilic groups has been successfully deposited onto the surface of the CFs. After SiO2 grafting, θwater and θdiiodomethane decrease sharply to 44.32° and 43.26°, the main contributors being the increased polar groups and surface roughness. Compared to those of untreated CF and CF/PDA, the surface energy of CF/PDA/SiO2 increases to 60.18 mN·m⁻¹, by 67.82% and 11.10%, respectively.
Interfacial Property Testing of Composites A good quality fiber-matrix interface is a key factor in increasing the integrated mechanical properties of the resultant composites. The ILSS and IFSS results of MPSR composites reinforced by the different CFs are presented in Figure 4. The ILSS and IFSS values of untreated CF composites are only 29.47 and 40.37 MPa, respectively. For CF/PDA composites, they reach 40.23 MPa, a 36.51% amplification in ILSS, and 49.87 MPa, a 23.53% improvement in IFSS, compared to those of untreated CF composites. This may be due to the improved wettability and interfacial interaction between the CFs and the matrix resin obtained by introducing many polar groups onto the fiber surface through PDA grafting. For CF/PDA/SiO2 composites, the recorded ILSS value is 46.35 MPa (57.28% amplification relative to untreated composites), whereas the recorded IFSS value is 57.26 MPa (42.19% amplification relative to untreated composites). Compared to CF/PDA composites, the PDA/SiO2 deposition combines more active groups with a larger surface roughness and exhibits a synergistic effect, leading to higher amplifications in the ILSS and IFSS evaluation of the resulting composites. In general, the reinforcing mechanisms of the interfacial strength may be attributed to the introduced mechanical interlocking, the increased surface wettability, as well as the chemical bonding formed at the fiber-matrix interface. After introduction of the PDA/SiO2 layers, the polar hydroxyl groups can enhance the fiber surface free energy and make the surface of the CFs easier to wet with the matrix resin, which maximizes the degree of molecular contact. Moreover, the polar groups, acting as curing agents, can react with the matrix resin to form chemical bonding at the interface. In addition, the PDA/SiO2 deposition layers on the fiber surface significantly enhance the fiber surface roughness, especially on the nano-scale, and thus increase the mechanical interlocking between the CFs and the matrix resin, improving the interfacial strength of the composites.
The above analysis is also confirmed by SEM images of the composite fracture surfaces after the ILSS test, as shown in Figure 5. For untreated CF composites (Figure 5a), many pulled-out fibers and holes are observed on the fracture surfaces, confirming the weak interfacial adhesion between untreated CF and MPSR. After the formation of the PDA coating (Figure 5b), many resin fragments emerge on the fracture surfaces of CF/PDA composites, indicating benign adhesion between the CFs and MPSR; however, a few holes, slight breakage of CFs and a few pulled-out CFs still exist for the PDA-modified composites. As for CF/PDA/SiO2 composites (Figure 5c), with the lack of interface debonding, fracture steps, and pulled-out fibers, a favorable and desired fracture surface is observed owing to the formation of a good PDA/SiO2 interface, which increases the fiber-matrix interface combination and realizes effective transfer of the applied load through the PDA/SiO2 modification.
Figure 6 illustrates the possible mechanism of binding the SiO2 layer onto the PDA pre-modified CF surface and the interfacial reaction of CF/PDA/SiO2 composites. When CF/PDA is added into the TEOS solution, the catechol groups of the PDA layer attract the SiO2 hydrolyzed in situ in the solution, and good coordination bonds can be formed between the SiO2 nanoparticles and CF/PDA. Many key factors, such as chemical bonding, mechanical interlocking, and interfacial wettability, affect the interfacial adhesion between CFs and the matrix. Generally, the chemical bonding formed at the interface contributes the largest share of the interface properties of composites. The hydroxyl groups of CF/PDA/SiO2 have high reaction activity with MPSR, creating chemical bonding at the interface of the composites. The homogeneous distribution and tight binding of the SiO2 coating on the fiber surface, together with the possible interfacial reaction, may be crucial factors for the improved interface. Impact Property Testing of Composites The impact property testing results of the different CF-reinforced MPSR composites are shown in Figure 7. Untreated CF composites have a low impact strength of 58.46 kJ/m2. After introduction of the PDA coating, the impact strength of CF/PDA composites increases obviously, with an impact toughness of 69.74 kJ/m2 (19.30% amplification relative to untreated composites). For CF/PDA/SiO2 composites, the impact strength of the resulting composites increases sharply to 80.2 kJ/m2, up by 37.19% compared with untreated composites and 15.00% compared with CF/PDA composites. Such an improvement in impact toughness can be related to the formation of the PDA/SiO2 stratified interface, which serves as a shielding layer that prevents crack tips from directly contacting the surface of the CFs and causes the crack path to deviate away from the fiber surface into the interface region of the composites. Moreover, the pull-out or rupture of the tightly bound SiO2 nanoparticles at the interface can consume a great deal of additional energy, which contributes to the increase of the composite impact strength. A schematic illustration of the composite interface is shown in Figure 7a,b.
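The percentage amplifications quoted for the impact results follow directly from the measured values; a minimal check (Python) using only the numbers reported above:

```python
# Relative improvements of impact strength (kJ/m^2) reported in Figure 7.
untreated, cf_pda, cf_pda_sio2 = 58.46, 69.74, 80.2

def amplification(new, ref):
    return 100.0 * (new - ref) / ref

print(f"{amplification(cf_pda, untreated):.2f} %")       # 19.30 % vs. untreated
print(f"{amplification(cf_pda_sio2, untreated):.2f} %")  # 37.19 % vs. untreated
print(f"{amplification(cf_pda_sio2, cf_pda):.2f} %")     # 15.00 % vs. CF/PDA
```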
Hydrothermal Aging Resistance Testing of Composites Carbon fiber reinforced polymer composites, as high-performance engineering materials, are widely used in extreme weathering environments with high humidity and temperature. CF composites can absorb moisture, which severely affects their long-term durability and overall performance. Hence, higher requirements for hydrothermal aging resistance have been put forward, and the modification strategy in this work is also expected to improve the hydrothermal aging resistance of the composites.
ILSS values with and without hydrothermal aging of untreated CF, CF/PDA, and CF/PDA/SiO2 composites are presented in Figure 8. The ILSS value of untreated CF composites decreases sharply from 29.47 MPa without hydrothermal aging to 20.52 MPa after aging, an ILSS retention ratio of 69.63%. In fact, CFs virtually absorb no moisture, and absorption is largely matrix-dominated. Water uptake is mainly affected by the hydrophilic character of the matrix resin and fiber reinforcement, the interfacial adhesion between the reinforcement and the matrix resin, and microcracks as well as voids [35]. Moisture permeation is dominated by diffusion, capillarity, and/or transport through microcracks. Because of the weak interface between untreated CF and MPSR, defects, voids and microcracks are easily formed under the hydrothermal aging condition, and many water molecules penetrate into the interface of the composites with the aid of these microcracks or voids, leading to interfacial debonding and matrix cracking. Hence, the aging treatment deteriorates the matrix resin and the fiber-matrix interfaces, reducing the overall interfacial strength of the resulting composites. After 48 h of hydrothermal aging, the ILSS retention ratio of CF/PDA composites is 76.26%, which declines much more slowly compared to that of untreated CF composites. The ILSS value of CF/PDA/SiO2 composites after aging is 42.67 MPa, a slight decrease of only 7.94%. The tested samples failed via delaminations at the specimen mid-plane. Moisture penetration along the carbon fiber/matrix resin surface results in interfacial debonding and matrix plasticization. The good compaction of the hierarchical PDA/SiO2 layers, used as buffer layers, helps to inhibit the formation of microcracks and voids at the interface, and thus effectively prevents water molecules from penetrating and damaging the interface and matrix resin. Additionally, destroying the introduced Si-O-Si bonds requires more energy and a longer period of time. Hence, CF/PDA/SiO2 composites show the best hydrothermal aging resistance among the three types of fibers. The aging treatment has a strong effect on matrix-dominated properties due to the degradation of the matrix resin and interface, inevitably resulting in a subsequent decrease in the interfacial strength of the composites. The ILSS retention ratio of CF/PDA/SiO2 composites is relatively high compared to that of the other fibers' composites, owing to the introduced hierarchical PDA/SiO2 layers.
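For reference, the retention ratios quoted above follow directly from the measured ILSS values; a minimal check (Python) using only numbers reported in this section:

```python
# ILSS retention after 48 h of hydrothermal aging (values from Figures 4 and 8).
def retention(aged, unaged):
    return 100.0 * aged / unaged

print(f"untreated CF:  {retention(20.52, 29.47):.2f} %")  # 69.63 %
print(f"CF/PDA/SiO2:   {retention(42.67, 46.35):.2f} %")  # 92.06 % (a 7.94 % drop)
```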
Conclusions In summary, a facile and effective strategy to modify CFs by self-polymerized polydopamine and subsequent hydrolysis of tetraethoxysilane has been reported to change the interfacial microstructure and properties of MPSR composites. Characterization results indicated that SiO2 was successfully deposited onto the pre-modified CF surface. The hierarchical PDA/SiO2 layer led to increased surface roughness for mechanical interlocking and increased surface polarity for interfacial wettability and reaction, to a greater degree for CF/PDA/SiO2 than for CF/PDA and untreated CF. As a result, CF/PDA/SiO2 composites showed the highest enhancement of interfacial strength and impact toughness, affording an ILSS value of 46.35 MPa, an IFSS value of 57.26 MPa, and an impact toughness of 80.2 kJ/m2. In addition, the formation of the PDA/SiO2 interface and the introduction of Si-O-Si bonds could effectively protect the interface from the penetration of water molecules, leading to the best hydrothermal aging resistance among the three studied fibers. Author Contributions: Z.Y.W. and W.X.Y. performed the experiments and the data analyses and wrote the manuscript. W.G.S. contributed to the conception of the study and revised the manuscript. All authors reviewed the manuscript.
6,911.8
2019-10-01T00:00:00.000
[ "Materials Science", "Engineering" ]
On Evaluating Embedding Models for Knowledge Base Completion Knowledge graph embedding models have recently received significant attention in the literature. These models learn latent semantic representations for the entities and relations in a given knowledge base; the representations can be used to infer missing knowledge. In this paper, we study the question of how well recent embedding models perform for the task of knowledge base completion, i.e., the task of inferring new facts from an incomplete knowledge base. We argue that the entity ranking protocol, which is currently used to evaluate knowledge graph embedding models, is not suitable to answer this question since only a subset of the model predictions are evaluated. We propose an alternative entity-pair ranking protocol that considers all model predictions as a whole and is thus more suitable to the task. We conducted an experimental study on standard datasets and found that the performance of popular embeddings models was unsatisfactory under the new protocol, even on datasets that are generally considered to be too easy. Moreover, we found that a simple rule-based model often provided superior performance. Our findings suggest that there is a need for more research into embedding models as well as their training strategies for the task of knowledge base completion. Introduction A knowledge base (KB) is a collection of relational facts, often represented as (subject, relation, object)-triples. KBs provide rich information for NLP tasks such as question answering (Abujabal et al., 2017) or entity linking (Shen et al., 2015). Since knowledge bases are inherently incomplete (West et al., 2014), there has been considerable interest into methods that infer missing knowledge. In particular, a large number of knowledge graph embedding (KGE) models have been pro-posed in the recent literature (Nickel et al., 2016a). These models embed the entities and relations of a given knowledge base into a low-dimensional latent space such that the structure of the knowledge base is captured. The embeddings are subsequently used to assess whether unobserved triples constitute missing facts or are likely to be false. To evaluate the performance of a KGE model, the most commonly adopted protocol is the entity ranking (ER) protocol. 1 The protocol takes as input a set of previously unobserved test triples, such as (Einstein, bornIn, Ulm), and uses the embedding model to rank all possible answers to the questions (?, bornIn, Ulm) and (Einstein, bornIn, ?). Model performance is then assessed based on the rank of the answer present in the test triple (Einstein and Ulm, resp.). Since each question is constructed from a test triple, the protocol ensures that questions are meaningful and always have a correct answer. Throughout this paper, we refer to the task of answering such questions as question answering (QA). The ER protocol is, in principle, well-suited to evaluate performance of KGE models for QA, although concerns about the benchmark datasets (Toutanova and Chen, 2015), the considered models (Kadlec et al., 2017) and the evaluation (Joulin et al., 2017) have been raised. In this paper, we aim to study the performance of popular embedding models for the task of knowledge base completion (KBC): given a relation of a knowledge base (bornIn), infer true missing facts ((Einstein, bornIn, Ulm)). This task is different from QA (as defined above) since no information about potential missing triples is provided upfront. 
We argue that the ER protocol is not well-suited to assess model performance for KBC. To see this, observe that models that assign high confidence scores to incorrect triples such as (Ulm, bornIn, Einstein) are not penalized by the ER protocol because the corresponding questions (e.g., (Ulm, bornIn, ?)) are never asked. Thus a model that performs well on ER may still not perform well for KBC. In fact, we argue here that some commonly used KGE models are inherently not well-suited to KBC. We propose a simple entity-pair ranking (PR) protocol, which is more suitable to assess model performance for KBC. Given a relation such as bornIn, PR uses the KGE model to rank all possible answers, i.e., all entity pairs, to the question (?, bornIn, ?), and subsequently assesses model performance based on the rank of the test triples for relation bornIn in the answer. The protocol ensures that a model's performance is negatively affected if the model assigns high scores to false triples such as (Ulm, bornIn, Einstein). We conducted an experimental study on commonly used benchmark datasets under the ER and the PR protocols. We found that the performance of popular embedding models was often good under the ER but unsatisfactory under the PR protocol, even on "simple" datasets that are generally considered to be too easy. Moreover, we found that a simple rule-based model often provided superior performance for PR. Our findings suggest that there is a need for more research into embedding models as well as their training strategies for the task of knowledge base completion. Preliminaries Given a set of entities E and a set of relations R, a knowledge base K ⊆ E × R × E is a set of triples (i, k, j), where i, j ∈ E and k ∈ R. We refer to i, k and j as the subject, relation, and object of the triple, respectively. Embedding models. A KGE model associates an embedding e_i ∈ R^{d_e} and r_k ∈ R^{d_r} with each entity i and relation k, respectively. We refer to d_e, d_r ∈ N+ as the sizes of the embeddings. Each KGE model uses a scoring function s : E × R × E → R to associate a score s(i, k, j) with each triple (i, k, j) ∈ E × R × E. The scores induce a ranking: triples with high scores are considered more likely to be true than triples with low scores. Roughly speaking, the models try to find embeddings that capture the structure of the entire knowledge graph well. In this work, we consider a popular family of embedding models called bilinear models. Bilinear KGE models. Bilinear models use the relation embedding r_k ∈ R^{d_r} to construct a mixing matrix R_k ∈ R^{d_e × d_e}, and they use the scoring function s(i, k, j) = e_i^T R_k e_j. The models differ mainly in how R_k is constructed. Unless stated otherwise, the models use the same embedding sizes for entities and relations (i.e., d_r = d_e). RESCAL (Nickel et al., 2011) is the most general bilinear model: it sets d_r = d_e^2 and stores in r_k the values of each entry of R_k. Analogy (Liu et al., 2017) constrains R_k ∈ R^{d_e × d_e} to a block-diagonal matrix in which each block is either (i) a real scalar or (ii) a 2 × 2 matrix of the form [[x, −y], [y, x]] with x, y ∈ R. DistMult (Carroll and Chang, 1970; Yang et al., 2014) is a symmetric factorization model with R_k = diag(r_k) or, equivalently, considers only case (i) of Analogy. ComplEx (Trouillon et al., 2016) and HolE (Nickel et al., 2016b) are equivalent to a model that restricts R_k to case (ii).
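As an illustration of how these bilinear scoring functions differ only in the structure imposed on the mixing matrix, the following sketch (Python/NumPy) scores a triple under RESCAL, DistMult, and a ComplEx/Analogy-style block-diagonal parameterization; the embedding size and the random parameters are arbitrary placeholders, not values from the paper.

```python
# Sketch: bilinear scoring s(i,k,j) = e_i^T R_k e_j under different constraints on R_k.
import numpy as np

rng = np.random.default_rng(0)
d_e = 4                                    # illustrative embedding size
e_i, e_j = rng.normal(size=d_e), rng.normal(size=d_e)

def score(R_k):
    return float(e_i @ R_k @ e_j)

# RESCAL: unconstrained mixing matrix (r_k holds all d_e^2 entries).
R_rescal = rng.normal(size=(d_e, d_e))

# DistMult: R_k = diag(r_k), which makes the score symmetric in i and j.
R_distmult = np.diag(rng.normal(size=d_e))

# ComplEx / Analogy case (ii): block-diagonal with 2x2 blocks [[x, -y], [y, x]].
def block_diagonal(xy_pairs):
    R = np.zeros((2 * len(xy_pairs), 2 * len(xy_pairs)))
    for b, (x, y) in enumerate(xy_pairs):
        R[2 * b:2 * b + 2, 2 * b:2 * b + 2] = np.array([[x, -y], [y, x]])
    return R

R_complex = block_diagonal([(0.3, -0.8), (1.1, 0.4)])   # d_e = 4 -> two blocks

print(score(R_rescal), score(R_distmult), score(R_complex))
# DistMult symmetry check: swapping subject and object leaves the score unchanged.
assert np.isclose(e_i @ R_distmult @ e_j, e_j @ R_distmult @ e_i)
```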
TransE (Bordes et al., 2013) is a translation-based model with scoring function s(i, k, j) = −‖e_i + r_k − e_j‖_2 (or the ‖·‖_1 norm); it can also be written in bilinear form. Rule learning. Rule learning methods derive logical rules that encode dependencies found in the KBs (Galárraga et al., 2013). We consider a simple rule-based model called RuleN (Meilicke et al., 2018) as a baseline. The model learns (weighted) implication rules of the form r(i, j) ← r_1(i, z_1) ∧ · · · ∧ r_n(z_n, j), where the r_i are relations and i, j, and the z_i are variables quantified over entities. To perform KBC, rule-based models query the KB for instances of the body of each rule and interpret the corresponding head as a (weighted) predicted fact. Evaluation Protocols We first review two widely used evaluation protocols for QA. We then argue that these protocols are not well-suited for assessing KBC performance, because they focus on a small subset of all possible facts for a given relation. We then introduce the entity-pair ranking (PR) protocol and discuss its advantages and potential shortcomings. Current Evaluation Protocols The triple classification (TC) or the entity ranking (ER) protocols are commonly used to assess KGE model performance, where ER is arguably the most widely adopted protocol. We assume throughout that only true but no false triples are available (as is commonly the case), and that the available true triples are divided into training, validation, and test triples. Triple classification (TC) The goal of triple classification is to test the model's ability to discriminate between true and false triples (Socher et al., 2013). Since only true triples are available in practice, pseudo-negative triples are generated by randomly replacing either the subject or the object of each test triple by a random entity (that appears as a subject or object in the considered relation). All triples are then classified as positive or negative according to the KGE scores. In particular, triple (i, k, j) is classified as positive if its score s(i, k, j) exceeds a relation-specific decision threshold τ_k (learned on validation data using the same procedure). Model performance is assessed by classification accuracy. Entity ranking (ER) ER assesses model performance by testing its ability to perform QA (as defined before). In particular, for each test triple t = (i, k, j), two questions q_s = (?, k, j) and q_o = (i, k, ?) are generated. For question q_s, all entities i′ ∈ E are ranked based on the score s(i′, k, j). To avoid misleading results, entities i′ ≠ i that correspond to observed triples in the dataset (i.e., (i′, k, j) occurs in the training/validation/test triples) are discarded to obtain a filtered ranking. The same process is applied for question q_o. Model performance is evaluated based on the recorded positions of the test triples in the filtered ranking. Models that tend to rank test triples (known to be true) higher than unknown triples (assumed to be false) are thus considered superior. Usually, the micro-average of filtered Hits@K (i.e., the proportion of test triples ranking in the top-K) and the filtered MRR (i.e., the mean reciprocal rank of the test triples) are used to assess model performance.
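A minimal sketch of the filtered ER evaluation described above (Python); the scoring function is an arbitrary stand-in for a trained KGE model, and the data structures are illustrative assumptions, not the authors' implementation.

```python
# Sketch: filtered entity-ranking (ER) evaluation -- micro-averaged MRR and Hits@K.
# `score(i, k, j)` stands in for a trained KGE model's scoring function.
# `known` is assumed to contain all training, validation, and test triples.

def evaluate_er(test, known, entities, score, K=10):
    reciprocal_ranks, hits = [], []
    for (i, k, j) in test:
        for mode in ("subject", "object"):
            def candidate(e):
                return (e, k, j) if mode == "subject" else (i, k, e)
            target = i if mode == "subject" else j
            # Filtered ranking: drop candidates that form other observed triples.
            cands = [e for e in entities
                     if e == target or candidate(e) not in known]
            ranked = sorted(cands, key=lambda e: score(*candidate(e)), reverse=True)
            rank = ranked.index(target) + 1
            reciprocal_ranks.append(1.0 / rank)
            hits.append(rank <= K)
    mrr = sum(reciprocal_ranks) / len(reciprocal_ranks)
    hits_at_k = sum(hits) / len(hits)
    return mrr, hits_at_k
```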
It has been found that most models achieve a TC accuracy of at least 93% on a benchmark dataset. This is because each test triple is compared against a single negative triple, and due to the high number of possible negative triples, it is unlikely that the chosen triple has a high predicted score, rendering most classification tasks "easy". Consequently, the accuracy of triple classification overestimates model performance. This protocol is less adopted in recent work. Discussion We argue that ER is appropriate to evaluate QA performance, but may overestimate model performance for KBC. Since ER generates questions from true test triples, it only asks questions that are known to have a correct answer. The question itself thus provides useful information. This perfectly matches QA, but it does not match KBC, where such information is not available. To better illustrate why ER can lead to a misleading assessment of a model's KBC performance, consider the DistMult model and the asymmetric relation nominatedFor. As described in Sec. 2, DistMult models all relations as symmetric in that s(i, k, j) = s(j, k, i). Now consider triple t = (H. Simon, nominatedFor, Nobel Prize), and let us suppose that the model correctly assigns t a high score s(t). Then the inverse triple t′ = (Nobel Prize, nominatedFor, H. Simon) will also obtain a high score since s(t′) = s(t). Thus the score produced by DistMult does not discriminate between the true triple t and the false triple t′. In ER, however, questions about t′ are never asked; there is no test triple for this relation containing either Nobel Prize as subject or H. Simon as object. The symmetry of DistMult's prediction thus barely affects its performance under the ER protocol. For another example, consider TransE and the relation k = marriedTo, which is symmetric but not reflexive. One can show that for all (i, k, j), the TransE scores satisfy s(i, k, i) = −‖r_k‖ ≥ (s(i, k, j) + s(j, k, i)) / 2. For symmetric relations, TransE should aim to assign high scores to both (i, k, j) and (j, k, i). To do so, TransE has the tendency to push the relation embedding r_k towards 0 as well as e_i and e_j towards each other. But when r_k ≈ 0, then s(i, k, i) is high for all i, so that the relation is treated as if it were reflexive. Again, in ER, this property only slightly influences the results: there is only one "reflexive" tuple in each filtered entity list, so that the correct answer i for question (?, k, j) ranks at most one position lower. More expressive models such as RESCAL or ComplEx do not have such inherent limitations. Nevertheless, our experimental study shows that these models (at least in the way they are currently trained) also tend to assign high scores to false triples. Entity-Pair Ranking Protocol We propose a simple alternative protocol called entity-pair ranking (PR). The protocol is more suitable to assess a model's KBC performance (although it is not without flaws either; see below). PR proceeds as follows: for each relation k, we use the KGE model to rank all triples for the specified relation k, i.e., to rank all answers to the question (?, k, ?). As in ER, we filter out all triples that occur in the training and validation data to obtain a filtered ranking, i.e., to only rank triples that were not used during model training. If a model tends to assign high scores to negative triples, its performance is likely to be negatively affected because it becomes harder for true triples to rank high.
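The PR protocol itself can be sketched in a few lines (Python); `score` is again a placeholder for the trained model, the normalizations follow one common reading of AP@K and of "fraction of test triples in the top-K", and the exhaustive scan over all entity pairs is written naively here (a real implementation would batch the scoring or use maximum inner-product search).

```python
# Sketch: entity-pair ranking (PR) for one relation -- filtered top-K, AP@K, Hits@K.
from itertools import product

def evaluate_pr_relation(k, test_k, train_valid, entities, score, K=100):
    # Rank every pair (i, j) for relation k that was not seen during training.
    cands = [(i, j) for i, j in product(entities, repeat=2)
             if (i, k, j) not in train_valid]
    topk = sorted(cands, key=lambda p: score(p[0], k, p[1]), reverse=True)[:K]

    hits, precisions = 0, []
    for pos, (i, j) in enumerate(topk, start=1):
        if (i, k, j) in test_k:
            hits += 1
            precisions.append(hits / pos)
    ap_at_k = sum(precisions) / min(K, len(test_k)) if test_k else 0.0   # AP@K convention
    hits_at_k = hits / len(test_k) if test_k else 0.0                    # fraction of T_k in top-K
    return ap_at_k, hits_at_k
```

The per-relation values returned here would then be combined with the relation weights described next to give the weighted MAP@K and Hits@K scores.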
Note that the number of candidate answers considered by PR is much larger than that considered by ER. Consider a relation k and let T_k be the set of test triples for relation k. Then ER considers 2·|T_k|·|E| candidates in total during evaluation, while PR considers |E|^2 candidates. Moreover, PR considers all test triples in T_k simultaneously instead of sequentially. For this reason, we cannot use the MRR metric commonly used in ER. Instead, we assess model performance using weighted MAP@K, i.e., the weighted mean average precision over the top-K filtered results, and weighted Hits@K, i.e., the weighted percentage of test triples in the top-K filtered results. We weight the influence of relation k proportionally to its number of test triples (capped at K), thereby closely following ER: MAP@K = Σ_k w_k · AP_k@K and Hits@K = Σ_k w_k · Hits_k@K, with w_k = min(|T_k|, K) / Σ_{k′} min(|T_{k′}|, K). Here AP_k@K is the average precision of the top-K list (w.r.t. test triples T_k) and Hits_k@K refers to the fraction of test triples in the top-K list. Note that K should be chosen much larger for PR than for ER, since it roughly corresponds to the number of triples we aim to predict per relation. The PR protocol is more suited to evaluate KBC performance because it considers all model predictions. The protocol also has some disadvantages, however. First, as with ER, the PR protocol may underestimate model performance due to unobserved true triples ranked high by the model. Since a larger number of candidates is considered, PR may be more sensitive to this problem than ER. We explore the effect of underestimation in our empirical study in Sec. 4.4. Another concern with PR is its potentially high computational cost. For current benchmark datasets, we found that the PR evaluation is feasible. Generally, one may argue that an embedding model is suitable for KBC only if it is feasible to determine high-scoring triples in a sufficiently efficient way. Since PR only requires the computation of the top-K predictions, performance can potentially be improved using techniques such as maximum inner-product search (Shrivastava and Li, 2014). Experimental Study We conducted an experimental study to assess the performance of various bilinear embedding models for KBC. (Some other KGE models do not support KBC directly due to their architecture; e.g., ConvE (Dettmers et al., 2018).) All datasets, experimental results, and source code are publicly available at http://www.uni-mannheim.de/dws/research/resources/kge-eval/. For all models, we performed evaluation under both the ER and PR protocols in order to assess their performance for the QA and KBC tasks, respectively. We found that many KGE models provided good ER but low PR performance. We also considered a simple rule-based system called RuleN (Meilicke et al., 2018), which provided good performance under the ER protocol, and found that RuleN performed better in both ER and PR. Our results imply that more research into KGE models for KBC is needed. We also investigated the extent to which PR underestimates model performance due to unobserved true triples. We found that underestimation is not the main reason for the low PR performance of many KGE models; in fact, many KGE models ranked clearly wrong tuples (e.g., with incorrect types) high. Experimental Setup Datasets. We used the four common KBC benchmark datasets: FB15K, WN18, FB-237, and WNRR. The latter two datasets were created because the former two were considered too easy for embedding models (based on ER). Key dataset statistics are summarized in Table 1. Negative sampling. We trained the embedding models using negative sampling to obtain pseudo-negative triples. We consider three sampling strategies in our experiments: Perturb 1: For each training triple t = (i, k, j), sample pseudo-negative triples by randomly replacing either i or j with a random entity (but such that the resulting triple is unobserved). This strategy matches ER, which is based on questions (?, k, j) and (i, k, ?). Perturb 1-R: For each training triple t = (i, k, j), sample pseudo-negative triples by randomly replacing either i, k or j. The generated negative samples are not compared against the training set (Liu et al., 2017). Perturb 2: For each training triple t = (i, k, j), obtain pseudo-negative triples by randomly sampling unobserved tuples for relation k. This method appears more suited to PR.
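The three sampling strategies can be summarized with the following sketch (Python); it is an illustrative rendering of the descriptions above, not the authors' implementation, and `observed` is assumed to be the set of all training triples.

```python
# Sketch: pseudo-negative sampling strategies Perturb 1, Perturb 1-R, and Perturb 2.
import random

def perturb1(triple, entities, observed):
    i, k, j = triple
    while True:
        if random.random() < 0.5:
            cand = (random.choice(entities), k, j)   # replace the subject
        else:
            cand = (i, k, random.choice(entities))   # replace the object
        if cand not in observed:                     # keep only unobserved triples
            return cand

def perturb1_r(triple, entities, relations):
    i, k, j = triple
    slot = random.randrange(3)                       # may also replace the relation;
    if slot == 0:                                    # no check against the training set
        return (random.choice(entities), k, j)
    if slot == 1:
        return (i, random.choice(relations), j)
    return (i, k, random.choice(entities))

def perturb2(triple, entities, observed):
    _, k, _ = triple
    while True:                                      # sample unobserved pairs for relation k
        cand = (random.choice(entities), k, random.choice(entities))
        if cand not in observed:
            return cand
```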
Training and implementation. We trained DistMult, ComplEx, Analogy and RESCAL with AdaGrad (Duchi et al., 2011) using a binary cross-entropy loss. We used a pair-wise ranking loss for TransE (as it always produces negative scores). All embedding models are implemented on top of the code of Liu et al. (2017) in C++ using OpenMP. For RuleN, we use the original implementation provided by the authors. The evaluation protocols were written in Python, with Bottleneck used for efficiently obtaining the top-K entries for PR. We found that PR (which took roughly 30-90 minutes) was about 3-4 times slower than ER. Hyperparameters. The best hyperparameters were selected based on MRR (for ER) and MAP@100 (for PR) on the validation data. For both protocols, we performed an exhaustive grid search over the following hyperparameter settings: d_e ∈ {100, 150, 200}, weight of l2-regularization λ ∈ {0.1, 0.01, 0.001}, learning rate η ∈ {0.01, 0.1}, negative sampling strategies Perturb 1, Perturb 2 and Perturb 1-R, and margin hyperparameter γ ∈ {0.5, 1, 2, 3, 4} for TransE. For each training triple, we sampled 6 pseudo-negative triples. To keep effort tractable, we only used the most frequent relations from each dataset for hyperparameter tuning (top-5, top-5, top-15, and top-30 most frequent relations for WN18, WNRR, FB-237 and FB15K, respectively). We trained each model for up to 500 epochs during grid search. In all cases, we evaluated model performance every 50 epochs and used the overall best-performing model. For RuleN, we used the best settings reported by the authors for ER (Meilicke et al., 2018). For PR, we learned path rules of length 2 using a sampling size of 500 for FB15K and FB-237. For WN18 and WNRR, we learned path rules of length 3 and a sampling size of 500. Performance Results with ER Table 2 summarizes the ER results. Embedding models perform competitively with respect to RuleN on all datasets, except for their MRR performance on FB15K. Notice that this generally holds even for the more restricted models (TransE and DistMult) on the more challenging datasets, which were created after criticizing FB15K and WN18 as too easy (Toutanova and Chen, 2015; Dettmers et al., 2018). In particular, although DistMult can only model symmetric relations, and although most relations in these datasets are asymmetric, DistMult has good ER performance. Likewise, TransE achieved great performance in Hits@10 on all datasets, including WN18, which contains a large number of symmetric relations that are not easily modeled by TransE. Performance Results with PR The evaluation results of PR with K = 100 are summarized in Table 3. Note that Tables 2 and 3 are not directly comparable: they measure different tasks. Also note that we use a different value of K, which in PR corresponds to the number of predicted facts per relation.
We discuss the effect of the choice of K later. For the embeddings, observe that with the exception of Analogy and ComplEx on WN18, the performance of all models is unsatisfactory on all datasets, especially when compared with RuleN on FB15K and WN18, which were previously considered to be too easy for embedding models. Specifically, DistMult's Hits@100 is slightly less than 10% on WN18, meaning that if we add the top 100 ranked triples to the KB, over 90% of what is added is likely false. Even when using Com-plEx, the best model on FB15K, we would potentially add more than 50% false triples. This implies that embedding models cannot capture simple rules successfully. The notable exceptions are ComplEx and Analogy on WN18, although both are still behind RuleN. TransE and DistMult did not achieve competitive results on WN18. In addition, DistMult did not achieve competitive results on FB15K and FB-237 and TransE did not achieve competitive results in WNRR. In general, Com-plEx and Analogy performed consistently better than other models across different datasets. When compared with the RuleN baseline, however, the performance of these models was often not satisfactory. This suggests that better KGE models and/or training strategies are needed for KBC. RuleN did not perform well on FB-237 and WNRR, likely because the way these datasets were constructed makes them intrinsically difficult for rule-based methods (Meilicke et al., 2018). This is reflected in both ER and PR results. To better understand the change in performance of TransE and DistMult, we investigated their predictions for the top-5 most frequent relations on WN18. Table 4 shows the number of test triples appearing in the top-100 for each relation (after filtering triples from the training and validation sets). The numbers in parentheses are discussed in Section 4.4. We found that DistMult worked well on the symmetric relation derivationally related form, where its symmetry assumption clearly helps. Here 93% of the training data consists of symmetric pairs (i.e., (i, k, j) and (j, k, i)), and 88% of the test triples have its symmetric counterpart in the training set. In contrast, TransE contained no test triples for derivationally related form in the top-100 list. We found that the norm of the embedding vector of this relation was 0.1, which was considerably smaller than for the other relations (avg. 1.4). This supports our argument that TransE tends to push symmetric relation embeddings to 0. Note that while hyponymy, hypernymy, member meronym and member holonym are semantically transitive, the dataset contains almost exclusively their transitive core, i.e., the dataset (both train and test) does not contain many of the transitive links of the relations. As a result, models that cannot handle transitivity well may still produce good results. This might explain why TransE performed better for these relations than for derivationally related form. DistMult did not perform well on Influence of Unobserved True Triples Since all datasets are based on incomplete knowledge bases, all evaluation protocols may systematically underestimate model performance. In particular, any true triple t that is neither in the training, nor validation, nor test data is treated as negative during ranking-based evaluations. A model which correctly ranks t high is thus penalized. PR might be particularly sensitive to this due to the large number of candidates considered. 
It is generally unclear how to design an automatic evaluation strategy that avoids this problem. Manual labeling can be used to address this, but it may sometimes be infeasible given the large number of relations, entities, and models for KBC. To explore such underestimation effect in PR, we decoded the unobserved triples in the top-100 predictions of the 5 most frequent relations of WN18. We then checked whether those triples are implied by the symmetry and transitivity properties of each relation. In Table 4, we give the resulting number of triples in parentheses (i.e., number of test triples + implied triples). We observed that underestimation indeed happened. TransE was mostly affected, but still did not lead to competi-tive results when compared to ComplEx and Analogy. RuleN achieves the best possible results in all 5 relations. These results suggest that (1) underestimation is indeed a concern, and (2) the results in PR can nevertheless give an indication of relative model performance. Type Filtering When background knowledge (BK) is available, embedding models only need to score triples consistent with the BK. We explored whether their performance can be improved by filtering out type-inconsistent triples from each model's predictions. Notice that this is inherently what rulebased approaches do, since all predicted candidates will be type-consistent. In particular, we investigated how model performance is affected when we filter out predictions that violate type constraints (domain and range of each relation). If a model's performance improves with such type filtering, it must have ranked tuples with incorrect types high in the first place. We can thus assess to what extent models capture entity types as well as the domain and range of the relations. We extracted from Freebase type definitions for entities and domain and range constraints for relations. We also added the domain (or range) of a relation k to the type set of each subject (or object) entity which appeared in k. We obtained types for all entities in both FB datasets, and domain/range specifications for roughly 93% of relations in FB15K and 97% of relations in FB-237. The remaining relations were evaluated as before. We report in Table 5 the Hits@100 and MAP@100 as well as their absolute improvement (in parentheses) w.r.t. Table 3. We also include the results of RuleN from Table 3, which are already type-consistent. The results show that all KGE models improve by type filtering; thus all models do predict triples with incorrect types. In particular, DistMult shows considerable improvement on both datasets. Indeed, about 90% of the relations in FB15K (about 85% for FB-237) have a different type for their domain and range. As DistMult treats all relations as symmetric, it introduces a wrong triple for each true triple into the top-K list on these relations; type filtering allows us to ignore these wrong tuples. This is also consistent with DistMult's improved performance under ER, where type constraints are implicitly used since only questions with correct types are considered. Interestingly, ComplEx and Analogy improved considerably on FB15K, suggesting that the best performing embedding models on this dataset are still making a considerable number of type-inconsistent predictions. On FB15K, the relative ranking of the models with type filtering is roughly equal to the one without type filtering. On the harder FB-237 dataset, all models now perform similarly. 
Notice that when compared with RuleN, embedding models are still behind on FB15K, but are no longer behind on FB-237. Conclusion We investigated whether current embedding models provide good results for knowledge base completion, i.e., the task of inferring new facts from an incomplete knowledge base. We argued that the commonly-used ER evaluation protocol is not suited to answer this question, and proposed the PR evaluation protocol as an alternative. We evaluated a number of popular KGE models under the ER and PR protocols and found that most KGE models obtained good results under the ER but not the PR protocol. Therefore, more research into embedding models and their training is needed to assess whether, when, and how KGE models can be exploited for knowledge base completion.
6,623.4
2018-10-17T00:00:00.000
[ "Computer Science" ]
Adsorption/Desorption Characteristics and Simultaneous Enrichment of Orientin, Isoorientin, Vitexin and Isovitexin from Hydrolyzed Oil Palm Leaf Extract Using Macroporous Resins: Oil palm leaves (OPL) containing flavonoid C-glycosides are abundantly generated as oil palm byproducts. The performances of three macroporous resins with different physical and chemical properties for the enrichment of isoorientin, orientin, vitexin, and isovitexin from acid-hydrolyzed OPL (OPLAH) extract were screened. The XAD7HP resin exhibited the best sorption capacities for the targeted flavonoid C-glycosides and was thus selected for further evaluation. Static adsorption using the XAD7HP resin under optimal conditions (extract adjusted to pH 5, shaken at 298 K for 24 h) gave adsorption kinetics that fit well with a pseudo-second-order kinetic model. The adsorption of isoorientin and orientin was well described by Langmuir isotherms, while vitexin and isovitexin fit well with the Freundlich isotherms. Dynamic sorption trials using the column-packed XAD7HP resin produced 55-60-fold enrichment of isovitexin and between 11- and 25-fold enrichment of isoorientin, vitexin, and orientin using aqueous ethanol. The total flavonoid C-glycoside-enriched fractions (enriched OPLAH) with isoorientin (247.28-284.18 µg/mg), orientin (104.88-136.19 µg/mg), vitexin (1197.61-1726.11 µg/mg), and isovitexin (13.03-14.61 µg/mg) showed excellent antioxidant free radical scavenging activities compared with their crude extracts, with IC50 values of 6.90-70.63 µg/mL and 44.58-200.00 µg/mL, respectively. Hence, this rapid and efficient procedure for the preliminary enrichment of flavonoid C-glycosides using macroporous resin may have practical value in OPL biomass waste utilization programs to produce high value-added products, particularly in the nutraceuticals, cosmeceuticals, pharmaceuticals, and fine chemicals industries. The extract, prehydrolyzed with acid and adjusted to a pH of 5, was shaken at 298 K for a period of 24 h for static adsorption. The adsorption process of the target flavonoids on the XAD7HP resin could be well described with the pseudo-second-order kinetic model. The equilibrium experimental data of the adsorption of isoorientin and orientin on the XAD7HP resin at 298 K were well fitted to the Langmuir isotherm model, while those of vitexin and isovitexin were well described by the Freundlich isotherm model. The enriched fractions recovered using the isocratic (with 95% EtOH) and gradient elution modes (with 40% EtOH) produced up to 60-fold flavonoid enrichment with excellent antioxidant free radical scavenging activities. The enriched OPLAH contained isoorientin (247.28-284.18 µg/mg), orientin (104.88-136.19 µg/mg), vitexin (1197.61-1726.11 µg/mg), and isovitexin (13.03-14.61 µg/mg), as compared to OPLAH with isoorientin (2.34 µg/mg), orientin (9.35 µg/mg), vitexin (84.11 µg/mg), and isovitexin (0.25 µg/mg). Additionally, the enriched OPLAH also showed excellent antioxidant free radical scavenging activities compared with OPLAH, with IC50 values of 6.90-70.63 µg/mL and 44.58-200.00 µg/mL, respectively. Strong hydrogen bonding may explain the efficient enrichment of the target flavonoid C-glycosides. The results indicated that the combination of acid treatment and MARs could selectively and effectively enrich flavonoid C-glycosides from OPL.
This study presents a simple, rapid, and efficient method for enriching the flavonoid C-glycoside content of oil palm leaf extract, a major type of agricultural waste which has been underutilized. This method has several potential applications. Introduction The oil palm (Elaeis guineensis Jacq.) tree was introduced into Malaysia in 1875, with the first oil palm tree plantation established at Tennamaran Estate in Kuala Selangor [1]. Fueled by full support from Malaysian government agricultural diversification initiatives, palm oil plantations expanded tremendously and now cover a large acreage of agricultural land [2]. Currently, Malaysia is the second-largest oil palm producer in the world after Indonesia [3]. In fact, the latest statistics show that Malaysia was approaching 20 million tons of crude palm oil production in 2020 [4]. However, in the wake of this massive cultivation, a huge amount of oil palm biomass is generated as agricultural waste. Apart from mesocarp fibers (MF), empty fruit bunches (EFB), and palm kernel shells (PKS) from downstream processing in oil palm mills, oil palm trunks (OPT), oil palm fronds (OPF), and oil palm leaves (OPL) are also generated in parallel, presenting a huge environmental problem if left unutilized [5]. Like many other species in the plant kingdom, the OPL byproduct is an excellent source of phytochemicals which could be used for various applications. OPL have in fact been reported to contain bioactive compounds that are responsible for various medicinal properties, such as treating kidney diseases, cancer, cardiovascular diseases, and wounds [6]. A previous study on OPL revealed the presence of both flavonoid O- and C-glycosides [7]. In general, flavonoid C-glycosides are not widely present in plants, and due to this, they have received less attention in comparison with their O-glycosyl counterparts. Nevertheless, several recent biological and pharmacological studies have shown that flavonoid C-glycosides also possess a wide spectrum of biological properties, which include anticancer, hepatoprotective, antioxidant, and antidiabetic properties [8]. Flavonoid C-glycosides differ from flavonoid O-glycosides in that they are more resistant to hydrolysis, since the aglycone is linked to the anomeric carbon of the sugar moiety via an acid-resistant C-C bond. Figure 1 shows the structures of the four flavonoid C-glycosides of OPL, which include orientin, isoorientin, vitexin, and isovitexin. These flavonoid C-glycosides are present in considerable amounts in comparison with other luteolin and apigenin derivatives in OPL [7,[9][10][11][12]. It is worth noting that the global demand for flavonoids, including flavonoid C-glycosides, is forecasted to reach USD 1.2 billion by 2024 [13]. Therefore, their presence in a widely available and abundant biomass material warrants further investigation into the development of efficient methods for preparative purification for downstream purposes and applications. Utilization of macroporous resins (MARs) in separating and purifying the flavonoid C-glycosides present in plant extracts has been practiced in recent years. It offers an alternative to conventional methods, which often start with solid-liquid extraction, followed by liquid-liquid extraction and eventually column chromatography [14].
These conventional approaches are not only time-consuming and inefficient, but they also require high consumption of solvents and energy [15,16]. The chemical nature of MARs allows them to adsorb selectively through hydrogen bonding and Van der Waals interactions with the benzene rings and hydroxyl groups present in the molecular structure of the targeted flavonoids [17]. The entrapment of these flavonoids on MARs is due to the similarity of their physical and chemical characteristics, such as the appropriate surface area, average pore diameter, and polarity of both the MARs and the targeted flavonoids [18][19][20]. The high flavonoid sorption capacities make MARs a useful and practical adsorbent to enrich and purify flavonoid C-glycosides from various plants such as Cajanus cajan (L.) Millsp. Previously, we reported the adsorption behavior of the total flavonoids of OPL extract on different macroporous resins [9]. As an extension of this study, we further examine the adsorption and desorption properties of flavonoid C-glycosides, specifically (1) orientin, (2) vitexin, (3) isoorientin, and (4) isovitexin, on a selection of MARs. The MAR with the best sorption properties for the target compounds was then used to develop a rapid and efficient method for the enrichment and purification of C-glycosyl flavonoids from acid-hydrolyzed OPL (OPLAH) extract. Factors affecting the sorption properties of the individual flavonoid C-glycosides were optimized, and their kinetics and isotherms were simultaneously evaluated. The method developed in this study presents an improved process for converting OPL biomass into fine chemicals at high purity for potential applications. Pretreatment of MARs The chemical and physical properties of the selected MARs (XAD7HP, DAX-8, and XAD4) are summarized in Table 1. All MARs were pretreated prior to use to remove residual monomers and porogenic agents, which could be trapped in the pores of the resins during manufacturing. The MARs were soaked in 95% EtOH at a 1:20 ratio and washed with deionized water. The resins were then immersed in 1 mol/L NaOH and washed several times with deionized water to remove the base. Subsequently, the resins were subjected to a second immersion in 1 mol/L HCl and then washed thoroughly with deionized water to remove the acid. For each stage of the pretreatment, the resins were allowed to soak for 24 h before washing.
The washed resins were then dried in a drying oven (model 100-800, Memmert, Schwabach, Germany) at 60 °C until reaching a constant weight. Preparation of Crude and Acid-Hydrolyzed Extracts Mature OPL were harvested from oil palm trees growing in the University Agricultural Park at Universiti Putra Malaysia (UPM). A voucher specimen (SK 3332/18) was deposited in the mini herbarium of the Institute of Bioscience (IBS) at UPM after the species was authenticated by an appointed botanist. The optimized procedures for preparing the crude and acid-hydrolyzed OPL extracts were described in our recent publication [10]. Briefly, the powdered OPL was mixed with aqueous MeOH (4:1 MeOH:water, v:v) and vortex-mixed for 0.5 min at 3000 rpm. The mixture was ultrasonicated at a frequency of 40 Hz for 30 min at 25 °C. Subsequently, the crude OPL extract was mixed with distilled water and 6 mol/L HCl in a ratio of 1:10:10 (w:v:v). The mixture was incubated for 45 min at 95 °C. At 25 °C, 40 mL of MeOH was added. After centrifuging at 4000 rpm for 15 min, the supernatant was separated, vacuum-evaporated to dryness, and then freeze-dried at 0.064 mbar and −50 °C using a Labconco® FreeZone Freeze Drier System (Kansas City, MO, USA) to yield the acid-hydrolyzed OPL extract (OPLAH). The gradient program employed was performed according to the previously reported method [9,12]. Briefly, the program started with 10% solvent B for 0.6 min, gradually increased to 11.3% until 1.5 min, was maintained isocratically until 5.5 min, and was slightly increased to 11.4% until 8.0 min and 11.8% until 8.2 min. Solvent B was further increased to 12% until 12.0 min and then decreased to 10% over 1.0 min and maintained until 25 min. The column temperature was maintained at 25 °C, and the UV detector was set to a wavelength of 340 nm. Peak identification was based on the retention time and comparison of UV spectra with the respective reference standards. For sample analysis, 5 mg/mL of each sample solution was prepared and filtered through a 0.22 µm membrane filter. A 2 µL injection volume was used for all sample analyses. The quantification method was developed and validated based on the following characteristics: specificity, linearity, limit of detection (LOD) and quantification (LOQ), accuracy, repeatability, intermediate precision, and robustness, according to the International Conference on Harmonization (ICH) guidelines [21]. The full information regarding method validation has been published recently [22]. Briefly, the developed method displayed good calibration curves with linearity (R² = 0.999) in the ranges of 16-500 µg/mL for isoorientin, 31-800 µg/mL for orientin, 47-1500 µg/mL for vitexin, and 16-500 µg/mL for isovitexin. In addition, the LODs for isoorientin, orientin, vitexin, and isovitexin were 17.99, 30.22, 80.63, and 17.69 µg/mL, respectively, while the LOQs for these compounds were 54.52, 91.58, 244.35, and 53.61 µg/mL, respectively. The recovery percentages were between 95% and 105% for all tested compounds, while for the inter- and intraday precisions, the relative standard deviation (RSD) values were below 5%. For robustness, small changes in the chromatographic conditions, such as the detection wavelength, column temperature, and sample stability, produced insignificant differences, as indicated by t-test results (p > 0.05). Preliminary Selection of a Macroporous Resin as an Effective Adsorbent The static adsorption capacities of the resins were first screened to select the best resin for flavonoid enrichment.
An accurately weighed amount (0.1 g) of each of the pretreated resins was transferred into 15 mL centrifuge tubes. A 5 mL aliquot of OPLAH was then added into the tubes. These centrifuge tubes were capped, placed horizontally, and taped tightly in an orbital shaker (Wisube WIG-10RL Precise Shaking Incubator, Wisd Laboratory Instruments, Wertheim, Germany). The mixture was shaken for 24 h at 298 K at an agitation speed of 150 rpm to reach adsorption equilibrium. The filtrates were then analyzed by UHPLC. To desorb the flavonoid C-glycosides from the resins, 5 mL of 95% EtOH was added into each tube, and the mixing was repeated under the same conditions, followed by filtering and analysis of the filtrates by UHPLC. Three individual experiments were performed. Selection of the optimal MAR for use in subsequent studies was made based on the adsorption and desorption capacities of each MAR. Optimization of Sorption Conditions Using Batch Adsorption Tests In the present study, the four main operating parameters of temperature, pH, equilibrium time point, and initial concentration were optimized. The optimum conditions for the adsorption of flavonoid C-glycosides from OPLAH were determined using batch adsorption tests, where 15 mL of OPLAH solution was mixed with the selected adsorbent (0.3 g) and subjected to continuous agitation using an orbital shaker at an agitation speed of 150 rpm. All experiments were carried out in triplicate. The optimal conditions were selected based on the quantification of orientin, isoorientin, vitexin, and isovitexin using a developed and validated UHPLC-UV/PDA method. Simultaneously, sorption behaviors such as the kinetics and isotherms were assessed. To select a suitable sorption temperature, the adsorption and desorption were performed at different oscillation temperatures (298 K, 308 K, and 318 K). The OPLAH solution was adjusted to a pH of 5, and the mixture was then agitated for 24 h. EtOH (95%) was used as the desorbing solvent. To optimize the pH of the OPLAH solution, three different pHs (5, 7, and 9) were adjusted with 1 mol/L HCl or 1 mol/L NaOH. Concurrently, the equilibrium time point was monitored by withdrawing an aliquot of supernatant at 0, 15, 30, 60, 120, 180, 240, 300, 360, 480, and 1440 min. The adsorption kinetics curves for the target flavonoid C-glycosides on the XAD7HP resin were constructed. The kinetic data were subjected to two common kinetic models, the pseudo-first-order [23] and pseudo-second-order [24] models, and one particle diffusion kinetic model [25]. To optimize the suitable initial concentration, different OPLAH concentrations with known amounts of the target flavonoid C-glycosides were prepared, whereby the concentrations of isoorientin, orientin, vitexin, and isovitexin were in the ranges of 1.62-45.12 µg/mL, 12.64-89.76 µg/mL, 66.76-863.22 µg/mL, and 1.00-12.06 µg/mL, respectively. The OPLAH solution was adjusted to the optimized pH and temperature. Simultaneously, the isotherm data were subjected to two well-known theoretical isotherm models: the Langmuir [26] and Freundlich [27] models. R_L is a dimensionless constant applied as an important equilibrium parameter of the Langmuir isotherm [28]. Dynamic Sorption Experiments on the Chromatography Column The dynamic sorption procedure was carried out according to our recent publication [9] with modifications.
By using a 2.5 cm × 46 cm glass column wet-packed with 4.4 g of dried XAD7HP resin, the dynamic adsorption and desorption experiments were performed. The resin bed volume (BV) was kept at 200 mL. A 750 mg portion of OPLAH was mixed with 150 mL of deionized water to form a 5 mg/mL solution. The pH of the filtered solution was adjusted to pH 5, applied to the glass column, and allowed to elute at a flow rate of 0.3 mL/min. The eluates were collected every 10 mL for UHPLC analysis. The 5% breakthrough and 95% saturation points were set based on the ratio of the outlet to the initial concentration (C/C_o) of each flavonoid C-glycoside. After reaching the saturation point, the desorption process proceeded by first washing the column with 30 mL of deionized water to remove the residue and then eluting with EtOH, which acted as the desorbing solvent, at a flow rate of 0.3 mL/min. The eluates were collected every 10 mL for UHPLC analysis. All dynamic sorption experiments were carried out in triplicate and at the optimized temperature. The breakthrough and desorption curves were plotted to determine the breakthrough and saturation points. To select a suitable ethanol concentration for optimal desorption, both the isocratic and gradient elution modes were performed. For the isocratic mode, upon reaching equilibrium, 20% EtOH was loaded to elute the adsorbed flavonoids. The experiment was repeated using different EtOH concentrations (40%, 60%, 80%, and 95%). For the gradient elution mode, a separate set of experiments was performed by eluting the adsorbed flavonoids using successive EtOH concentrations of 20%, 40%, 60%, 80%, and 95%. The collected fractions for both modes were concentrated using a rotary evaporator, freeze-dried, weighed, and subjected to UHPLC analysis. Adsorption and Desorption Capacity, Kinetics, and Isotherm Model Equations The following standard model equations were used. Adsorption capacity: q_e = (C_o − C_e)V/W. Desorption capacity: q_d = C_d V/W. Pseudo-first-order: ln(q_e − q_t) = ln q_e − k_1 t. Pseudo-second-order: t/q_t = 1/(k_2 q_e²) + t/q_e. Intraparticle diffusion: q_t = k_p t^(1/2) + C. Langmuir: q_e = q_m C_e/(K_L + C_e). Freundlich: q_e = K_f C_e^(1/n). R_L = 1/(1 + C_o/K_L). Here, q_e, q_d, q_m, and q_t are the adsorption capacity, desorption capacity, maximum adsorption capacity, and adsorption capacity at different contact times (t, min), respectively, stated as mg/g of dry resin; C_o and C_e are the initial and equilibrium sample concentrations, respectively, while C_d is the sample concentration in the desorption solution (these concentrations are measured in mg/mL); V, W, and C are the volume of the initial sample solution (mL), the weight of the resin (g), and the constant representing the boundary layer diffusion effects (mg/g), respectively; k_1, k_2, and k_p are the pseudo-first-order rate constant (1/min), pseudo-second-order rate constant (g/mg·min), and particle diffusion rate constant (mg/g·min^(1/2)), respectively; K_L is the Langmuir constant (mg/mL), and K_f and 1/n are the Freundlich constants ((mg/g)(mL/mg)^(1/n)). Determination of the Total Flavonoid Content and Antioxidant Free Radical Scavenging Activities Evaluation of the total flavonoid content (TFC) was conducted using an aluminum chloride complex colorimetric assay [9]. Briefly, a 125 µL aliquot of 0.1 mg/mL OPL extract was transferred into a 2 mL microcentrifuge tube. Subsequently, 375 µL of 95% EtOH, 25 µL of a 10% aluminum chloride solution, 25 µL of a 1 mol/L sodium acetate solution, and 700 µL of distilled water were added, and the mixture was vortex-mixed (Vortex IKA MS 3 Basic, Selangor, Malaysia).
A 200 µL aliquot of the mixture was then transferred into 96-well plates and incubated for 40 min at 25 °C, and the absorbance was recorded at 415 nm on a Tecan Infinite F200 Pro plate reader (Tecan Group Ltd., Männedorf, Switzerland). All tests were performed in triplicate. The TFC values were expressed in milligrams of quercetin equivalents per gram of extract (mg QCE/g extract). The antioxidant assays, namely the 1,1-diphenyl-2-picrylhydrazyl (DPPH) and nitric oxide (NO) free radical scavenging assays, were carried out according to a previous report [9]. The samples were prepared at 1000 µg/mL as a stock solution and serially diluted. For the DPPH assay, aliquots of 50 µL of the sample working solution were pipetted into a microtiter well plate, and 100 µL of a 59 µg/mL DPPH solution was added to each. The reaction mixtures were mixed well and incubated in the dark for 30 min, after which their absorbances were recorded at 515 nm. Similarly, for the NO assay, aliquots of 60 µL of the test concentrations were pipetted into the microtiter well plate, and 60 µL of a sodium nitroprusside solution was added to each. The reaction mixtures were mixed well and incubated for 150 min at 25 °C. Griess reagent (60 µL) was then added to each well, and the absorbance was measured at 550 nm. The scavenging activity (SA) was calculated as SA% = [(A_o − A_s)/A_o] × 100, where A_o and A_s are the absorbances of the blank and the test sample, respectively. In this experiment, quercetin was used as a positive control. The experiment was carried out in triplicate, and the results were expressed as IC50 values in µg/mL. Statistical Analysis The InStat V2.02 statistical package (GraphPad Software, San Diego, CA, USA) and Minitab statistical software (Version 16, Minitab Inc., State College, PA, USA) were employed for all data analyses. For the analysis of significant differences, one-way analysis of variance (ANOVA) followed by Tukey's test was employed. The significance level was set at p < 0.05. All data are shown as the mean of three replicates (n = 3). Adsorption and Desorption Capacities of Selected MARs The sorption capacities of the three MARs (XAD7HP, DAX-8, and XAD4) for the four flavonoid C-glycosides in OPLAH are shown in Figure 2. The adsorption capacity of the XAD7HP resin was 7.62 mg/g, which was higher than those of DAX-8 and XAD4 at 7.41 mg/g and 0.92 mg/g, respectively. With a value of 6.73 mg/g, the desorption capacity of the XAD7HP resin was also higher than that of DAX-8 at 4.89 mg/g. Meanwhile, no desorption of flavonoid C-glycosides was observed for XAD4. Referring to these data, the XAD7HP resin had the best sorption capacities, demonstrating that an acrylic matrix, moderate polarity, medium surface area, and large average pore diameter are the most suitable characteristics of MARs for the adsorption and desorption of the major OPLAH flavonoid C-glycosides. These findings are consistent with previous reports of high sorption capacities of the XAD7HP resin and low sorption capacities of XAD4 for grapefruit polyphenols [29] and oleuropein from olive (Olea europaea) leaves [30]. Therefore, the XAD7HP resin was selected for further evaluation.
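As a concrete illustration of how static screening capacities of this kind follow from the capacity equations listed earlier, the short Python sketch below computes q_e and q_d from initial, equilibrium, and desorbed concentrations. The numerical values are hypothetical placeholders rather than data from this study, and the assumption that the desorption volume equals the adsorption volume is made only for the example.

# Static adsorption/desorption capacity from a batch screening test.
# Illustrative numbers only; the study's raw concentrations are not reproduced here.

def adsorption_capacity(c0, ce, volume_ml, resin_g):
    """q_e (mg/g dry resin) = (C_o - C_e) * V / W."""
    return (c0 - ce) * volume_ml / resin_g

def desorption_capacity(cd, volume_ml, resin_g):
    """q_d (mg/g dry resin) = C_d * V / W, assuming the desorption volume equals V."""
    return cd * volume_ml / resin_g

# Example: 5 mL of extract (total flavonoid C-glycosides ~0.20 mg/mL) on 0.1 g resin
c_initial, c_equilibrium, c_desorbed = 0.20, 0.05, 0.13   # mg/mL (hypothetical)
q_e = adsorption_capacity(c_initial, c_equilibrium, 5.0, 0.1)
q_d = desorption_capacity(c_desorbed, 5.0, 0.1)
print(f"q_e = {q_e:.2f} mg/g, q_d = {q_d:.2f} mg/g, desorption ratio = {100 * q_d / q_e:.1f}%")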
Effect of Oscillation Temperatures on the Sorption Capacities The oscillation temperature is crucial for the optimum sorption properties of the resins, as the intermolecular forces between the adsorbates and adsorbents can be altered by subjection to a suitable temperature. According to the results shown in Figure 3, there was no significant difference in the adsorption capacity of the flavonoid C-glycosides at the three different oscillation temperatures. However, the results were different for the desorption capacity, which decreased with an increase in the oscillation temperature. Similar results were reported in studies using MARs to enrich the C-glycosyl flavonoids found in trolliflowers and Abrus mollis [18,31]. Within the evaluated temperature range, the sorption process is thermopositive [32]. Hence, the optimal oscillation temperature selected was 298 K, since it demonstrated the highest adsorption and desorption capacities for the flavonoid C-glycosides. Adsorption Kinetics of the XAD7HP Resin The pH is a factor affecting the ionization capability of certain compounds in the solvent, which ultimately influences their adsorption affinity.
Hence, it is vital to perform the sorption at the right pH [33]. Figure 4 shows that the adsorption capacities (q_e) of the XAD7HP resin for isoorientin, orientin, vitexin, and isovitexin were higher at a pH of 5 than at pHs of 7 and 9. The q_e values for isoorientin, orientin, vitexin, and isovitexin decreased linearly as the pH increased. Based on these observations, hydrogen bonding was deemed to play a significant role in the sorption on the XAD7HP resin. The reduction in adsorption capacity at higher pH values may have been due to the decrease in hydrogen bonding interactions caused by the deprotonation of hydroxyl groups in the flavonoid C-glycosides and the formation of their corresponding anions [19]. On the other hand, a low pH led to an abundance of hydronium ions at the surface of the resins, which may have enhanced the hydrogen bonding between the hydroxyl groups present in the flavonoid C-glycosides and the XAD7HP resin, subsequently enhancing the adsorption capacity. The better adsorption capacity of flavonoid C-glycosides in acidic rather than basic conditions has been reported previously [18,19]. The kinetics of adsorption, which describes the solute uptake rate governing the contact time of the sorption reaction, is an important characteristic that defines the sorption efficiency [34]. Hence, the adsorption behavior of the XAD7HP resin can be understood by assessing the adsorption kinetics of the flavonoid C-glycosides. Figure 4D presents the adsorption capacity q_t versus contact time (t, min) curves for the XAD7HP resin at different pH levels at 298 K. Overall, the q_t values increased with time before reaching equilibrium [34]. The equilibrium time for the flavonoid C-glycosides on the XAD7HP resin was up to 24 h. There are three commonly suggested kinetic models for adsorption: the pseudo-first-order, pseudo-second-order, and intraparticle diffusion kinetic models [24,25]. Overall, the correlation coefficient (R²) values revealed that the adsorption of flavonoid C-glycosides on the XAD7HP resin fit a pseudo-second-order kinetic model better than a pseudo-first-order model.
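The kinetic-model comparison above can be reproduced on any q_t versus t series with a least-squares fit. The sketch below is a minimal Python example using the nonlinear (integrated) forms of the pseudo-first-order and pseudo-second-order models; the study itself may have used the linearized forms, and the time series here is invented for illustration, not taken from Figure 4D.

# Fitting pseudo-first-order and pseudo-second-order kinetics to q_t vs t data.
# Data points are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([15, 30, 60, 120, 180, 240, 300, 360, 480, 1440], dtype=float)  # min
qt = np.array([2.1, 3.0, 3.9, 4.6, 5.0, 5.3, 5.5, 5.6, 5.7, 5.8])            # mg/g

def pfo(t, qe, k1):
    # integrated pseudo-first-order: q_t = q_e * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # integrated pseudo-second-order: q_t = k2 * qe^2 * t / (1 + k2 * qe * t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for name, model, p0 in [("pseudo-first-order", pfo, (6.0, 0.01)),
                        ("pseudo-second-order", pso, (6.0, 0.005))]:
    popt, _ = curve_fit(model, t, qt, p0=p0, maxfev=10000)
    qe_fit, k_fit = popt
    print(f"{name}: q_e = {qe_fit:.2f} mg/g, k = {k_fit:.4f}, "
          f"R2 = {r_squared(qt, model(t, *popt)):.4f}")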
In addition, Table 2 also reveals the multilinear character of the adsorption of the flavonoid C-glycosides on the XAD7HP resin, based on the R² values of the intraparticle diffusion kinetics model. The intraparticle diffusion curves of the XAD7HP resin show poor linearity over time. Taking a pH of 5 as an example, the whole process can be divided into three major phases: boundary layer diffusion (0-30 min), where the adsorption took place rapidly; a gradual adsorption phase (30-240 min), where the adsorption happened slowly; and finally, the equilibrium phase (240-1440 min), where the adsorption reached equilibrium. Similar results were reported in previous studies that showed intraparticle diffusion took place in the adsorption phase [35]. In the present study, the whole adsorption process could not be represented by the particle diffusion kinetic model due to weak R² values. Nevertheless, it can still explain the adsorption mechanism up to a certain phase [36]. However, it is important to note that different flavonoids, including isoorientin, orientin, vitexin, and isovitexin, will have different ratios of molecular to ionic forms in various pH environments. This will probably result in a more complex adsorption mechanism and kinetics, which will require more extensive future studies for greater insight into the mechanisms involved. Table 2. Pseudo-first-order and pseudo-second-order kinetic equations and the intraparticle diffusion equation for the major C-glycosyl flavonoids in the OPLAH extract on the XAD7HP resin. Adsorption Isotherms on the XAD7HP Resin The adsorption isotherms of the flavonoid C-glycosides on the XAD7HP resin were determined at room temperature (298 K) after taking into consideration several factors, including practicality and energy conservation. Figure S1 shows the isotherm curves for the individual flavonoid C-glycosides in OPLAH. The adsorption behaviors of the flavonoid C-glycosides on the XAD7HP resin were further assessed by using two adsorption isotherm equations, namely the Langmuir and Freundlich equations. The equations reveal the interaction between the compounds and the resin [37]. The Langmuir and Freundlich parameters are listed in Table 3.
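For the isotherm analysis that follows, a minimal fitting sketch is shown below. It fits the Langmuir and Freundlich models to a hypothetical q_e versus C_e series and evaluates the separation factor R_L for an assumed initial concentration. The parameterization follows the equations given earlier (with K_L expressed in concentration units), and all data values are placeholders rather than Table 3 results.

# Fitting Langmuir and Freundlich isotherms to equilibrium data (q_e vs C_e).
# Concentrations are hypothetical, not the study's values.
import numpy as np
from scipy.optimize import curve_fit

ce = np.array([0.01, 0.05, 0.10, 0.20, 0.40, 0.80])   # mg/mL at equilibrium
qe = np.array([0.9, 3.2, 4.8, 6.4, 7.6, 8.3])          # mg/g resin

def langmuir(ce, qm, kl):
    # q_e = q_m * C_e / (K_L + C_e), with K_L in mg/mL
    return qm * ce / (kl + ce)

def freundlich(ce, kf, n_inv):
    # q_e = K_f * C_e ** (1/n)
    return kf * ce ** n_inv

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=(9.0, 0.1))
(kf, n_inv), _ = curve_fit(freundlich, ce, qe, p0=(8.0, 0.5))

c0 = 0.5                                 # assumed initial concentration, mg/mL
r_l = 1.0 / (1.0 + c0 / kl)              # separation factor for this Langmuir form
print(f"Langmuir:   q_m = {qm:.2f}, K_L = {kl:.3f}, R2 = {r2(qe, langmuir(ce, qm, kl)):.4f}, R_L = {r_l:.3f}")
print(f"Freundlich: K_f = {kf:.2f}, 1/n = {n_inv:.3f}, R2 = {r2(qe, freundlich(ce, kf, n_inv)):.4f}")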
The R² values of the two models were relatively high for isoorientin, orientin, and vitexin. With R² values of 0.9977 and 0.9519, respectively, the adsorption behavior of isoorientin and orientin on the XAD7HP resin followed the Langmuir equation. The results indicated that these two compounds displayed monolayer adsorption on the resin, suggesting that orientin and its isomer were in contact with the surface layer of the XAD7HP resin. Meanwhile, vitexin and isovitexin followed the Freundlich equation with R² = 0.9700 and 0.8418, respectively, indicating that the adsorption of these isomeric compounds followed a multilayer process wherein the XAD7HP resin accommodated more than one layer for the adsorption of vitexin and its pair to take place. This situation could be related to the molecular structures of vitexin/isovitexin and orientin/isoorientin [38]. As shown in Figure 1A, the molecular sizes of the vitexin and isovitexin structures are relatively smaller compared with orientin and isoorientin, due to the lack of one hydroxyl group (-OH). This could have reduced the steric hindrance in the interaction between vitexin/isovitexin and the XAD7HP resin, thus favoring a multilayer adsorption process. For the Langmuir model, the value of R_L indicates the isotherm shape, which is either unfavorable (R_L > 1), linear (R_L = 1), favorable (0 < R_L < 1), or irreversible (R_L = 0) [39]. Hence, the present findings showed that the adsorption of the flavonoid C-glycosides on the XAD7HP resin was favorable. The 1/n value is a measure of the adsorption intensity [38]. A value of 1/n above 2 indicates that adsorption is unlikely to happen [40]. In this study, the 1/n values of the flavonoid C-glycosides were all above 2, suggesting that the XAD7HP resin was a suitable resin to use for adsorbing the flavonoid C-glycosides from OPLAH. Dynamic Sorption Properties of the XAD7HP Resin On an open column, information on the breakthrough volume is important for estimating the optimum volume of sample containing the compounds of interest that can be loaded onto the column. The breakthrough point was set at 5% of the inlet concentration [9]. As shown in Figure 5A, the dynamic breakthrough curves on the XAD7HP resin were attained for isoorientin, orientin, vitexin, and isovitexin. The breakthrough volume of isoorientin on the XAD7HP resin was 100 mL, while that of orientin, vitexin, and isovitexin was 30 mL. Meanwhile, the saturation point was defined as the point at which the exit solute concentration reached 95% of the inlet concentration [9]. The saturation volume of isoorientin and isovitexin was 150 mL, while that of orientin and vitexin was 130 mL. The dynamic desorption curves for isoorientin, orientin, vitexin, and isovitexin on the XAD7HP resin are shown in Figure 5B,C. The results indicate that at 150 mL, the flavonoids could be sufficiently desorbed and eluted off the XAD7HP resin column.
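The breakthrough and saturation points described above are simply the loading volumes at which the effluent-to-feed ratio C/C_o first reaches 0.05 and 0.95. The sketch below locates them by linear interpolation on a synthetic S-shaped loading curve; the curve is invented for illustration and is not the study's data (those are in Figure 5A).

# Locating the 5% breakthrough and 95% saturation points on a dynamic loading curve.
import numpy as np

loaded_volume = np.arange(10, 201, 10, dtype=float)         # mL collected in 10 mL fractions
c_ratio = 1.0 / (1.0 + np.exp(-(loaded_volume - 90) / 18))  # C/C0, synthetic S-shaped curve

def crossing_volume(volume, ratio, level):
    """First volume at which C/C0 reaches `level`, by linear interpolation."""
    idx = np.argmax(ratio >= level)
    if idx == 0 and ratio[0] < level:
        return None                       # level never reached
    if idx == 0:
        return float(volume[0])
    v0, v1 = volume[idx - 1], volume[idx]
    r0, r1 = ratio[idx - 1], ratio[idx]
    return v0 + (level - r0) * (v1 - v0) / (r1 - r0)

print(f"breakthrough (C/C0 = 0.05): {crossing_volume(loaded_volume, c_ratio, 0.05):.0f} mL")
print(f"saturation   (C/C0 = 0.95): {crossing_volume(loaded_volume, c_ratio, 0.95):.0f} mL")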
Comparison between Isocratic and Gradient Elution Modes for Optimal Flavonoid C-Glycoside Enrichment Enrichment of the OPLAH flavonoid C-glycosides was carried out via isocratic and gradient elution modes by using EtOH as the desorbing solvent, after considering its low cost, ease of removal, and low toxicity [41]. Previous studies have also used EtOH to desorb flavonoid C-glycosides from various other MARs [20,31,42]. The desorbed fractions from the XAD7HP resin were analyzed qualitatively and quantitatively and compared to the original OPLAH. For the isocratic elution mode, a single desorbing solvent system was applied.
As shown in Figure 6A, the amount of desorbed flavonoid C-glycosides increased with an increase in the EtOH concentration (from 20% to 95%). Orientin and vitexin were enriched the most when 80% EtOH was used as the single desorbing solvent, as a further increase to 95% EtOH gave insignificant changes. Their respective isomers, isoorientin and isovitexin, were found at the highest fold at 95% EtOH. Meanwhile, a multiple desorbing solvent system was employed in the gradient elution mode. Figure 6B shows that orientin, isoorientin, vitexin, and isovitexin started to desorb rapidly from 20% to 40% EtOH and started to decrease as the EtOH concentration increased from 60% to 95%. Thus, the results revealed that the flavonoid C-glycosides found in the OPLAH solution could be desorbed optimally at 95% and 40% EtOH for the isocratic and gradient desorption techniques, respectively. The desorption of flavonoid C-glycosides from the XAD7HP resin into the solvent was attributed to the competition between the intermolecular forces of interaction and dissolution into the solvent used [9,42]. The UHPLC chromatograms of the OPLAH and the enriched fractions obtained from the isocratic desorption (95% EtOH) and gradient desorption (40% EtOH) modes are shown in Figure S2A. Comparing the chromatograms of the enriched fractions and OPLAH, it can be observed that some impurities present in the original extract were eliminated, while the relative peak areas of the four major flavonoid C-glycosides increased by different degrees. The compounds assigned to peaks 1-4 were confirmed by commercial standards and characterized by liquid chromatography tandem mass spectrometry (LC-MS/MS) (Figure S2B) [10]. The rest of the unassigned peaks have been comprehensively discussed in our previous publications [7,9,12].
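The enrichment factors reported in the next paragraph are straightforward ratios of compound content in the enriched fraction to that in the original extract, and the TFC fold is computed the same way. A minimal Python sketch with made-up numbers (not Table 4 values) is given below.

# Enrichment factor bookkeeping: fold change of each compound and of the TFC
# after resin treatment. The numbers are placeholders for illustration only.

def fold(enriched, original):
    """Enrichment factor = content in enriched fraction / content in original extract."""
    return enriched / original

original = {"isoorientin": 2.0, "orientin": 9.0, "vitexin": 85.0, "isovitexin": 0.25}     # µg/mg (hypothetical)
enriched = {"isoorientin": 50.0, "orientin": 135.0, "vitexin": 1700.0, "isovitexin": 15.0}

for compound in original:
    print(f"{compound:12s} enriched {fold(enriched[compound], original[compound]):.1f}-fold")

tfc_before, tfc_after = 89.0, 284.0      # mg QCE/g extract (hypothetical)
print(f"{'TFC':12s} enriched {fold(tfc_after, tfc_before):.1f}-fold")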
Table 4 summarizes the quantitative information for the OPLAH and the enriched fractions obtained through the isocratic (95% EtOH) and gradient (40% EtOH) elution modes. The XAD7HP resin was able to increase the TFC from 88.98 mg QCE/g up to 284.18 mg QCE/g dried extract, a 3.2-fold increase. Among the four flavonoid C-glycosides, at the optimum EtOH concentrations, isovitexin was enriched the most, followed by isoorientin, vitexin, and orientin. As illustrated by Figure 6A for the isocratic elution mode, with 95% EtOH, isovitexin was enriched almost 55-fold, while isoorientin, vitexin, and orientin were enriched by 11- to 20-fold. A similar trend was also observed in the gradient elution mode, where at 40% EtOH, isovitexin was enriched by 60-fold, isoorientin by 25-fold, vitexin by 20-fold, and orientin by 15-fold (Figure 6B). Antioxidant DPPH and NO Free Radical Scavenging Activities The antioxidant activities of the OPLAH, the enriched fractions, and the individual flavonoid C-glycosides were tested using the DPPH and NO free radical scavenging assays, and the results are shown in Table 4. The DPPH results revealed that the total flavonoid C-glycoside enriched fractions, obtained using the isocratic and gradient desorption methods, exhibited stronger antioxidant activity, with IC50 values of 69.19 and 70.63 µg/mL, respectively. In comparison, the IC50 value of the original OPLAH extract was much higher (200 µg/mL). The results for the NO free radical scavenging assay were similar, as the enriched fractions exhibited significantly improved activity compared with the original extract. This increase in antioxidant activity indicates the substantial contribution of the enriched flavonoid C-glycoside content to the overall activity. The results highlight that the DPPH and NO free radical scavenging assays were in good agreement for evaluating the antioxidant activities of both the original and enriched fractions. Previous studies have also reported a positive correlation between free radical scavenging activity and the presence of high amounts of phenolic constituents [43,44]. The individual flavonoid C-glycosides were also assayed for free radical scavenging activities. Isoorientin, with an IC50 value of 14.70 µg/mg, exhibited superior DPPH free radical scavenging activity. Its isomer, orientin, was moderately active, with an IC50 value of 57.60 µg/mg, while isovitexin and vitexin were weakly active in comparison. The results are in agreement with previous studies that reported vitexin and isovitexin to be poor DPPH free radical scavengers [45,46]. The structural differences of these flavonoid C-glycosides, such as the position of the glycosidic linkages and the number or position of the hydroxyl groups, have a significant effect on the bioactivity. For example, the presence of a single hydroxyl group on the B ring of both vitexin and isovitexin may be the reason for their lower activity in comparison with isoorientin and orientin, which have two hydroxyl groups on the same ring (Figure 1A). Additionally, the weaker activity of isovitexin could be due to steric hindrance associated with glycosylation at C-6, as compared with C-8 for its isomer, vitexin [47]. Meanwhile, the NO free radical scavenging activity was quite different from the DPPH free radical scavenging activity. All tested apigenin and luteolin C-glycosides showed good antioxidant activity, with a great ability to inhibit nitric oxide and superoxide anion at low concentrations.
In contrast to the DPPH assay results, isovitexin and vitexin exhibited strong NO scavenging activity, with IC50 values of 0.73 and 4.31 µg/mg, respectively, whereas orientin and isoorientin exhibited weaker activity. These results reflect the superiority of isovitexin and vitexin in scavenging nitric oxide and superoxide radicals, as similarly reported in a study of Trigonella foenum-graecum L. (fenugreek seeds) [47]. Adsorption Mechanisms The efficiency of the adsorption and desorption processes primarily relies on the polarity of the MARs. In this experiment, the tested MARs comprised both nonpolar (XAD4) and moderately polar (XAD7HP and DAX-8) resins. The data obtained showed that moderately polar resins were more appropriate for entrapping and releasing flavonoid C-glycosides from OPL extract. The polarity matching between the extract and resin was related to the multiple interactions between the targeted metabolites and the surface chemistry of the resin [9,19,29,48]. Being a nonpolar resin, XAD4, with its smaller pore size, has low wettability and thus is not well-dispersed in an aqueous solution, which explains the low adsorption and lack of desorption of the polar flavonoid C-glycosides [29]. Figure 7 displays the possible interactions between isoorientin, orientin, vitexin, and isovitexin and the moderately polar XAD7HP resin under acidic conditions. The early part of the study indicated that isovitexin showed the highest adsorption and desorption capacities, followed by vitexin, orientin, and isoorientin, suggesting multiple interactions, such as electrostatic interactions, intramolecular and intermolecular hydrogen bonding, ion-dipole interactions, cation-π interactions, and Van der Waals forces with the adsorbent. The XAD7HP resin was more favorable for isovitexin compared with the other compounds (Figure 7). More specifically, the hydroxyl groups at C-4′, C-5, or C-7 of the flavones have been reported to be more acidic than the hydroxyl groups attached at other positions [49]. Thus, it is highly likely that the electrostatic interactions of the flavonoids on the surface of the resins could have resulted from the attraction of protons dissociated from the hydroxyl groups at these positions. The hydroxyl groups of the flavonoid C-glycosides can also interact with the resin through the formation of intramolecular and intermolecular hydrogen bonds [50]. Furthermore, polar adsorbates can engage in ion-dipole interactions with the polar segments of the moderately polar resin. Adsorption can also be facilitated by cation-π interactions [29], which can occur between the hydronium ions (H3O+) surrounding the XAD7HP resin and the benzene rings of the flavonoids. Lastly, the main driving force for the sorption process on the polymeric XAD7HP resin is the existence of Van der Waals forces in an aqueous solvent system. Therefore, based on the high sorption capacities obtained in the study, the efficient simultaneous sorption of isoorientin, orientin, vitexin, and isovitexin was suggested to be substantially contributed by multiple interactions [9,42].
The findings from both the isocratic and gradient elution experiments showed that isovitexin was enriched by between 55- and 60-fold, while isoorientin, vitexin, and orientin were enriched by 20-25-, 15-20-, and 11-15-fold, respectively. This phenomenon could be due to strong hydrogen bonding interactions between the hydroxyl groups attached to the aglycones and the surface of the cross-linked polymeric resins [9,19]. The studied flavonoid C-glycosides share a similar flavone aglycone, with differences in the position of the sugar moiety, as shown in Figure 1A. It was previously reported that a flavonoid molecule with an attached sugar moiety can reach approximately 2.0 nm in size [50], and the additional hydroxyl groups contributed by the glucosides collectively enhance the sorption process on the large-pore, moderately polar XAD7HP resin. Moreover, between the isomeric pairs, isovitexin/vitexin showed higher enrichment than isoorientin/orientin. This could be explained by steric hindrance [48,51,52]. Referring to Figure 1A, the vitexin isomers have one hydroxyl group bonded at C-4′ (para position) of ring B, whereas the orientin isomers have two hydroxyl groups attached at C-3′ (meta position) and C-4′ (para position) of the same ring. Since the hydroxyl group is a para-directing group, attachment of these hydroxyl groups at the para position lessens the steric hindrance, resulting in less repulsion between the groups and assisting the interaction of these groups with the resin through hydrogen bonds. However, the addition of one hydroxyl group at the C-3′ position of the orientin isomers increases the steric hindrance, as the hydroxyl group is larger than the hydrogen atom, causing congestion that may have slowed down the interaction with the surface of the resins. Conclusions The present study provides experimental data on the enrichment of the total flavonoid C-glycoside content via a process combining acid hydrolysis with adsorption and desorption on MARs. The XAD7HP resin showed the best sorption capacities. The enrichment of the flavonoid C-glycoside content of OPL extract was conducted at optimal conditions, where the leaf extract, prehydrolyzed with acid and adjusted to a pH of 5, was shaken at 298 K for a period of 24 h for static adsorption. The adsorption process of the target flavonoids on the XAD7HP resin could be well described by the pseudo-second-order kinetic model.
The equilibrium experimental data for the adsorption of isoorientin and orientin on the XAD7HP resin at 298 K were well fitted by the Langmuir isotherm model, while those of vitexin and isovitexin were well described by the Freundlich isotherm model. The enriched fractions recovered using the isocratic (95% EtOH) and gradient (40% EtOH) elution modes produced up to 60-fold flavonoid enrichment with excellent antioxidant free radical scavenging activities. The enriched OPLAH contained isoorientin (247.28-284.18 µg/mg), orientin (104.88-136.19 µg/mg), vitexin (1197.61-1726.11 µg/mg), and isovitexin (13.03-14.61 µg/mg), as compared to the OPLAH with isoorientin (2.34 µg/mg), orientin (9.35 µg/mg), vitexin (84.11 µg/mg), and isovitexin (0.25 µg/mg). Additionally, the enriched OPLAH also showed excellent antioxidant free radical scavenging activities compared with the OPLAH, with IC50 values of 6.90-70.63 µg/mL and 44.58-200.00 µg/mL, respectively. Strong hydrogen bonding may explain the efficient enrichment of the target flavonoid C-glycosides. The results indicated that the combination of acid treatment and MARs can selectively and effectively enrich flavonoid C-glycosides from OPL. This study presents a simple, rapid, and efficient method for enriching the flavonoid C-glycoside content of oil palm leaf extract, a major type of agricultural waste that has been underutilized. This method provides several potential applications, such as the further purification of major flavonoid C-glycosides as fine chemicals or pharmaceuticals, or the use of the enriched fraction as a bioactive ingredient in nutraceutical, cosmeceutical, and other healthcare or personal care products.
PROPOSAL OF THE SPATIAL DEPENDENCE EVALUATION FROM THE POWER SEMIVARIOGRAM MODEL In Geostatistics, the use of a measurement to describe the spatial dependence of an attribute is of great importance, but only some models (those with second-order stationarity) are covered by such measurements. Thus, this paper aims to propose measurements to assess the degree of spatial dependence in phenomena fitted by the power model. From a premise that considers the equivalent sill as the estimated semivariance value that matches the point where the adjusted power model curves intersect, it is possible to build two indexes to evaluate such dependence. The first one, SPD, is obtained from the relation between the equivalent contribution (α) and the equivalent sill (C = C0 + α), and varies from 0 to 100% (based on the calculation of spatial dependence areas). The second one, SDI, in addition to the previous relation, considers the equivalent factor of model (FM), which depends on the exponent β that describes the strength of spatial dependence in the power model (based on spatial correlation areas). For β close to 2, the SDI assumes its largest scale, varying from 0 to 66.67%. Both indexes have a symmetrical distribution and allow the classification of spatial dependence as weak, moderate, or strong. Introduction In geostatistics applications, in general, the spatial dependence (or spatial autocorrelation) is assessed by the semivariogram study, which is the most important tool for such evaluation (Seidel, Oliveira, 2013; 2014a). This method requires expertise and time from the researcher, since it is not always easy to visualize the shape of the semivariogram model that best fits the data. Among the models fitted to the semivariogram, the most used are the spherical, exponential, and Gaussian (Lourenço, Landim, 2005; Seidel, Oliveira, 2013). These models feature a sill; in other words, they comply with second-order stationarity and have four parameters: nugget effect, contribution, sill, and range. According to Seidel, Oliveira (2014a), because it is a descriptor with plenty of graphic detail, the semivariogram generates a lot of information, making it necessary to construct an auxiliary numerical measure of spatial dependence. Such a measure may summarize the entire set of semivariographic information to complement the semivariogram study. Besides that, according to Biondi, Myers, Avery (1994), spatial dependence measures are important to compare phenomena (different spatial dependence scenarios) because they assess the degree of dependence. More recently, Seidel, Oliveira (2014a) proposed a new measurement to calculate the degree of spatial dependence (the spatial dependence index, SDI), which considers the nugget effect (C0), the contribution (C1), the range (a), the factor of model (FM), and the maximum distance (MD) between pairs of sample points. The factor of model (FM), according to Seidel, Oliveira (2014a; 2014b), can be understood as a value that expresses the strength of spatial dependence that the model can achieve.
However, when models that do not reach a sill are fitted, there are, as far as is known, no proposed spatial dependence measures, because the measures presented in the literature consider the sill in their formulations. From this point on, models that do not reach a sill will be referred to simply as models without sill. Thus, to contemplate situations without second-order stationarity, we justify the attempt to create spatial dependence measures for semivariograms fitted with models without sill. Therefore, in this study, measures to assess the degree of spatial dependence in phenomena fitted by the power model are proposed. Methodology The index named relative nugget effect (NE) (Trangmar, Yost, Uehara, 1985; Cambardella et al., 1994) relates the nugget effect and the sill and is given by the expression NE(%) = [C0 / (C0 + C1)] × 100 (1), where C0 is the nugget effect and C1 is the contribution. According to Cambardella et al. (1994), the NE(%) can be classified as follows: strong spatial dependence from 0 to 25%, moderate spatial dependence from 25 to 75%, and weak spatial dependence from 75 to 100%. The measure proposed by Biondi, Myers, Avery (1994), in which the contribution and the sill are related, is denominated spatial dependence (SPD) and is given by SPD(%) = [C1 / (C0 + C1)] × 100 (2), where C0 is the nugget effect and C1 is the contribution. Adapting the classification of Cambardella et al. (1994), the SPD(%) index is classified as: weak spatial dependence from 0 to 25%, moderate spatial dependence from 25 to 75%, and strong spatial dependence from 75 to 100%. It is possible to observe that the NE(%) and SPD(%) indexes are complementary, because SPD(%) = 100% − NE(%). Thereby, from this point on, only the SPD index and the adapted classification of Cambardella et al. (1994) are used in the descriptions and discussions. Another index, created and proposed more recently by Seidel, Oliveira (2014a), is the spatial dependence index (SDI). This index contemplates more parameters of the models and is given by the following equation: SDI(%) = FM × [C1 / (C0 + C1)] × [a / (q·MD)] × 100 (3), where C0 is the nugget effect, C1 is the contribution, a is the range, FM is the factor of model, and MD is the maximum distance between pairs of sample points. In the validation study of the SDI, Seidel, Oliveira (2014a) considered q = 0.5, generating a denominator q·MD equivalent to half the largest distance between sample points. It is possible to observe that the indexes presented previously are applicable only to second-order stationary models, whose sill is reached. However, the power model has no such feature, making the direct application of the indexes, as they are defined, impossible. The power model, which is stationary only under the intrinsic hypothesis, is given as (Olea, 2006): γ(h) = C0 + α·h^β, with 0 < β < 2 (4), where C0 is the nugget effect, α is the inclination (or slope), β is the power (or exponent), and h is the distance between points. Graphically, the model can be seen in Figure 1a. The only parameter contained in the indexes expressed in Equations 1, 2, and 3 and also present in the power model of Equation 4 is the nugget effect. That way, from the power model it is necessary to create equivalent parameters that simulate the behavior of the sill, contribution, and range parameters, making it possible to apply spatial dependence indexes to this model.
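For models with a sill, the NE(%) and SPD(%) indexes in Equations 1 and 2, together with the adapted Cambardella-style classification, can be computed directly from the fitted nugget effect and contribution. The Python sketch below does exactly that for hypothetical parameter values; the SDI of Equation 3 could be added analogously once FM, a, q, and MD are known.

# Spatial dependence indexes for a fitted semivariogram with sill (Equations 1 and 2),
# with the Cambardella-style classification used in the text.

def ne_percent(c0, c1):
    """Relative nugget effect: NE(%) = C0 / (C0 + C1) * 100."""
    return 100.0 * c0 / (c0 + c1)

def spd_percent(c0, c1):
    """Spatial dependence: SPD(%) = C1 / (C0 + C1) * 100 = 100 - NE(%)."""
    return 100.0 * c1 / (c0 + c1)

def classify_spd(spd):
    if spd < 25.0:
        return "weak"
    if spd <= 75.0:
        return "moderate"
    return "strong"

c0, c1 = 0.5, 2.0        # nugget effect and contribution from a hypothetical fitted model
spd = spd_percent(c0, c1)
print(f"NE = {ne_percent(c0, c1):.1f}%  SPD = {spd:.1f}%  ({classify_spd(spd)} spatial dependence)")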
Therefore, firstly, a methodology was developed to create equivalent parameters: equivalent sill, equivalent contribution and equivalent range in the power model.For such, some assumptions may be used.A first possible assumption would be to consider that the equivalent sill might be equal to the semivariance value corresponding to the value of the sample variance.Another possible assumption considers that the equivalent sill may be equal to the semivariance value in which the theoretical curves of the power model intersect (see Figure 1b). The first assumption was inspired by the fact that, in models with sill, there is similarity between the sample variance and the sample sill (Trangmar, Yost, Uehara, 1985;Lima, G. et al., 2014;Nagahama et al., 2014;Lima, J. et al., 2014;Jordão et al., 2015).So that, it was considered as possible to extend this idea to the power model, turning the sample variance approximately equal to the value of sample equivalent sill, in other words, the sample variance could be considered as the estimate of the equivalent sill. In this approach, the equality is considered: equivalent sill (C * ) = sample variance (S²).From this equality, the relationship is defined: Thus, understanding that the estimated equivalent contribution is equal to a value W, such as: W = S² - ̂0, it is possible to observe that the applicability of this approach has the restriction that S² must be greater or equal to the estimated nugget effect (S² ≥ ̂0). Furthermore, the estimated equivalent range ( ̂ * ) is the value of the distance (h) with the estimated semivariance [γ(h)] equal to the sample variance (S²), that is, the estimated equivalent range ( ̂ * ) is the value of the distance (h), such as: This assumption, based on equality between the estimated equivalent sill and the sample variance, has weaknesses in its application, because it depends on the occurrence of the S² ≥ ̂0 condition.This condition may not always really occur and does not depend on the spatial behavior of the studied phenomenon, since the value of S² depends only on the sampling distribution of the phenomenon.Thus, in this article, the calculation of indexes from this approach is not developed.Jia et al. (2009) presents the possibility of considering the value of 95% of the highest semivariance obtained in the semivariogram sample as an equivalent value to a sill, in linear and power models.This approach also seems to be arbitrary, because it is not necessarily that this 95% cut of the semivariogram would be the best estimate of an equivalent sill.For this reason, it will not be developed in this article. The second possible approach is more general and always applicable because it depends only on the elements of spatial behavior of the phenomenon under study.This approach is based on the assumption that the equivalent sill can be assessed as the value of the estimated semivariance [γ(h)] that coincides with the point at which the adjusted power model curves intersect.Graphically, the justification for this second approach is illustrated in Figure 1b.On this assumption, the estimated equivalent sill ( ̂ * ) is equal to the value of γ(h) for which h is equal to 1.That means, ̂ * = ̂0 + ̂.From this equality, the relation is defined as: Thereby, the estimated equivalent contribution ( ̂1 * ) is equal to the estimated slope coefficient ( ̂). 
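The displayed relations for the two candidate sets of equivalent parameters also did not survive extraction; the following is a hedged reconstruction consistent with the description above (hats denote estimates).

```latex
% Assumption 1: equivalent sill anchored to the sample variance (requires S^2 >= \hat{C}_0)
\hat{C}^{*} = S^{2}, \qquad
\hat{C}_{1}^{*} = W = S^{2} - \hat{C}_{0}, \qquad
\hat{C}_{0} + \hat{\alpha}\,(\hat{a}^{*})^{\hat{\beta}} = S^{2}
\;\Rightarrow\;
\hat{a}^{*} = \left(\frac{S^{2} - \hat{C}_{0}}{\hat{\alpha}}\right)^{1/\hat{\beta}} .

% Assumption 2: equivalent sill at the intersection of the fitted power-model curves
\hat{C}^{*} = \hat{\gamma}(1) = \hat{C}_{0} + \hat{\alpha}, \qquad
\hat{C}_{1}^{*} = \hat{\alpha}, \qquad
\hat{a}^{*} = 1 .
```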
Beyond that, the estimated equivalent range (â*) is equal to 1, which means that the estimated equivalent sill is the value of the estimated semivariance at the distance h = 1, because it ensures Ĉ0 + α̂·h^β̂ = Ĉ*. Figure 1b shows that the sum of the estimated nugget effect and the estimated slope coefficient is equal to the estimated equivalent sill. Next, it is demonstrated that â* = 1. Equating two fitted power-model curves that share the nugget effect and slope but have different exponents gives Ĉ0 + α̂·h^β1 = Ĉ0 + α̂·h^β2, that is, h^β1 = h^β2; for this equality to be true, h = 0 or h = 1. As the curves visibly coincide at h = 0, the origin of the semivariogram (see Figure 1), the non-zero intersection found is h = 1. Thus, â* = 1 and Ĉ* = Ĉ0 + α̂. The next subsection of the article deals with the construction of the SPD* index, from the approach of estimating the equivalent sill by the intersection of the power model curves. This index is constructed in an attempt to, in cases of power model application, imitate the SPD index proposed by Biondi, Myers, Avery (1994), which is applied in cases of models with sill. The Construction of the SPD* index The SPD* index is calculated based on an adaptation of the concept of spatial dependence areas (Seidel, Oliveira, 2014b; 2015). The index is obtained as the ratio between the equivalent observed spatial dependence area (SDA*observed) and the equivalent maximum spatial dependence area (SDA*maximum), that is, SPD*(%) = (SDA*observed / SDA*maximum) × 100, where SDA*observed is given as the integral of the difference between the equivalent sill and the power model, and SDA*maximum is given as the integral of the difference between the equivalent sill and the adapted power model (C0 = 0). Both integrals are defined between zero and the equivalent range. Figure 2 shows the equivalent observed and maximum spatial dependence areas. In the next subsection, the proposed construction of the SDI* index is described. This index is built in an attempt to, in cases of power model application, imitate the SDI index proposed by Seidel, Oliveira (2014a), which is applied in cases of models with sill. The Construction of the SDI* index The SDI* index is constructed from an adaptation of the concept of spatial correlation areas (Seidel, Oliveira, 2014a). Here, the SDI* index is built from the calculation of the equivalent observed spatial correlation area (SCA*observed), where SCA*observed is obtained through the integral, defined between zero and the equivalent range, of the difference between 1 and the ratio between the power model and the equivalent sill, and MD is the longest distance between sample points. Figure 3 shows the equivalent observed spatial correlation area. Indexes classification At this stage of the study, the categorization of the indexes is developed to enable the classification of spatial dependence in terms of weak, moderate and strong, based on the classification suggested by Cambardella et al. (1994) for the NE(%) index. The intention is to perform the categorization by making two cuts in the distribution of index values, the first at the value corresponding to the 1st quartile, and the second at the 3rd quartile, similarly to Cambardella et al. (1994), who proposed cuts at 25% (value of the 1st quartile) and 75% (value of the 3rd quartile) for an index whose values range from 0 to 100%. After this, to show the validity and applicability of the indexes, real data from articles in which the power model was used were applied in the study. Then, the indexes were calculated and the spatial dependence was classified.
Results and discussion From the premise that the value of the equivalent sill (C * ) is given by C0 + α, wherein a * = 1, the SPD * index calculation is given as follows: Taking a * =1, there is: Thus, there is: where C0 is the nugget effect and α is the slope coefficient (equivalent contribution). This index assumes values from 0 to 100%, analogously to the SPD(%) index given by Biondi, Myers, Avery (1994).Thus, the same way that in SPD(%) index, it is assumed that the SPD * (%) index has symmetric distribution and it can be classified according to the same principle applied to the SPD(%) index, adapting the classification of Cambardella et al. (1994).Therefore, the classification of SPD * (%) is: On the assumption used for the construction of SPD * (C * = C0 + α) and utilizing the methodology of SDI index proposed by Seidel, Oliveira (2014a), it is possible to develop the calculation of SDI * index as follows: * in evidence, there is: Taking a * =1, there is: As a * = 1, it does not make sense to keep the correction factor ( 1 × ), since it does not have the effect of equivalent range in the expression of SDI * .Thereby, there is: Finally, there is: where C0 is the nugget effect, α is the slope coefficient (equivalent contribution) and β is the exponent.The term ) is the equivalent factor of model (FM * ) of the power model.Then, it is observed that the FM * for the power model depends only on the β parameter.Thus, for 0 < β < 1 there is 0 < FM * < 0.500.For β = 1 there is FM * = 0.500.And, for 1 < β < 2 there is 0.500 < FM * < 0.667.This makes the index assume values from 0 to FM * ×100%.For example, in the case of power model with β = 1, the value of SDI * (%) can vary in the range between 0 and 0.500×100%, that is, between 0 and 50%. Differently from the behavior of SPD * , the principle is similar to the distribution of SPD index.Regarding SDI * , it is necessary to make a theoretical study of its distribution.): sequence from 0 to 1, with variation 0.01.As the power model varies the scale of its distribution depending on the value of β, FM * has different possible values.Thus, for each possible value of β, the 101 generated values are multiplied by FM * ×100% to generate the specific distribution of SDI * , corresponding to each possible power model behavior.Figure 4 shows box plots of distributions of the theoretical values for some situations.In Figure 4a there is an example of distribution of the SDI * (%) values for FM * = 0.400 (0 < FM * < 0.500; 0 < β < 1). Figure 4b shows the distribution of SDI * (%) when FM * = 0.500 (β = 1).And the Figure 4c shows the behavior of SDI * (%) for FM * = 0.600 (0.500 < FM * < 0.667; 1 < β < 2).The classification of SDI * (%) is performed based on the distribution of theoretical values, in each power model behavior (variation of β), as a set of data that is desired to categorize into three levels: weak, moderate, and strong spatial dependence.To do this, we calculate the first and third quartile with the intention to categorize the SDI * (%) inspired by the classification of Cambardella et al. 
(1994), applied to indexes with symmetrical behavior, which has cuts in 25% and 75% corresponding to cuts in the 1st and 3rd quartiles, respectively.Thus, for the SDI * (%), which also has symmetrical behavior as seen in Figure 4, the cuts are also made in the values corresponding to these two quartiles.That way, generalizing to any FM * (any β), the classification of the SDI * (%) is given as: To illustrate this classification, there were taken as an example, the values of the factors of model used for the construction of the box plot in Figure 4.For the FM * =0.400, weak spatial dependence when 0 ≤ SDI * ≤ 10%, moderate spatial dependence when 10% < SDI * ≤ 30% and strong spatial dependence when 30% < SDI * ≤ 40% were taken into consideration.For the FM * =0.500 it was noticed the weak spatial dependence for 0 ≤ SDI * ≤ 12.5%, the moderate spatial dependence for 12.5% < SDI * ≤ 37.5% and the strong spatial dependence for 37.5% < SDI * ≤ 50%.Finally, for the FM * =0.600 weak spatial dependence when 0 ≤ SDI * ≤ 15%, moderate spatial dependence for 15% < SDI * ≤ 45% and strong spatial dependence when 45% < SDI * ≤ 60% were detected. As β varies in the power model, the FM * consequently varies its distribution.This behavior is different from the factors of model to the spherical, exponential and Gaussian models, which are fixed in each model, assuming, respectively, the values 0.375, 0.317 and 0.504 (Seidel, Oliveira, 2014a;2014b;2015).The maximum FM * that can be obtained in the power model is close to 0.667 (when β is close to 2).This value is the highest among the semivariogram models already discussed (spherical, exponential, Gaussian and power). The expression of SDI * index (Equation 9), as generated in this article, can be understood as the product of FM * and the SPD * index (Equation 8), that is, SDI * = FM * ×SPD * .In other words, the SDI * index is analogous to SDI2 index obtained by Seidel, Oliveira (2015), for the spherical, Gaussian and exponential models, from a geometrical perspective of semivariogram. To exemplify the applicability of the SPD * and SDI * indexes, so that researchers can use them in their future studies, the real data obtained from some geosciences and rural sciences articles (Pardo-Igúzquiza, 1998;Makkawi, 2004;Jorge, 2009;Masseran et al., 2012;Shah, Patel, 2012) were taken in order to calculate the indexes and classify the spatial dependence.These articles present power model adjustment in the semivariogram to estimate the spatial dependence.And this application is presented in Table 1.Table 1 shows that it was possible to apply the indexes and their corresponding classifications on real data to show the applicability of the methodology.It was noted strong, moderate and weak spatial dependence classification.It is important to remind users that the two indexes (SPD * and SDI * ) generate the same classification of spatial dependence.However, the SDI * index has the possibility to evaluate the force of spatial dependence because this index considers the factor of model in its expression. Conclusion Two new indexes for measuring the spatial dependence when using the power semivariogram model are proposed and justified from geostatistical arguments: the SPD * and SDI * indexes. The SPD * has symmetric distribution, holding scale of values ranging from 0 to 100%.The classification of Cambardella et al. (1994) can be applied to this index. 
The SDI* also features a symmetrical distribution. However, its scale depends on the value of FM* and consequently on the β parameter. This index can be classified from the 1st quartile (0.25×FM*×100) and the 3rd quartile (0.75×FM*×100). Both indexes generate the same spatial dependence classification. However, the use of the SDI* index allows the evaluation of the strength of spatial dependence, through the factor of model. For both indexes, the spatial dependence classification can be made considering the levels: weak, moderate and strong spatial dependence. This study was performed as a proposal for the creation of indexes for the evaluation of spatial dependence in models that do not reach a sill, in comparison with indexes already existing in the literature for second-order stationarity models. Thus, as a preliminary work, more research and applications on this topic are needed, making further comparisons possible and verifying the applicability and reliability of the proposed indexes. To find the value of h where the power-model curves intersect, it is only necessary to equate two of their equations. This can be checked for any case of two arbitrary exponents (0 < β1 ≤ 1 and 1 ≤ β2 < 2), with β1 ≠ β2: C0 + α·h^β1 = C0 + α·h^β2 implies h^β1 = h^β2, whose only solutions are h = 0 and h = 1. Table 1: Estimates of the power model parameters, SPD*, FM*, SDI* and spatial dependence classification as exemplification on real data. Sources: 1 Shah, Patel (2012); 2 Masseran et al. (2012). Attributes: I Pre monsoon season in India; II Post monsoon season in India; III Wind speed in East Malaysia; IV Wind speed in Peninsular Malaysia; V Spatial dimension of shallow groundwater; VI Piezometric levels; VII Rainfall in Malaga (Spain); VIII Piezometric heads; IX Soil erosion in Botucatu-SP.
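To make the computation of the proposed indexes concrete, a minimal sketch is given below; it assumes the closed forms SPD* = α/(C0 + α) × 100 and SDI* = FM* × SPD* with FM* = β/(β + 1), which is our reading of the expressions derived above (FM* = β/(β + 1) reproduces FM* = 0.5 at β = 1 and FM* → 2/3 as β → 2). The function names and example values are illustrative, not part of the original study.

```python
# Minimal sketch (not from the original paper): compute SPD*, FM*, SDI* for a fitted
# power semivariogram gamma(h) = C0 + alpha * h**beta, and classify spatial dependence.

def spd_star(c0, alpha):
    """SPD*(%) = alpha / (C0 + alpha) * 100, based on the equivalent sill C* = C0 + alpha."""
    return 100.0 * alpha / (c0 + alpha)

def fm_star(beta):
    """Equivalent factor of model; beta/(beta+1) gives 0.5 at beta=1 and tends to 2/3 as beta->2."""
    return beta / (beta + 1.0)

def sdi_star(c0, alpha, beta):
    """SDI*(%) = FM* x SPD*(%)."""
    return fm_star(beta) * spd_star(c0, alpha)

def classify(value, maximum):
    """Quartile-based cuts: weak <= 0.25*max < moderate <= 0.75*max < strong."""
    if value <= 0.25 * maximum:
        return "weak"
    if value <= 0.75 * maximum:
        return "moderate"
    return "strong"

if __name__ == "__main__":
    c0, alpha, beta = 0.10, 0.90, 1.2          # illustrative fitted parameters
    spd = spd_star(c0, alpha)                  # scale 0-100%
    sdi = sdi_star(c0, alpha, beta)            # scale 0-(FM* x 100)%
    print(f"SPD* = {spd:.1f}% ({classify(spd, 100.0)})")
    print(f"SDI* = {sdi:.1f}% ({classify(sdi, 100.0 * fm_star(beta))})")
```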
4,995.2
2017-09-25T00:00:00.000
[ "Geography", "Mathematics" ]
Neutron stars in the Witten-Sakai-Sugimoto model We utilize the top-down holographic QCD model, the Witten-Sakai-Sugimoto model, in a hybrid setting with the SLy4, soft chiral EFT and stiff chiral EFT equations of state to describe neutron stars with high precision. In particular, we employ a calibration that bootstraps the nuclear matter by fitting the Kaluza-Klein scale and the ’t Hooft coupling such that the physical saturation density and physical symmetry energy are achieved. We obtain static stable neutron star mass-radius data via the Tolman-Oppenheimer-Volkov equations that yield sufficiently large maximal masses of neutron stars to be compatible with the recently observed PSR-J0952-0607 data as well as all other known radius and tidal deformation constraints. Introduction Neutron matter at densities present in the cores of neutron stars is difficult to study for two reasons: first of all, nucleons at low energy are described dominantly by the strong force, which is hard to tackle due to the nonperturbativity of QCD at strong coupling.The other reason making the study difficult, is that phenomenological models that can capture nuclear physics at low densities are practically impossible to extract reliably to large densities due to the bottom-up approach of such models with a large number of effective couplings that all, in principle, depend on the density.Although perturbative QCD at extremely large densities is able to predict reliably an equation of state [1], such densities are roughly an order of magnitude larger than what is needed for neutron stars. Holographic QCD models, thanks to their ability to capture a large class of nonperturbative phenomena, have recently been employed with some success to provide predictions for nuclear matter at large densities, and hence represent a valuable tool for the firstprinciples computation of the equation of state of nuclear matter in regimes typical for the core of neutron stars.Holographic QCD models are a branch of AdS/CFT models developed after Maldacena [2] and Witten's work [3] which are focused on the strong sector, viz.Quantum Chromodynamics and they come generally in two kinds: top-down models derived from string theory constructions, or bottom-up models attempting to mimic the strong gluodynamics of QCD with an appropriate 5-dimensional curved background.In this work, we shall concentrate on the top-down model known as the Witten-Sakai-Sugimoto (WSS) model [3][4][5], which is especially predictive because it contains only two adjustable parameters, the Kaluza-Klein scale (M KK ) and the 't Hooft coupling λ = g 2 N c .Usually these parameters are fitted to the meson sector of the theory, reproducing the pion decay constant and the rho-meson mass [4]: when doing so, the model performs rather poorly in the baryonic sector, while still providing some deep qualitative insights.To overcome this shortcoming, and to try to be as quantitatively precise as the model's approximations allow, in this paper we shall employ a different calibration of the model, which we will justify by the fact that we are working exclusively with baryons in a homogeneous configuration, see below for further discussion on the justification. Holographic QCD models have been employed earlier in the literature in order to describe neutron stars, in the WSS model with taking the symmetry energy into account [6], in V-QCD [7] which is a highly customized phenomenological version of holographic QCD based on the Veneziano limit -i.e. 
the limit of a large number of color as well as a large number of quark flavors, in a D3/D7 model [8][9][10], as well as in a hard-wall model [11] (see Ref. [12,13] for nice reviews on the topic).Baryons in the WSS model are instantons in the 4-dimensional subspace spanned by the spatial and holographic directions of the 5-dimensional curved AdS-like spacetime and are initially best understood via the BPST flat-instanton approximation [14], which is later shown to be valid only in the large 't Hooft coupling limit [15].In the context of neutron stars it was shown that the instanton in the pointlike approximation fails to describe tidal deformabilities and baryon masses at the same time [16].Using instead the homogeneous Ansatz for nuclear matter has previously been quite successful in describing neutron stars, at least barring changing the meson fit of the WSS model to one more appropriate for baryonic matter at finite densities [6].Although the work [6] holographically takes the symmetry energy [17], charge neutrality and β-equilibrium into account and dynamically determines the crust of the star, their approach has been plagued with unrealistically large symmetry energies, a factor of 30 bigger than its phenomenological value, which in turn shows up as large deviations of physical neutron stars with respect to isospin symmetric neutron stars, for example in the mass-radius data.Such a large overestimation of the symmetry energy can be traced back to the combination of two factors: on one hand one of the gauge fields (the Abelian spatial component) is taken to vanish, an approximation reliable in the large λ limit, but not upon extrapolation to finite λ, while another overall factor of N 2 c compared to our result arises from a different choice in the definition of the large-N c isospin number for the nucleon states. In a recent paper [18], the present authors have employed a conceptually simpler approach to computing the symmetry energy in the WSS model in the approximation of using the homogeneous Ansatz and have shown that it can obtain realistic values, if the model is calibrated with a smaller Kaluza-Klein scale and a larger 't Hooft coupling, than the usual meson fit.Not only is the WSS model able to reproduce the ∼ 32MeV symmetry energy at saturation density, but the slope and second derivatives of the symmetry energy as a function of the density can be made to be consistent with all current experimental constraints -coming both from nuclear physics as well as from neutron star data. In this paper, we use this result of a realistic symmetry energy as a function of the density and to ensure this is the case, we choose to bootstrap the model by setting the saturation energy to the physically accepted one (∼ 0.16 fm −3 ), which fixes the Kaluza-Klein scale and then adjust the 't Hooft coupling to obtain the correct value of the symmetry energy at saturation density.It turns out that this calibration scheme gives values of the couplings very close to those for which the symmetry energy passes all current experimental bounds as a function of the density.As the final ingredient in obtaining realistic neutron stars, requiring a physical equation of state from very low to very high (medium) densities, we take the hybrid approach in this paper as already introduced in Ref. [8], by patching/gluing a very reliable equation of state from nuclear physics at low densities together with our equation of state obtained from the WSS model in our calibration. 
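Since the hybrid construction mentioned above amounts to joining a low-density nuclear EOS table to the holographic one, the following is a minimal sketch of such a matching step, assuming tabulated EOS data with columns (n_B, p, E) and requiring continuity of pressure and baryon number density at the junction; the table format, numbers and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Schematic sketch: patch a low-density EOS table onto a high-density (holographic) EOS table
# by requiring pressure and baryon number density to be continuous at the junction.
# Each table holds columns (n_B, pressure, energy density), sorted in n_B; values are toy numbers.

def hybridize(low, high):
    """Return a combined EOS table and the junction density.

    The junction is taken as the lowest density on the high-density grid where the
    high-density branch reaches at least the pressure of the low-density branch.
    """
    n_lo, p_lo, _ = low.T
    n_hi, p_hi, _ = high.T
    p_lo_on_hi = np.interp(n_hi, n_lo, p_lo)      # low-density pressure on the high-density grid
    junction = np.argmax(p_hi >= p_lo_on_hi)      # first index where the branches cross
    n_j = n_hi[junction]
    return np.vstack([low[n_lo < n_j], high[junction:]]), n_j

if __name__ == "__main__":
    # toy tables: n_B in fm^-3, p and E in MeV fm^-3 (purely illustrative)
    low = np.array([[0.08, 0.4, 75.0], [0.12, 1.2, 113.0], [0.16, 2.6, 151.0], [0.20, 5.0, 190.0]])
    high = np.array([[0.15, 2.0, 160.0], [0.17, 3.5, 175.0], [0.20, 7.0, 205.0], [0.30, 30.0, 320.0]])
    eos, n_junction = hybridize(low, high)
    print(f"junction density ~ {n_junction:.2f} fm^-3")
```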
This paper is organized as follows.In Sec. 2 we introduce the WSS model as well as our notation.We review the static homogeneous Ansatz in Sec. 3, and include time dependence in Sec. 4. In Sec. 5, we review the relations for obtaining charge neutrality, β-equilibrium and the corresponding thermodynamic relations.We then describe our fit and the hybrid equation of state in Sec.6 and present the results of the neutron star mass-radius diagrams in Sec. 7. We conclude the paper in Sec. 8 with a discussion and outlook. Witten-Sakai-Sugimoto model The holographic model we choose to employ for the description of nuclear matter at high density is the Witten-Sakai-Sugimoto model [4,5], a top-down model of holographic QCD based on the engineering of a couple of stacks of N f D8/D8-branes on the supergravity background sourced by N c D4-branes.We will work in the configuration of antipodal branes and in the confined geometry phase, so that the flavor branes will extend all the way down to the tip of the cigar subspace, spanning all the holographic direction. In absence of a mass term for quarks (which we will neglect), the theory after dimensional reduction to five dimensions is given by a Dirac-Born-Infeld action supplemented by a Chern-Simons term.Replacing the Dirac-Born-Infeld action at quadratic order in the field strength, results in a Yang-Mills theory in curved space, with the addition of the topological Chern-Simons term, so that the full action within these approximations read with the parameter κ and the warp factors k(z), h(z) given by where λ is the 't Hooft coupling and N c is the number of colors.The U(2) gauge field 1-form A is split into SU(2) and U(1) factors as the field strength is given by F = dA + A ∧ A, τ a are the standard Pauli spin matrices, and the spacetime indices are denoted as α, β, . . .= 0, M ; M, N, . . .= i, z ; i, j, . . .= 1, 2, 3. The Chern-Simons 5-form ω 5 is given by where the powers of forms are understood by the wedge product.The U(N f ) gauge field describes flavor degrees of freedom: In the present work we will assume the presence of two light flavors, thus setting N f = 2 and neglecting contributions from the presence of the strange quark, leaving their inclusion for future improvements.With the notation (3) we can rewrite the Chen-Simons term as We will assume the gauge fields A i , A i to vanish at z = ±∞, hence we drop the total derivative term.The top-down nature of the model enables it to only rely on two free parameters to be fitted: the 't Hooft coupling λ and the Kaluza-Klein scale M KK .Note that the scale does not appear in any of our expressions when doing calculations within the model: it is the only mass scale of the model, and we can freely choose to work in units of M KK , and then restore the correct power of M KK to obtain the results in physical units. 3 Holographic nuclear matter: static homogeneous Ansatz In the context of holographic QCD, baryons are described as topological solitons of the flavor fields, and the Witten-Sakai-Sugimoto model is no exception [15].The Witten-Sakai-Sugimoto soliton can be approximated as a BPST instanton [14,19] in the limit of large λ: its instanton number is then identified as the baryon number, allowing for the description of nuclei on top of single baryons. 
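For orientation, since the displayed formulas of this section were lost in extraction, the standard antipodal WSS Yang-Mills plus Chern-Simons action is reproduced below in units of M_KK = 1; this is a reconstruction from the standard literature, so normalization conventions may differ slightly from the ones used in this paper.

```latex
S = -\kappa \int d^4x\,dz\;
      \mathrm{tr}\!\left[\tfrac{1}{2}\,h(z)\,\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}
      + k(z)\,\mathcal{F}_{\mu z}\mathcal{F}^{\mu}{}_{z}\right]
    + \frac{N_c}{24\pi^2}\int \omega_5(\mathcal{A}),
\qquad
\kappa = \frac{\lambda N_c}{216\pi^3},
\qquad
k(z) = 1+z^2, \quad h(z) = (1+z^2)^{-1/3},

\mathcal{A} = A^a\,\frac{\tau^a}{2} + \hat{A}\,\frac{\mathbf{1}_2}{2},
\qquad
\omega_5(\mathcal{A}) = \mathrm{tr}\!\left(\mathcal{A}\mathcal{F}^2
  - \tfrac{i}{2}\,\mathcal{A}^3\mathcal{F} - \tfrac{1}{10}\,\mathcal{A}^5\right).
```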
To exactly describe infinite nuclear matter in this model is to solve a many-soliton problem in five-dimensional curved space time: this task is too hard to be carried out for any practical application, so what is done in most cases is to rely on some combination of approximations and educated guesses.A possibility is to arrange individual solitons in a lattice [20], minimizing the free energy density to determine the elementary cell shape and size (hence the density): this kind of approach is generally more reliable at low density, when the individual baryons are well separated, and approximations made to compute the interactions between solitons are well under control. At densities around nuclear saturation and above, another possible approximation exists: since nucleons are tightly packed, we can approximate their spatial distribution in three-dimensional space to be homogeneous, forming a uniform distribution of continuous nuclear matter.In this process every information about individual baryons is lost, in favor of intensive quantities such as the baryonic and isospin densities.Despite such a configuration not existing under assumptions of homogeneity and regularity of the gauge fields as shown in Ref. [21], it turns out that it is still possible to define a homogeneous Ansatz by modifying these assumptions, either by enforcing homogeneity at the level of the field strengths [22], or by introducing a discontinuity in the homogeneous gauge fields [23].For the purpose of this work, we will follow the second route, introducing a discontinuity in the SU(2) gauge field that will source a finite baryon density.The great simplification that this approach produces lies in the reduction of complicated sets of PDEs to more manageable sets of ODEs, the only remaining variable being the holographic coordinate z (and time, whose inclusion we will discuss in the next section).The homogeneous Ansatz in the static approximation is given by the gauge field configuration with a 0 = a 0 (z), H = H(z) being functions of only the holographic coordinate z. The function H(z) encodes baryonic density, as it can be thought of as the spaceaveraged many-soliton distribution.It does so in a nontrivial way if we allow it to be discontinuous: for simplicity we will assume a discontinuity is present in the function H(z) at z = 0, which is the coordinate at which single solitons sit to minimize energy, however it is possible to consider configurations with more discontinuities located at finite z = ±z 0 (see Ref. [24] for the treatment in both the Witten-Sakai-Sugimoto and the VQCD models).The baryon density d is given as We see immediately that a continuous function H(z) would not be able to describe nuclear matter at any finite density: it has to be an odd, discontinuous function of z.We choose the function H(z) to be vanishing at the UV boundary z = ±∞, so that we are left with an IR boundary condition for H(z) reading From now on, we will always perform integrals over z on the "positive z" half of the connected branes, accounting for the symmetry of the integrands with an overall factor of two.Note that despite the discontinuity in H(z), the field strengths F a M N are still continuous, even functions of z (though in general not differentiable at z = 0). 
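Schematically, and omitting normalization factors that vary between conventions, the static homogeneous Ansatz described above can be summarized as follows (a hedged reconstruction, since the displayed expressions did not survive extraction).

```latex
A_i = H(z)\,\frac{\tau_i}{2}, \qquad \hat{A}_0 = a_0(z), \qquad
\text{all other components vanishing},

H(-z) = -H(z), \qquad H(\pm\infty) = 0, \qquad
H(0^{\pm}) = \pm H_0 \neq 0
\;\;\text{(the discontinuity at } z=0 \text{ sources the baryon density } d\text{)}.
```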
The asymptotic UV value assumed by the function a 0 (z) is mapped into the baryonic chemical potential as per the standard holographic dictionary: we can then impose boundary conditions and numerically solve the equations of motion With the normalization chosen for the asymptotics of a 0 , the baryon number chemical potential µ B is given by For every value of the parameter µ, the corresponding thermodynamic equilibrium value of the density d(µ) is given by minimizing the free energy Ω, holographically dual to the on-shell action as Ω = −S on-shell .The baryonic phase is favored over the vacuum when Ω(d, µ) ≤ 0, a condition that is satisfied for d(µ) ≥ d 0 (µ onset ).The value d 0 is the model prediction for nuclear saturation density of symmetric matter, and will play an important role in our choice for the fit of the free parameters λ, M KK . Isospin asymmetry: time-dependent homogeneous Ansatz The time-independent configuration introduced in the previous section only accounts for symmetric baryonic matter.This can be understood by thinking about the single-baryon: a single static soliton is a classical object, that carries no information about the nucleon states.To have a spectrum including isospin states we have to perform moduli space quantization [14]: an arbitrary rotation in SU(2)-space that corresponds to a zeromode is introduced by performing the transformation τ i → aτ i a −1 .The orientation matrix a is then promoted to a time-dependent operator a(t), in terms of which it is possible to compute an effective quantum mechanical Hamiltonian that will give the single-baryon spectrum.The same procedure can be applied to the homogeneous Ansatz, as described in detail in Ref. [18].In particular, the homogeneous Ansatz is similar in its structure to the BPST instanton, but simpler in that it lacks the position and size moduli.We then follow the same procedure in the treatment of rotational moduli a(t) ∈ SU(2), and we define the angular velocity χ i as: The introduction of the SU(2) moduli and their time dependence turns on new components of the gauge field, so that the new homogeneous Ansatz for the iso-rotating configuration is given by The classical action obtained from the time-dependent homogeneous Ansatz following the prescription introduced in Ref. [25] to choose the correct Chern-Simons term, is from which the equations of motion for the fields H(z), a 0 (z), L(z), G(z) can be derived: The equations of motion have to be supplemented with adequate boundary conditions: following Ref.[25] we choose them so to cancel the infrared localized terms in the variation of the action, which amounts to require: The field A 0 encodes via the holographic dictionary the information on the isospin density n I and chemical potential µ I .This can be understood as follows: G(z) can be thought of as a holographic profile for the angular velocity χ i , which can be traded for the corresponding angular momentum and then canonically quantized in the moduli space approximation.The resulting operator is identified with both spin and isospin operators (an artifact of the spherical symmetry of the setup).When we do this, the resulting quantum Hamiltonian operator is given by with V being the three-dimensional (infinite) volume, I 2 is the squared isospin operator, and Λ and U given by They are respectively the moment of inertia of a rigid rotor and its internal energy density, which in our physical setup corresponds to the energy density of symmetric nuclear matter. 
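One consistent reading of the quantized Hamiltonian described above, whose displayed form was lost in extraction, is the standard rigid-rotor expression below; the precise normalization should be checked against Ref. [18].

```latex
\mathcal{H} = U\,V + \frac{\mathbf{I}^{2}}{2\,\Lambda\,V}
\;\;\xrightarrow[\;n_I = I_3/V\;]{\text{thermodynamic limit}}\;\;
\mathcal{E}(d, n_I) = U(d) + \frac{n_I^{2}}{2\,\Lambda(d)} .
```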
From this Hamiltonian operator, we obtain the energy per nucleon of an isospin state |i, i 3 ⟩, from which we can read off the symmetry-energy parameter of infinite nuclear matter S N (d): with β = (N − Z)/A being the isospin asymmetry parameter.We will also need the energy density since our final aim is to obtain an equation of state for isospin asymmetric nuclear matter in β-equilibrium with leptons.From the expressions above, the energy density of isospin-asymmetric matter is obtained from U, Λ as The isospin chemical potential µ I can be obtained from the expression above by differentiating with respect to n I , so that Note that we have defined n I from the isospin quantum number i 3 , which assume values ±1/2 for the ground states.We choose to identify the positive eigenvalue to correspond to the proton, and the negative one to the neutron, following conventions from nuclear physics.With this choices, if the nuclear matter is rich in neutrons (as we expect it to be following beta equilibrium and charge neutrality), we will have n I < 0 and correspondingly also the isospin chemical potential µ I will be negative due to Eq. ( 28). β-equilibrated matter The presence of the symmetry energy favors isospin-symmetric matter: the neutron rich matter that composes the core neutron stars is the result of the presence of negatively charged leptons that impose electric charge neutrality to the system.The leptons with their (anti)neutrinos, the neutrons and the protons are in equilibrium with respect to β-decay.This condition, together with charge neutrality, is sufficient to calculate the fractions of each particle species at any given density.As discussed in Ref. [18] with the same normalizations used in this work, the imposition of β-equilibrium (together with the decoupling of the neutrinos) and electric charge neutrality lead respectively to the conditions where µ X is the chemical potential of the particle species X and we accounted for the presence of electrons (e) and muons (µ) in equilibrium with protons (P ) and neutrons (N ), while neglecting the presence of the heavier taus (τ ). The leptonic number densities n ℓ are taken as originating from the free energy of a (massive) Fermi gas: from which follows We will approximate the electrons to be massless, while keeping the muons massive with m phys µ = 105.66MeV,while the parameter appearing in the equations is the corresponding value in units of M KK , m µ = m phys µ M −1 KK . 1 Inserting Eq. ( 32) into the charge-neutrality condition (30) and using the β-equilibrium condition (29), we obtain an implicit solution for the isospin density, n I , as a function of the baryon density d (note that with the present conventions, n I is always negative in neutron rich matter, and that Λ is always positive by construction, as appropriate for its interpretation as a moment of inertia): From n I (d) it is then possible to compute the density of every particle species as To build an equation of state for the matter we just described, we need now to compute the total energy density and pressure.The building blocks are the energy densities U and pressure P 0 of symmetric nuclear matter, the contributions E I and P I due to isospin asymmetry, and the contributions E ℓ , P ℓ of leptons. We already know an expression for U from Eq. 
( 25), which we can use to compute the pressure P 0 using µ B and d For the contributions arising from isospin asymmetry, we make use of the known relations ( 28), ( 27): Finally, for the leptons we can use the fact that in a homogeneous system P = −Ω holds, so that we can compute P ℓ directly from Eq. ( 31), from which the energy density follows where we have made use of charge neutrality and β-equilibrium.As a result, we can finally write the total energy density and pressure as with every quantity involved being a function of only the baryon density d. Nuclear matter fit and hybrid equation of state It is well known that the most common fit of the Witten-Sakai-Sugimoto model (M KK = 949MeV, λ = 16.6)performs very well in the mesonic sector, but that its quantitative reliability when employed to compute quantities of the baryonic sector is somewhat lacking (in particular the masses of the baryons are largely overestimated, together with the binding energies of nuclei).When we move to the dense nuclear matter described by the homogeneous Ansatz, the situation gets worse and contact with phenomenology is lost.For example, the saturation density for λ = 16.6 is found to be d λ=16.6 0 = 0.00385M 3 KK , while the symmetry energy at saturation is of order S λ=16.6 N (d λ=16.6 0 ) ≃ 0.1M KK : when the value M KK = 949MeV is plugged into these expressions, the results are respectively Figure 1: The curve M KK (λ) that fits the saturation density of symmetric nuclear matter to its phenomenological value of d phys 0 = 0.16 fm −3 .The two dots marked on the plot correspond to the two fits we employed, corresponding to the upper and lower bounds on the symmetry energy of 32.8 MeV and 30.6 MeV, respectively.d 0 = 0.43 fm −3 and S N (d 0 ) = 97MeV.If we ignore these issues and try to build the core of neutron stars with this fit, we unsurprisingly find that they never reach 1.5M ⊙ , and their radius can at most be slightly larger than 7km.Realistic values for the highest mass for a stable star and for the average radii have been obtained with a fully holographic equation of state (though with the phenomenological input of the surface tension of the domain wall between nuclear matter and vacuum) from the Witten-Sakai-Sugimoto model in Ref. [6]: to obtain a realistic-looking neutron star while keeping M KK = 949MeV, it is necessary to decrease λ to around the value λ = 10, with the results being highly sensitive to changes around this value.Realistic masses and radii for neutron stars as functions of the model parameters λ, M KK within this setup have been further explored in Ref. [26].However, even producing stars with adequate radii and masses, nuclear matter inside the stars of Ref. [6] fails to be as rich in neutrons as we expect in neutron stars: this is due to the extremely high value of the symmetry energy, which pushes matter towards isospin symmetric configurations, and remains true even if we adopt the lower values of symmetry energies found in Ref. [18], as long as we adopt fits defined in the "QCD window" introduced in Ref. 
[26].Our aim is to describe realistic neutron star cores and connect the equation of state from holography with a phenomenologically accurate equation of state at lower energy densities: within this approach, it makes little sense to employ a fit that overestimates physical quantities that are crucial in the formation of the structure of neutron stars by a factor of roughly three.Moreover, the fit is done on single-particle properties, but information on individual particles is lost when employing the homogeneous Ansatz, which does not connect smoothly to the finite particle number setup.Right panel: the hybrid stiff-WSS EOS compared with model independent bounds from Ref. [27], derived from pQCD and shown with a dashed green line.EOS lying within the dark blue lines have an upper-bound on the speed of sound at the conformal value c 2 s < 1/3.The pink line represents the limiting case in which the speed of sound is constant along the EOS. We hence propose another choice of fit for the free parameters of the Witten-Sakai-Sugimoto model, expected to give precise results when applied to highly dense matter, especially in the context of the homogeneous Ansatz.The first quantity we fit is the nuclear saturation density for symmetric matter: phenomenologically this value lies at about 0.16 fm −3 .The value of the model-derived saturation density in units of M KK does not depend on M KK itself, so we only need to compute the value d 0 (λ) for a wide range of values of λ, then impose the condition This defines the function M KK (λ) we plot in figure 1.The second choice we employ to eliminate any further freedom is to have a realistic symmetry energy at saturation density. The phenomenologically accepted values range between 30.6 MeV and 32.8 MeV: we fit the model to both values, so to produce an "error bar" for our predictions.As a result, the two fits for which we will present results correspond to Using these fits we build the equation of state for holographic matter, which we assume to be the appropriate description at densities above saturation: to describe lower-density regions of the neutron star, namely the crust, we would need a more refined construction, possibly deviating from the homogeneous approximation, for example nuclear pasta phases. A similar construction was performed in Ref. [6], where a crust was built from the mixed phase of lumps of nuclear matter in β-equilibrium, immersed in a gas of leptons, with the surface tension of the domain wall, separating the phases, as the parameter to regulate the onset of the transition from homogeneous matter to the mixed phase.Here we follow another possible approach, that of a "hybrid" equation of state: within this framework, we forego the attempt of building the full equation of state from a single model, and instead patch together equations of state coming from different models, each derived within the domain of applicability of the models themselves.In particular, equations of state for low-density nuclear matter are already developed with a higher precision than the one we can hope to achieve within the holographic model: we then choose to employ a set of three different low energy equations of state: a soft and a stiff one, taken as the two limiting scenarios from chiral EFT interactions as described in Ref. [28], and the SLy4 equation of state tabulated in Ref. 
[29,30], which we then match at larger densities with the holographic equation of state described in the previous section: the criteria we require for the matching are that the pressure and baryon number density are continuous at the junction. The density at which the patching is performed is weakly dependent on the fit choice and on the low-density equation of state employed, in the range of d P ∈ (1.032d 0 , 1.059d 0 ), with lower values for the soft equation of state, intermediate values around 1.045d 0 for the SLy4, and the upper bound for the stiff equation of state.By doing this, we obtain the equations of state shown in Fig. 2, where they are compared with currently established bounds. From the equations of state, we can numerically compute the speed of sound: the resulting plot is shown in Fig. 3, and it shows the presence of a discontinuity at the density corresponding to the junction between the two equations of state from different models.The holographic equation of state is highly stiff as it rapidly crosses the conformal value of c 2 s = 1/3: this is a feature it shares with many equations of state derived from holography, but even for this category it achieves very high sound speeds, with the highest speed present in stable neutron stars ranging in the interval c 2 s ∈ (0.652, 0.660), depending on the choice of fit (the two star markers in Fig. 3 pinpoint these values).Note also that the stability of the stars is what limits the speed of sound to these values: the equation of state extrapolated to higher densities produces higher velocities, which however are never reached in stable stars. We can also test our choices of fit against the particle populations: using Eqs.( 34) to (37) we can compute the number densities of each species of particles, and compare them with those of the SLy4 equation of state at the junction point.The results are presented in the two panels of Fig. 4: we can see from the right panel that, while at the junction point the SLy4 populations to not fall precisely within the predictions of our two fit choices, the mismatch is quite small, and the results from the holographic model are well in the correct ballpark.From this result alone it would seem that a favored fit would lie closer to that of Eq. ( 46) corresponding to S N (d 0 ) = 32.8MeV, a tendency that will be reinforced in the next section by comparing the results for the tidal deformabilities of neutron stars.Before applying the computed equations of state to neutron stars, let us comment on the shortcomings of the approach and approximations we employed.The first noticeable feature shared by the equations of state plotted in Fig. 2 is the discontinuity in the energy density E, represented by the horizontal segment that connects the low-density branches to the holographic part: we regard this unexpected phase transition as an artifact due to the failure of the model, the Ansatz and the fit choice, to reproduce the correct onset value of the baryonic chemical potential, giving the value of 1345.5 MeV (1409.9MeV) for λ = 57.76(λ = 67.55).A more sophisticated construction of the transition between the two equations of states could in principle ameliorate the problem: at lower densities, just around saturation, we expect the homogeneous approximation to lose reliability, while configurations where baryons can individually be resolved become more realistic (e.g.lattice configurations).Connected to this issue, is the sharp discontinuity in the speed of sound, as manifest in Fig. 
3.In the same spirit, we should not regard the transition between the two equations of state as being exhaustively described by our construction since at the patching density d P the properties of nuclear matter abruptly change behavior to that of the high-density regime: again, we can expect that introducing at least one intermediate description could provide a more realistic speed of sound around these densities.As an example, in Ref. [31] nuclear matter built from an instanton Ansatz leads to a speed of sound at saturation density of roughly c 2 s (d 0 ) ≃ 0.025, a value close to the ones reached by the three phenomenological equations of state we employed at low density: this can be indicative that the instantonic description does indeed capture more reliably the physics of nuclear matter in this regime, though we have to keep in mind that the brane configuration in Ref. [31] is different from the one we have employed.The idea that the homogeneous approximation is to be held responsible for this sharp jump in the speed of sound is also strengthened by the observation that a similar behavior is shared between different constructions that employ the same description: in particular, it is present in Ref. [6], which still uses the WSS model, but contains a holographic EOS also for the crust of the star, and it is present in VQCD as can be noted in the review [12]. Lastly, we should comment on the high-density regime: we argued that the homogeneous Ansatz becomes increasingly reliable as the density increases.While this is intuitively true, it still does not take into account other possible effects that can lead to different physics.In particular, we need to address two shortcomings of our approach that manifest themselves in the high-density limit. The first one is that in the present work we have not included the possibility of a phase transition to quark matter: as pointed out in Ref. [32], it is expected that matter in the heaviest neutron stars exhibits a deconfined phase, and this behavior is reflected in the speed of sound approaching values close to the conformal limit c 2 s = 1/3 already around the TOV densities.Our choice of not exploring the possibility of this phase is mostly due to practical reasons: in the WSS model with antipodal flavor branes, the inclusion of quark matter is only possible in the deconfined geometry (i.e. the black brane horizon sources the quark baryonic charge), which requires (in the absence of backreaction of the flavor branes on the geometry) high temperatures.In principle, including backreaction could lead to such a phase even in the low-temperature regime, since a high baryon density can also induce the formation of a horizon in the bulk, which in turn would source a non-topological baryon number, to be identified as generated by individual quarks.However, the full backreaction of Witten's background is extremely challenging, and its development is far from achieved (though the first order in Nc has been developed, at least in the "smeared" configuration of the flavor branes, see Ref. [33]).Another possibility would be to abandon the antipodal branes configuration, and then adopt the decompactified limit as in Ref. 
[31,34,35]: in this way, at the price of changing the branes' UV boundary condition (hence the dual UV theory) and introducing the dynamics of the embedding of the flavor branes to the equations of motion, it is possible to describe quark matter in the low-temperature regime [34], and possibly even quarkyonic matter [35].We are leaving the investigation of these possibilities to future developments. The second limitation we need to point out is that, even if we include the high-density regime, we cannot expect it to converge to the correct asymptotic behavior of perturbative QCD: this is a fundamental shortcoming of the model, that fails at reproducing asymptotic freedom.As a result, even including a phase transition at high density, there is little hope that the equation of state from Fig. 2 connects with the pQCD limit.Despite this limitation, we stress that the part we computed of the equation of state, does not violate the constraints set by Ref. [27], as shown in the right panel of Fig. 2. To establish the comparison, we plot the same bounds as in Ref. [27], together with our hybrid stiff-WSS EOS, as appropriate for the bounds (chosen in the latter reference), obtained by matching the high-energy pQCD EOS with the low-energy CEFT stiff EOS at d = 1.1d 0 .Our EOS is found to only lay below the lower bound in a very small range of values around the hybridization density: this however is simply due to the choice performed in Ref. [27] to take the low-density limit to be identified with d = 1.1d 0 , and interpolate between that particular value and the high-density one from pQCD, while for us the critical density of hybridization is around d stiff P = 1.06d 0 .Choosing our d stiff P as a low density limit would result in new bounds which establish the hybrid EOS to be consistent with pQCD bounds at least up to TOV densities.However, as already pointed out, we expect the WSS model to exhibit corrections to the computed behavior around saturation density, so we leave a complete analysis around these regions to future works, when a more quantitatively trustable approach will be available. Neutron stars We finally want to derive properties of neutron stars built with the equation of state we just constructed: to do so, we solve numerically the Tolman-Oppenheimer-Volkov (TOV) equations for a range of central pressures (P 0 ).Each solution will provide a neutron star with central density P 0 , radius R(P 0 ), mass M (P 0 ) and tidal deformability Λ(P 0 ).The TOV equations are given by and describe static neutron stars, hence we are neglecting the effects of rotation in this analysis.We supplement the equations with the third one From the solution of Eqs. ( 47)-(48) we produce relations between mass and radius, and employing Eq. 
(49) with the boundary condition y(0) = 2, we obtain the relation between mass and tidal deformability Λ, with the definition Λ = (2/3) k_2 C^{-5}. In the above equation, C = GM/R is the compactness of the star, while k_2 is the tidal Love number, defined in terms of the compactness C and of y(R). Figure 5: The mass-radius diagram of neutron stars in our model. In red (green), the curves corresponding to neutron stars built from the WSS equation of state hybridized with the stiff (soft) CEFT one. In purple, the curves corresponding to neutron stars built from the WSS equation of state hybridized with SLy4. For every color, the solid (dashed) line represents the fit that achieves S_N = 32.8 MeV (S_N = 30.6 MeV) at saturation density d_0 = 0.16 fm^-3. The blue horizontal error bars are calculated from the 208Pb neutron skin thickness [36], while the gray-shaded region is from Ref. [37]. The light green shaded area represents the measured mass of PSR J0952-0607 and represents a lower bound for the maximal achievable mass by a stable (static) neutron star. In particular, the mass-radius curve represents the best test currently available for neutron star models: our results for this relation are provided in Fig. 5, where we plot in purple the data for the stars generated by the SLy4-WSS equation of state, and in red (green) the data for the ones generated by the stiff (soft) CEFT-WSS equation. Remarkably, all the hybrid equations of state, derived from fitting the WSS model to saturation density and symmetry energy, result in neutron stars satisfying every constraint in the mass-radius curve, and achieve rather high maximum masses for stable stars, compatible with the highest mass measured to date, i.e. 2.35 ± 0.17 M_⊙ of PSR J0952-0607. In Fig. 6 we plot the tidal deformability against the mass of the star: the tidal deformability Λ_1.4 for a neutron star of mass 1.4 M_⊙ ends up being in good agreement with currently established bounds in the case of the fit corresponding to S_N(d_0) = 32.8 MeV for all three low-density EOSs, while the other fit choice is either in tension with or slightly outside the bound from one of the references adopted (despite both choices being in good agreement with the looser bound from Ref. [38]); another hint (together with the proton fraction at the junction density) that the favored fit is possibly one closer to Eq. (46). The SLy4-WSS equation of state can be seen as one of intermediate stiffness, between the soft and stiff CEFT ones: in particular, since we used the extremal upper and lower bounds dictated by CEFT, and since the junction density is determined to be lower than the value of 1.1 d_0 employed in Ref.
[28] as the density at which the equation of state changes to a polytropic piecewise expansion, we conclude that these two curves also represent boundaries of a region in the M -R plane that encompass all possible hybridization of any choice of low-density equation of state, hybridized with our particular WSS one.Thus, we conclude that the consistency of the M -R curves with current observations is a robust feature of our holographic hybrid equation of state: of course the situation can change once we adopt refinements to the hybridization process (such as ones mentioned in the previous section), so we restrict ourselves to only remark that, in the presence of these quite crude approximations, the WSS model can provide an EOS for densities above saturation that succeeds in reproducing neutron stars phenomenology. Conclusion In this paper, we have computed neutron star mass-radius curves for stable static stars in the Witten-Sakai-Sugimoto model.Due to the approximation of using the homogeneous Ansatz, we acknowledge that the results are not expected to be trustable at very low densities below saturation density.We have thus constructed a hybrid equation of state by patching the SLy4 EOS and two other EOSs, namely the stiff and soft limit coming from CEFT, together with that of the holographic WSS model, taking into account the symmetry energy, charge neutrality and β-equilibrium.The calibration of the model in this paper, has been chosen by fixing the saturation density to the physical one; this fixed the mass scale M KK of the model and furthermore fixed the symmetry energy to the phenomenologically accepted value at saturation density, by adjusting the 't Hooft coupling.We have calculated everything for two fits, corresponding to the upper (solid lines) and lower (dashed lines) value of the error bar on the symmetry energy.The results we find are unexpectedly good for a top-down holographic QCD model.In particular, we find a large maximal mass between 2.26M ⊙ and 2.35M ⊙ , which is hard to achieve for many nuclear physics and phenomenologically driven models.In addition, we pass the current mass-radius constraints as well as the constraints on the tidal deformability (the latter only for one of the fits). The are many small improvements that could be made, in order to further refine the results and hence to achieve precision neutron star phenomenology from a top-down holographic QCD (WSS) based model.In particular, we have not included the strange quark (s) in the model, but since we also have not included quark masses, those should be incorporated too, in order to achieve a physical model.Furthermore, we have considered the hybrid model of patching together the SLy4 with our holographic WSS results, but the low-density equation of state could in principle be derived from the WSS model by taking into account pasta phases, which in turn would require many more computations such as wall tensions etc.Another refinement of the model would be to take into account higher-order corrections due to the isospin asymmetry, which would correspond to (χ • χ) 2 terms.In principle, the WSS model itself also predicts more accurate higher-order corrections, one example is to consider the full Dirac-Born-Infeld action instead of the leading Yang-Mills term and there are also corrections in 1/N c and 1/λ, although computing those may be unpractical. 
On top of these minor refinements, there are also more substantial improvements we could make that involve changing our construction considerably. These are aimed at removing qualitative flaws, which we briefly review here, while at the same time providing proposals to remove, or at least reduce, each of the problems:

• The WSS model only has two free parameters in this configuration: this means that if we fit saturation density and symmetry energy, the chemical potential at the baryon onset is determined. It happens that these choices, together with the configuration of the homogeneous Ansatz, largely overestimate the value of the chemical potential at said onset, introducing the horizontal segment in the EOSs we constructed. We can try to ameliorate this issue by refining the description of nuclear matter as the density decreases towards saturation, for example by putting solitonic baryons on a lattice. At the same time, we should change the values of the fit parameters to accommodate the changes introduced. It is not clear whether these two effects will lower the value of the chemical potential, but it seems natural to consider a configuration with resolved individual baryons at such densities: it is, after all, exactly at the boundary of the region within which we can deem the homogeneous approximation to be reliable.

• The speed of sound exhibits a sharp discontinuity at the junction density: this is not a realistic feature, especially since the transition density is very close to saturation. Around saturation density, the speed of sound is expected to be considerably lower than the values we obtain: following the same reasoning as for the previous point, it is reasonable to attribute this unphysical behavior to the inadequacy of the homogeneous Ansatz at saturation density. The introduction of an intermediate lattice phase could help with this issue, even though it is highly unlikely that we can reach a continuous speed of sound with the hybrid approach.

• The speed of sound keeps growing monotonically up to the TOV densities and beyond, over a wide range of densities. It is expected that at the TOV scale, matter in neutron stars exhibits deconfinement, hence being in a quark phase. This is a possibility we did not explore in this work, and it is certainly an interesting direction for future work: in particular, the TOV masses we find are in the allowed region, but rather large for a static object, and introducing the effects of a high-density quark phase could potentially lower the TOV mass. The introduction of such a phase requires a major change in the setup of the model: in particular, since introducing the backreaction from the baryons on the geometry seems an extremely difficult problem to solve, the quark phase can instead be modeled in the non-antipodal setup of the flavor branes, working in the deconfined geometry in the decompactified limit. It is not clear to us at the moment whether such a substantial change can be performed while preserving the nice agreement with observations that we have obtained in the present work.

We keep these three points as a guideline for future works.
Figure 2: Left panel: in red (green) the low-density equation of state obtained from the upper (lower) bound derived from CEFT, in purple the low-density SLy4 equation of state, in black the branches of the equations of state computed from the WSS model; the gray-shaded area marks currently accepted bounds for the EOS of nuclear matter constrained by neutron star data. The solid (dashed) line represents the fit that achieves S_N = 32.8 MeV (S_N = 30.6 MeV) at saturation density d_0 = 0.16 fm^-3. The star markers in the plot represent the pressure and density of the heaviest stable static neutron stars allowed by that particular equation of state (the left-most star marker corresponds to the solid line). Right panel: the hybrid stiff-WSS EOS compared with model-independent bounds from Ref. [27], derived from pQCD and shown with a dashed green line. EOSs lying within the dark blue lines have an upper bound on the speed of sound at the conformal value c_s^2 < 1/3. The pink line represents the limiting case in which the speed of sound is constant along the EOS.

Figure 3: The speed of sound for the three hybrid equations of state presented in Fig. 2 as a function of density in units of the saturation density d_0. The star markers represent the speed of sound reached inside the heaviest stable static neutron stars allowed by the corresponding equation of state.

Figure 4: Particle fractions as a function of the density (in units of the saturation density), with the curves from top to bottom corresponding to neutrons (dark red), protons (purple), electrons (yellow) and muons (green). The solid and dashed lines are for the two different fits; see the caption of Fig. 2. The left panel covers the entire range relevant for even our heaviest neutron stars, whereas the right panel zooms in around the densities where we have patched the SLy4 and the WSS equations of state together.

Figure 6: Tidal deformability for the hybrid neutron stars of Fig. 5 for the two different fits. The color coding follows the same conventions as in Fig. 5. The blue bound is from Ref. [36], whereas the red bound is from Ref. [38].
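For readers who want to reproduce mass-radius curves of the kind shown in Fig. 5, the following is a minimal, illustrative sketch of a TOV integration in Python. It assumes geometrized units (G = c = 1, lengths in km) and a toy polytropic equation of state as a stand-in for the hybrid WSS EOS discussed above; the constants K and GAMMA, the central-pressure sweep and the stopping tolerance are arbitrary placeholders, not values taken from this work.

```python
# Minimal TOV integration sketch (illustrative only, not the WSS hybrid EOS).
import numpy as np
from scipy.integrate import solve_ivp

K, GAMMA = 100.0, 2.0          # toy polytrope constants (placeholders)
MSUN_KM = 1.4766               # solar mass expressed in km (G = c = 1)

def eps_of_p(p):
    """Energy density from pressure for the toy polytrope p = K * eps**GAMMA."""
    return (p / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    """Right-hand side of the TOV equations; y = (pressure, enclosed mass)."""
    p, m = y
    eps = eps_of_p(max(p, 0.0))
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def surface(r, y):
    """Terminal event: the stellar surface, where the pressure vanishes."""
    return y[0] - 1e-12
surface.terminal = True

def star(p_central):
    """Integrate one star outward; return (radius [km], mass [M_sun])."""
    r0 = 1e-3
    m0 = 4.0 / 3.0 * np.pi * r0**3 * eps_of_p(p_central)
    sol = solve_ivp(tov_rhs, (r0, 100.0), [p_central, m0],
                    events=surface, rtol=1e-8, atol=1e-12)
    return sol.t[-1], sol.y[1, -1] / MSUN_KM

if __name__ == "__main__":
    # Sweep central pressures to trace a toy mass-radius curve.
    for pc in np.geomspace(5e-5, 5e-3, 8):
        R, M = star(pc)
        print(f"p_c = {pc:.2e}  ->  R = {R:6.2f} km,  M = {M:5.3f} M_sun")
```

In practice one would replace eps_of_p with an interpolation of the tabulated hybrid equation of state and read the TOV mass off the maximum of the resulting M(R) sequence.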
11,186
2023-07-21T00:00:00.000
[ "Physics" ]
Curriculum Data Association Organization and Knowledge Management Method for Unstructured Learning Resources — To solve the problems in the curriculum of unstructured learning resources, data association organizations and knowledge management methods were proposed. Firstly, according to functions and requirements, the associated course data system was designed. Then, the table conversion test and the storage index test were performed. Finally, the merger test of the entity was carried out. Results showed that the data association organization and knowledge management methods effectively solved the problem of the curriculum of unstructured learning resources. In summary, the online learning environment provides conditions for unstructured learning resources Introduction The online course is implemented through the network under the guidance of curriculum theory, learning theory and teaching theory.It is the sum of teaching content and teaching activities in an online learning environment designed to achieve the curriculum objectives of a subject area.The online course consists of six components: teaching content, learning resources, teaching strategies, learning support, learning evaluation and teaching & learning activities.Learning resources reflect the advantages of network and multimedia teaching as one of the basic elements of the online course.This is an important part of the online course design.Learning resources in online courses can be divided into structured learning resources and unstructured learning resources according to their degree of structure.Structured learning resources refer to learning materials that have been carefully designed by teachers and organized according to a predetermined structure.Typically, it has a defined source, good structure and stable content.Linear or hierarchical knowledge organization methods such as lesson plans, handouts, supplementary materials, exercises and test questions are used.The sources of unstructured learning resources are uncertain, the structure is ambiguous, the content is dynamically changing and stability is not good.The so-called "unstructured" does not mean that such learning resources have no structure.In fact, the internal contents of all learning resources and other learning resources are correlated.Therefore, it has various structures that are easy to express or not easy to express.In general, linear structures and tree structures are easy to express and regular.This type of resource is also easy to acquire and store.The mesh structure is complex and irregular and the stability is bad.This type of resource is weakly related and difficult to acquire and store in a structured way such as a relational database.Therefore, the situation of such structural ambiguity is summarized in "unstructured".The unstructured learning resources are driven by the Web2.0 network application model.Unstructured learning resources are mainly embedded in social networks based on Web 2.0, such as blogs, wikis, forums, etc.As a provider of unstructured learning resources, online teachers and online learning partners are included in the scope of unstructured resource research. 
The autonomy of the e-learning environment and the wider interactivity provide conditions for the development of learning resources, especially unstructured learning resources.However, these resources are all generated by people.Based on this perspective, the connotation of unstructured learning resources is human intellectual resources.The manifestation of this kind of resources is materialized, nonmaterialized, explicit and implicit.Its own internal structural features and the order of transmission of knowledge present a non-linear "unstructured" representation.It is dynamic and easy to share, spread and develop.Stability and controllability are bad. State of the Art At present, more research has been done on structured learning resources at home and abroad.Anshari et al. [1] believes that based on the perspective of human resources, any materialized and non-materialized learning resources were ultimately a summary and expression of human hidden empirical resources or intellectual resources.Cruz-Benito et al. [2] stated that in the online learning environment, learners communicate with other members of the community through tools such as Email, BBS, Blog, and Wiki.In this process, learners continue to generate new ideas, methods, and solutions to problems.These programs have no fixed form and structure, and they were dynamically changed and updated.These ideas, methods, and solutions can all be called resources.This process of generating ideas (solving problems) can be seen as a process of visualization of hidden intellectual resources.These resources were preserved and disseminated through visualization and were used by other members to create more available resources.In addition, Fathurrohman et al. [3] believes that online teachers often provide many external links based on learning objectives and content in the online course, so that learners can learn more about the relevant information.Learners also often add some link resources and recommend relevant learning materials to each other.Due to the constant participation of people, the breadth of learning resources has been greatly expanded.A complex network structure was formed which presents a state of seemingly ambiguous structure. Linked data and semantic technologies have some applied research in education.Gupta et al. [4] believes that open data based on Semantic Web and ontology technology has become one of the most important ways to publish high-quality associated semantic data.It was widely used in intelligent services such as semantic search and personalized recommendation.Based on big data, Huda et al. [5] studied the innovative environment of online learning resources.Unstructured documents were labeled as structured data with semantics which allows both machines and users to understand their meaning and work together.People can directly access digital resources through mechanisms.Riley et al. [6] studied educational tools based on semantic technology, semantic tools and services those are actually used in higher education in the UK.Based on the study, a three-stage development route was proposed.The creation of connected data across higher education institutions has been gradually realized, so that resources such as education, teaching materials, and curriculum materials were shared among institutional alliances.Thus, the ontology of education was built and the application of ontology-based data analysis and educational perceptual reasoning was implemented.By summarizing, Sanati-Mehrizy et al. 
[7] found that the OREChem project was funded by Microsoft and carried out in cooperation among Cambridge University, Cornell University, Indiana State University, Los Alamos National Laboratory, Pennsylvania State University, Queensland University, and Southampton University. The project mainly included the following contents: grid computing was used to create new associated data resources, which were developed and integrated into a standard ontology for chemical knowledge representation. The core purpose of the project was to design and implement an interoperable architecture based on Semantic Web rules. Chemical researchers can share and reuse distributed institutional warehouses, databases and Web services, and connections between different disciplines are also supported. Wang et al. [8] proposed using Semantic Web technology to solve the problems of RLM (Resource List Management) tools; existing ontologies were used to describe resources uniformly. Yeh et al. [9] used the explanatory structure model to explore the design and benefit analysis of professional courses. The principle of associated data was used to improve data interoperability, and existing patterns and ontologies were used to describe relationships. Students and teachers were encouraged to enrich the semantics of the data in order to support context-aware recommendation functions. The system not only realizes a unified description of learning resources, but also enriches the semantic description of resources. This was implemented at the University of Plymouth in September 2008.

In summary, associated data has influential applications in some areas. However, it has not made much progress in the field of e-learning. This study innovatively proposes the construction of Linked Course Data (LCD) and links it with other associated knowledge data. Linked course data was designed to increase the efficiency and interest of learners by allowing more users to discover more potential data knowledge. The key was to build the ontology based on LCD. Semantic Web technology was used to manage knowledge and provide knowledge services for massively correlated data. First, the relevant algorithms were introduced. Then, the operating environment of the system was discussed. Finally, the associated course data set system was tested. Results showed that the proposed system was feasible.

High-level method

The high-level method is as follows: First, the corpus is scanned and all relationships declared as owl:sameAs are separated out. Second, these relationships are loaded into an in-memory index; they carry the transitive and symmetric semantics of owl:sameAs. Third, for each equivalence class in the index, a canonical term is chosen. Fourth, the corpus is scanned again, and the terms in the subject or object position of each rdf:type triple are rewritten to their canonical form. Therefore, only a small subset of the corpus, namely the owl:sameAs statements, needs to be indexed, and the data are merged in two scans.

Union-find algorithm

To perform the transitive-symmetric closure in memory, the traditional union-find algorithm is used to compute the equivalence partition, as sketched in the code below. In the process, first, the equivalent elements are stored in disjoint sets, so that each element is contained in exactly one set. Second, a lookup structure is maintained for querying which set an element belongs to. Third, when new equivalent elements are discovered, their sets are merged.
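A minimal sketch of the two-scan canonicalization and the union-find merging described above is given here, assuming the corpus is available as in-memory (subject, predicate, object) string triples. The example URIs are hypothetical, and for simplicity the second scan rewrites all triples rather than only rdf:type ones.

```python
# Sketch: union-find (with path compression) over owl:sameAs, then a second
# scan that rewrites subject/object terms to their canonical representative.
SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

parent = {}

def find(x):
    """Return the representative of x, compressing the path as we go."""
    parent.setdefault(x, x)
    root = x
    while parent[root] != root:
        root = parent[root]
    while parent[x] != root:            # path compression
        parent[x], x = root, parent[x]
    return root

def union(a, b):
    """Merge the equivalence classes of a and b."""
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[rb] = ra

def canonicalize(triples):
    """Scan 1: index owl:sameAs pairs. Scan 2: rewrite subjects and objects."""
    for s, p, o in triples:             # scan 1: build the sameAs index
        if p == SAME_AS:
            union(s, o)
    canon = {}                          # one canonical term per class
    for term in list(parent):
        canon.setdefault(find(term), term)
    rewritten = []
    for s, p, o in triples:             # scan 2: normalise subject/object terms
        rewritten.append((canon.get(find(s), s), p, canon.get(find(o), o)))
    return rewritten

# Example: two aliases of the same knowledge point are merged.
demo = [("ex:CPU", SAME_AS, "dbpedia:Central_processing_unit"),
        ("dbpedia:Central_processing_unit", "ex:preOrderOf", "ex:PersonalComputer")]
print(canonicalize(demo))
```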
Semantic association data technology With the development of Internet technology, a large number of data resources are emerging and information becomes more complex.How to obtain useful knowledge from massive information has become an urgent problem to be solved.The data model based on semantic association is an effective way to solve this problem.It mainly includes two aspects.The first one is semantic association data model, that is, data expression and group based on semantic association.The second is more effective and intelligent detection mechanism on the basis of semantic association data model.Quarantine results are extended to the knowledge entities most related to the query request semantics and the results are sorted reasonably according to the global evaluation value of resources and the query association degree.The model can effectively support reasoning and extend the results to semantically related entities, at the same time it can effectively support the evaluation of knowledge entities and prevent the return of a large number of disorderly results. Literature research method Literature related to semantic technology of associated data is collected and identified.By sorting out the collected and identified data, the situation related to this study is summarized and understood, the current situation and existing problems of previous studies are explored, the purpose, objectives and requirements of the study are defined, the focus of the study is identified and the research plan is formulated.Through the comparison and analysis of the relevant literature, the current situation and development trend of semantic association technology resource integration and knowledge retrieval can be understood.The shortcomings of previous studies are summarized and lessons are drawn from existing research results to determine the research direction and ideas of this topic and select appropriate research objects.The method of literature research runs through the whole process. Experiment method In this study, two experiments: table conversion experiment and storage index experiment are carried out to verify the effectiveness of semantic association technology in resource integration and knowledge retrieval. 4 Result Analysis and Discussion Design of associated course data system The development environment of the system is divided into two: • The hardware environment The jena development kit is used to build and manipulate ontology in the system.Client: The operating system above IE6.0 is installed. 
The essence of the associated course data set is the knowledge points (concepts) and the semantic relationships between them. The system mainly gives users an overview of a specific body of knowledge. The main idea of the system is to read the associated course data (knowledge ontology) of the constructed computer interface course; program code is used to read and display the knowledge points and the relationships between them in the data set. In Figure 1, circles represent knowledge points and arrows indicate the cognitive order relationships between knowledge points. The subsequent knowledge points of the knowledge points semiconductor tube and MOS (metal-oxide-semiconductor) tube are gate circuits, flip-flops and decoders. The pre-order knowledge points of the ROM (Read-Only Memory) cells and the RAM (Random Access Memory) cells are gate circuits, flip-flops and decoders. These give the user an overview of the knowledge and help the user to learn better. Figure 1 also shows the display in the browser of some of the associated course data (in the form of a knowledge ontology) built during the research process. The most valuable aspect of the associated course data set is the associations. The interface also presents the pre-order knowledge point, personal computer, of the knowledge point CPU, and the owl:sameAs links of the knowledge point to other data sets (e.g. DBpedia, Freebase). Users can click on these links to further explore multifaceted extensions of the knowledge.

The system also provides retrieval of knowledge points. It allows users to retrieve the knowledge points and related content that they want to know quickly and actively, instead of passively accepting knowledge. The idea of the retrieval sub-system is to generate a SPARQL query based on the search content typed by the user; knowledge points and relationships are then retrieved from the background ontology and presented. The retrieval sub-system is not a general-purpose search engine: it retrieves, via SPARQL, the content related to a specific knowledge point in the knowledge-point class of the associated course data, returning the summary of the knowledge point and the list of its pre-order knowledge points.

The associated course data serves upper-level applications. For example, the navigation map of the knowledge system can generate different knowledge-point structure maps for different learners.

In the knowledge navigation browser interface, if the user inputs a knowledge point in the knowledge navigation search box, the system will access the associated course data set in the background and automatically generate a navigation map corresponding to the knowledge point. The ROM unit is a pre-order knowledge point of the ROM memory chip, and the ROM memory chip is a pre-order knowledge point of the Pentium memory sub-system. That is to say, if learners want to learn the Pentium memory sub-system, they need to learn knowledge points such as the Read-Only Memory (ROM) unit and the ROM memory chip as preliminary knowledge. A series of learning paths helps learners to conduct in-depth learning and to form theoretical and systematic knowledge, which can help learners to build and transfer knowledge. Therefore, one-sided, shallow or inadequate learning is avoided.

Table conversion experiment

For the sake of accuracy, the experiment was divided into two parts:

• The first part extracted 15 tables from Google Squared and Wikipedia.
• The second part of the experiment used the relevant tables of the course data of this project group. Simple tables in English were preferred.

A total of 52 columns were assigned class labels as column headers, and the table cells were linked to entities, 611 entities in total. Table 1 shows an overview of the data set, and Table 2 shows the four categories of columns and entities. In Tables 1 and 2, columns and entities are distributed among the four categories of people, locations, organizations and others (movies, songs, nationalities, etc.). Manual evaluation is used to assess the correctness of the class labels predicted by this method; class labels predicted from the DBpedia ontology are primarily evaluated.

When evaluating the algorithm for assigning class labels to columns, the system's ranked list of class labels is compared with the evaluator's ranking. As shown in Table 3, the Mean Average Precision (MAP) is greater than 0 for 80.76% of the columns, which means that at least one relevant label is ranked in the top three of the system's ranking. For 75% of the columns, the recall of the algorithm is greater than or equal to 0.6. This high recall indicates a strong match between the top three labels of the system and the top three labels of the evaluator. Finally, the reasonableness of the predicted class labels is evaluated by manual assessment. For a given column, there may be a more accurate class label, and the evaluator needs to judge whether the predicted class is reasonable. For example, for a column named City, a person might judge dbpedia-owl:City as the most appropriate class, while dbpedia-owl:PopulatedPlace and dbpedia-owl:Place are acceptable and other classes (for example, dbpedia-owl:Thing) are not. Under this criterion, the evaluators considered 76.92% of the predicted class labels to be correct. Figures 2 and 3 show the accuracy for each of the four categories. For class-label assignment, categories such as organizations and other types of data reach only moderate accuracy, since these types of entities are sparse in the knowledge base. To evaluate the linking of table cells to entities, the 611 table cells are first manually tagged with the corresponding Wikipedia/DBpedia page and then compared with the links generated by the system. The results show that 66.12% of the predicted table-cell links are correct.

Fig. 2. Accuracy of classes in four categories

As shown in Figures 2 and 3, from the point of view of accuracy, linking is most precise for Persons (83.05%), followed by Places (80.3%). There was moderate success for Organizations (61.90%), but the accuracy of linking other types of data, such as movies, nationalities, songs, and types of business and industry, was only 29.22%. In the knowledge base, these types of entities are sparse.

The data set has 24 entities that do not exist in the knowledge base. In all 24 cases, the system was able to correctly predict that the table cells should be linked to "empty".

This study also did a preliminary assessment of determining the relationships between columns. First, the manual evaluator assesses the relationships between the columns in a table, and at the same time the system also judges the relationships between columns; the two results are then compared. The results show that, in the five tables used for the evaluation, the system can identify 25% of the correct inter-column relationships.
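As an illustration of the ranking metrics reported above, the following hypothetical sketch computes average precision and top-3 recall for a single column. The example labels are made up, and the paper's exact evaluation protocol may differ in detail.

```python
# Hypothetical sketch of the column-level evaluation metrics discussed above.
def average_precision(ranked_labels, relevant):
    """Average precision of a ranked label list against a set of relevant labels."""
    hits, score = 0, 0.0
    for i, label in enumerate(ranked_labels, start=1):
        if label in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def recall_at_k(ranked_labels, relevant, k=3):
    """Fraction of relevant labels found among the top-k predictions."""
    found = sum(1 for label in ranked_labels[:k] if label in relevant)
    return found / len(relevant) if relevant else 0.0

# Made-up example for a column named "City".
system_ranking = ["dbpedia-owl:City", "dbpedia-owl:PopulatedPlace", "dbpedia-owl:Thing"]
evaluator_labels = {"dbpedia-owl:City", "dbpedia-owl:PopulatedPlace", "dbpedia-owl:Place"}

print(average_precision(system_ranking, evaluator_labels))  # > 0: a relevant label is ranked high
print(recall_at_k(system_ranking, evaluator_labels))        # 2 of 3 relevant labels in the top 3
```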
Storage index experiment

PostgreSQL and MonetDB are used to execute this indexing scheme. PostgreSQL is implemented according to the horizontal partitioning specification. MonetDB had to be extended because it has no built-in functionality for horizontal partitioning; it is a widely used and efficient column-oriented database, and the current implementation supports partitions of the same size. The LCDDB is implemented on MonetDB. The triple table is used as input to calculate statistics, and N partitions are created, with the triples inserted into their corresponding partition tables. For example, all triples with a subjectID value between 1001 and 2000 are selected and inserted into the sIndex_2000 table. The oIndex tables are built in the same way, and in pIndex a separate table is created for each predicate, with the triples of that predicate inserted into the corresponding table. The system was tested with the LUBM (Lehigh University Benchmark), YAGO, and BSBM (Berlin SPARQL Benchmark) data sets.

First, the LUBM data set. LUBM is a publicly available benchmark for testing the query performance of RDF (Resource Description Framework) data storage systems. Corresponding tools are provided to generate basic data sets modeled on the university domain, including universities, professors, students, courses and many other aspects. Based on the data generation tool provided with LUBM, data of different sizes can be generated, so that the storage system's SPARQL query performance can be tested under different data set sizes. This test data set is currently very popular for evaluating storage systems that support SPARQL queries. For functional and performance testing, the tools provided by LUBM were used to generate data for 200 universities, including 27,629,308 triples and 18 different predicates, occupying 3.2 GB of disk space. Here, the system uses buckets of 1000 resources to divide the sIndex and oIndex index tables; sIndex has 6576 tables and oIndex has 6576 tables. These data are loaded into MonetDB, occupying 3.4 GB.

Second, the YAGO data set. YAGO is a real-world data set that contains information extracted from Wikipedia. It includes 93,193,669 triples with 93 different predicates, occupying 3.1 GB of disk space. The number of distinct subjects far exceeds the number of distinct objects, so the sIndex is divided in buckets of 2,000 resources, while the oIndex is divided in buckets of 1000 resources. sIndex has 20,608 tables, oIndex has 16,977 tables, and pIndex has 93 tables. After mapping strings to ids, the data is loaded into MonetDB, which takes up 5.3 GB of disk space.

Third, the BSBM data set, which is a benchmark for the e-commerce field. It consists of a series of products, product descriptions, suppliers and reviews. For experimental purposes, the system produced 1000 products, amounting to 356,477 triples with 40 different predicates and occupying 805 MB of disk space. The number of objects and subjects is almost equal, so buckets of 1000 resources are used to divide both the subject and object tables. sIndex has 1008 tables, oIndex has 1008 tables and pIndex has 40 tables. The data is then loaded into MonetDB, which occupies 1 GB of disk space. A sketch of the partition-routing rule is given below.
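The partitioning rule described at the beginning of this subsection (buckets of 1000 resources for sIndex and oIndex, one table per predicate for pIndex) can be sketched as follows. The routing logic and table names mirror the text, while the actual DDL and bulk-insert statements issued to MonetDB or PostgreSQL are omitted.

```python
# Sketch of routing (subjectID, predicateID, objectID) triples to partition tables.
from collections import defaultdict

PARTITION_SIZE = 1000

def partition_name(prefix, resource_id, size=PARTITION_SIZE):
    """E.g. subjectID 1001-2000 maps to 'sIndex_2000' (upper bound of the bucket)."""
    upper = ((resource_id - 1) // size + 1) * size
    return f"{prefix}_{upper}"

def route_triples(triples):
    """Group integer-ID triples into sIndex, oIndex and per-predicate pIndex tables."""
    tables = defaultdict(list)
    for s, p, o in triples:
        tables[partition_name("sIndex", s)].append((s, p, o))
        tables[partition_name("oIndex", o)].append((s, p, o))
        tables[f"pIndex_{p}"].append((s, p, o))
    return tables

demo = [(1500, 7, 42), (2048, 7, 999), (1, 3, 2001)]
for table, rows in route_triples(demo).items():
    print(table, rows)
```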
The hardware and software environments of the test are as follows. Hardware environment: each test machine has 1 GB of memory and a 270 GB disk; there are 8 machines with Intel Xeon CPUs running at 2.5 GHz. Software environment: the operating system installed on the machines is Red Hat Enterprise Linux AS release 4, x86_64, with kernel version Linux 2.6.9-42. To facilitate viewing of results and comparisons, Java was used to develop the interface of the test system. Sockets are used to transmit query statements and result sets. Apache Tomcat/6.0.29 was used for deployment, which provides remote access for most of the tests. On the server, the command line is used to issue queries and test performance.

Several RDF triple storage systems and indexing schemes were studied. From these storage architectures, the best-performing DBMS-based triple store, SW-store, and the best-performing file-system-based triple store, RDF-3X, were selected for comparison. For the sake of fairness, SW-store and MonetDB were re-implemented for establishing the LCDDB (Linked Course Data Database) indexes.

These systems were compared with the associated course database LCDDB of this study, namely LCDDB (MonetDB) and LCDDB (PostgreSQL). In addition, they were also compared with the Triple table (MonetDB).

The experimental results on the LUBM data set were analyzed first. LUBM was used as a benchmark to evaluate query performance. LUBM provides 14 queries, which cover most types of queries. However, some queries use reasoning, which is not considered for the time being; therefore, these queries were modified and seven queries were compared. Figure 4 shows the LUBM query times. It can be seen that, among the seven queries, four executed faster than on RDF-3X, and all executed faster than on SW-store. The overall query time of RDF-3X is roughly the same as that of the LCDDB, which is five times faster than SW-store.

The experimental results on the YAGO data set were analyzed next. RDF-3X is used to run the same query set on YAGO, excluding the basic graph pattern (BGP) queries, and this is compared with the indexing scheme. Figure 5 shows the query performance of the different queries on the YAGO data set. Three of the six queries performed better than RDF-3X, and all queries performed better than SW-store.

Finally, the experimental results on the BSBM data set were analyzed. Figure 6 shows the experimental results on the BSBM data set; MonetDB is used to implement the storage. All six queries perform better than on RDF-3X and SW-store. The query results for the BSBM cold buffer show that the LCDDB is twice as fast as RDF-3X and SW-store. Table 5 shows the run time of filter and range queries based on regular expressions; current storage systems cannot handle these types of SPARQL queries. Table 5 also reports the overall results, calculated as the geometric mean of all queries for each data set.

Entity merger experiment

Using this method, experiments were performed on a data set of 111.8 million triples. These triples were gathered by a crawler from openly available RDF/XML web documents; the crawler proceeds in a breadth-first manner. URIs are extracted from all positions of the RDF data, and the URI queues are assigned to different pay-level domains.
The system extracted 11.93 million original owl:sameAs statements. 2.16 million equivalence classes were formed, containing 5.75 million terms (6.24% are URIs); there are only 4156 blank nodes. Figure 9 shows the size distribution of the equivalence classes. Among them, 1.6 million (74.1%) equivalence classes contain at least two equal identifiers. Experiments showed a significant increase in the number of equivalence classes when the merge method is extended, which indicates that the extended reasoning method is effective to some extent. The system generated 2.82 million equivalence classes from the data, 1.31 times more than the baseline method, involving 14.86 million terms. Among them, 9.03 million are blank nodes and 5.83 million are URIs. It can be seen that the average size of an equivalence class has increased to 5.26 entities, and the largest equivalence class contains 33,052 terms.

Conclusion

A system for correlating course data was designed. Table conversion and storage indexes were introduced, and the experimental scheme and results of data integration (entity consolidation) were discussed. Compared with traditional e-learning systems, the related course data system makes some progress: it can present knowledge and associated knowledge links, leading learners from one source of knowledge to another. In terms of table conversion, the class-label prediction in the proposed four-step method and the evaluation of linking table cells to entities achieve a high recall rate. However, the accuracy of the relationships between columns is not high, which requires manual intervention and subsequent in-depth research. In terms of storage indexing, experiments were conducted on large data sets; the results show that the proposed method is superior to the previously used systems.

Fig. 1. Example of associated course data in a browser
Fig. 3. Accuracy of entity links in four categories
Table 1. Overview of the data set
Table 2. Four categories of column and entity distribution
Table 3. Example table
Table 4. Example table
Note: LCDDB (MonetDB) is the MonetDB implementation of the LCDDB, and LCDDB (PostgreSQL) is the PostgreSQL implementation. The Triple table (MonetDB) is an ordinary three-column triple table executed on MonetDB; when the LCDDB is compared with the Triple table (MonetDB), the Triple table (MonetDB) has a longer query time.
Table 5. Run time of filter and range queries on BSBM (unit: second)
5,889.8
2020-03-27T00:00:00.000
[ "Computer Science" ]
The correction of Inelastic Neutron Scattering data of organic samples using the Average Functional Group Approximation. The use of the Average Functional Group Approximation for self-shielding corrections at inelastic neutron spectrometers is discussed. Taking triptindane as a case study, we use the above-mentioned approximation to simulate a synthetic dynamic structure factor as measured on an indirect-geometry spectrometer, as well as the related total scattering cross section as a function of incident neutron energy and sample temperature, and the transmission spectra as a function of sample thickness. These quantities, obtained in a consistent way from the Average Functional Group Approximation, are used to calculate the energy-dependent self-shielding correction affecting the sample under investigation. The impact on the intensities of low-energy vibrational modes is discussed, showing that at typical experimental conditions the sample-dependent attenuation factor is about 15% higher compared with the correction at higher energies.

Introduction

Inelastic Neutron Scattering (INS) and Quasi-Elastic Neutron Scattering (QENS) are well-established experimental techniques that allow studying the atomic-scale dynamics of condensed-matter systems [1][2][3]. The investigation of hydrogenous compounds through these techniques has been a natural and successful application, since hydrogen (1H) has the largest bound scattering cross section (82.03 barn) among the elements of the periodic table [4].

Neutron data from hydrogen-containing organic systems collected in INS and, to a lesser extent, QENS experiments can be subject to non-trivial and sample-dependent attenuation corrections related to the total scattering cross section at thermal neutron energies, also referred to as the Thermal neutron Cross Section (TCS). These act as attenuation functions whose intensity can change by up to a factor of 4 in the neutron energy range between units and hundreds of meV. As the TCS depends upon temperature as well as upon the structure and dynamics of the system - i.e., the ultimate objectives of the experiment itself - the correction of the experimental spectra becomes, in principle, a very challenging task. The significant change of the TCS affects the INS peak intensities in an energy-dependent way, therefore complicating the accurate comparison with state-of-the-art computer simulations as well as with other experiments at different experimental conditions.

Knowing the TCSs of the materials of interest is a challenging task, given their dependence on the molecular structure, dynamics and temperature. In addition, it is quite uncommon to have the possibility to obtain them from experimental measurements over a broad range of neutron energies, since specific instrumentation (e.g. the VESUVIO beamline at the ISIS Neutron and Muon Source, UK [5]) is required. For this reason, experimentally obtained and tabulated TCSs are available for just a few systems. In order to overcome the impossibility of direct measurements, it is possible to retrieve TCSs from calculations based on simplified models [6,7], on molecular dynamics [8], as well as on ab initio calculations [9][10][11][12][13].

In this context, a wide experimental campaign of measurements has been started at the VESUVIO spectrometer [14][15][16] to investigate the TCS of alcohols [17], organic systems [18], water [18,19] and neutron moderators [20][21][22][23]. In particular, in Ref.
[24], a new approach for the modelling of TCSs was developed, referred to as the Average Functional Group Approximation (AFGA), specifically designed for organic hydrogen-containing systems and based on the incoherent approximation. Within this model, the TCS of large macromolecules and polymers can be estimated, for which a rigorous phonon-based calculation would be prohibitive.

TCS information can be used to correct INS measurements, as presented in [25,26], via a post-processing procedure referred to as INSCorNorm. This procedure, requiring the knowledge of the energy-dependent TCS of the sample under investigation, allows the correction of the measurements for the experimental conditions, so as to ease the comparison of the INS experimental spectra with single-molecule or periodic DFT calculations. Moreover, the INSCorNorm algorithm allows the experimental INS data to be expressed on an absolute intensity scale, removing any arbitrary procedure, once the value of the average kinetic energy is known.

Here, we show how the INSCorNorm algorithm in Ref. [25] can be easily adapted so as to take as an input an AFGA-simulated TCS [24] when the direct measurement of the experimental one is not possible. In Figure 1 the workflow of the INSCorNorm algorithm is integrated with the AFGA method, which allows TCSs of compounds to be obtained when transmission measurements are not available. With respect to the other methods integrated in the standard INSCorNorm algorithm (namely, the Free Hydrogen Gas Model and the model by Capelli and Romanelli [18], see Figure 1), the addition of the AFGA model provides a sample-specific TCS for organic and biological compounds. We take as an example of application triptindane, a molecule at the base of a series of compounds of interest for the industry of hydrogen storage and catalysis [27,28].

Materials and Methods

The triptindane molecule (C23H18) was rationalised as composed of 3 aliphatic CH2 groups and 12 aromatic CH groups. The average hydrogen-projected Vibrational Density of States (VDoS), g_H, of a given hydrogen atom in the molecule was calculated as the hydrogen-count-weighted average g_H = (6 g_CH2 + 12 g_CH,aro)/18, where g_CH2 and g_CH,aro correspond to the average hydrogen-projected VDoS of a single hydrogen atom in a CH2 group or an aromatic CH group, respectively. The VDoSs of the specific functional groups as calculated within the AFGA were taken from Ref. [24] (see Supplementary Information therein), and g_H was used to calculate the TCS of the molecule according to the AFGA model. Moreover, we pushed the model a step forward to calculate the double differential scattering cross section as follows. Using such VDoSs as an input and using the multi-phonon expansion (in particular Eqs. 3-7 in Ref. [24]), we calculated the synthetic dynamic scattering function S(Q, ΔE), as a function of the neutron momentum and energy transfers, Q and ΔE, at a sample temperature of 20 K.
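As a small illustration of the averaging step just described, the sketch below forms the hydrogen-count-weighted mean of two per-hydrogen VDoSs on a common energy grid. The Gaussian stand-ins are placeholders for the tabulated functional-group densities of states of Ref. [24].

```python
# Sketch of AFGA-style averaging: molecular g_H as the hydrogen-count-weighted
# mean of the CH2 and aromatic CH per-hydrogen VDoSs (6 H + 12 H in C23H18).
import numpy as np

energy = np.linspace(0.0, 500.0, 2001)                 # energy grid in meV
de = energy[1] - energy[0]

def toy_vdos(center, width):
    """Placeholder per-hydrogen VDoS: a Gaussian normalised to unit area."""
    g = np.exp(-0.5 * ((energy - center) / width) ** 2)
    return g / (g.sum() * de)

g_ch2 = toy_vdos(150.0, 40.0)                          # stand-in for the CH2 VDoS
g_ch_aro = toy_vdos(130.0, 30.0)                       # stand-in for aromatic CH

n_h_ch2, n_h_aro = 6, 12                               # hydrogen counts in C23H18
g_h = (n_h_ch2 * g_ch2 + n_h_aro * g_ch_aro) / (n_h_ch2 + n_h_aro)

print("normalisation of g_H:", g_h.sum() * de)         # remains 1 by construction
```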
We considered an indirect-geometry neutron spectrometer, as well as a scattering angle and final energy equal to those available for the forward-scattering detectors on the TOSCA spectrometer at ISIS [29]. The result, reported in Figure 2, is compared with the experimental data recently measured on TOSCA [30], obtained from the INS TOSCA database [31]. The comparison shows an overall qualitative agreement of the two spectra - a very good result for a model aimed at accurately reproducing total cross sections rather than double-differential ones. However, an interesting feature of the synthetic spectrum is that the low-energy modes appear clearly more intense than the experimental ones. At the end of our discussion we will relate such a difference, at least partially, to the effect of sample self-shielding. It should be noted that a difference in the vibrational intensities, especially at low frequencies, can have an impact on the calculation of the Mean Square Displacement (MSD) and, consequently, on the estimation of the Debye-Waller factor that is used in the calculation of the TCS within the multi-phonon expansion. In the present case, at a temperature of 20 K, the average MSD from Eq. 1 and the AFGA model is 0.0647 Å² for hydrogen atoms. While we could not find an experimental value to compare this result with, we note that in the case of biphenyl, investigated in Ref. [32] at room temperature, the relative difference between the calculated and experimental MSDs was found to be lower than 20%, and the difference should decrease significantly at base temperatures.

Similarly, as anticipated, the total scattering cross section per formula unit, σ(E), for triptindane was calculated using the AFGA approximation included within the NCrystal module [34], and it is reported in Figure 3 for sample temperatures of both 20 K, similar to the one used for experiments on TOSCA, and 300 K. The low-temperature cross section is a clear example of how drastically such a function changes with the incident neutron energy, moving from the sum of the free scattering cross sections of the hydrogens and carbons in the molecule, at epithermal neutron energies, to the (approximate) sum of the bound scattering cross sections of the same elements in the cold-neutron energy limit. In the case of the room-temperature sample, it is also worth noting how the low-energy limit of σ(E) deviates significantly from the sum of bound scattering cross sections, showing the temperature dependence of the total cross section at energies below tens of meV. Finally, sample transmission spectra, T(E), as a function of the incident neutron energy, E, were obtained from the AFGA total cross sections assuming values of the sample thickness, d, of 1 mm and 2 mm, and a bulk sample density of 1.2 g/ml. The results were computed via the Beer-Lambert law, T(E) = exp[-n σ(E) d], with n the number density, and are shown in Figure 4.

Results

Once the AFGA-based synthetic values of σ(E), S(Q, ΔE) and T(E) were generated, they could be used as trial input files for the INSCorNorm algorithm, so as to calculate the self-shielding correction, f(θ, E, E_f). As discussed in detail in Ref.
[25], the self-shielding correction for a flat-geometry sample and a forward-scattering angle θ_f can be approximated, following Ref. [35], by an expression involving the sample transmission at the incident and final energies. The computed self-shielding corrections for a sample at 20 K and thickness values of 1 mm (blue line) and 2 mm (gray line) are reported in Figure 5. In both cases, the correction causes a marked attenuation of the INS data, enhanced as the sample thickness increases and as the transmission decreases, as one can expect. As a function of the neutron energy transfer on an indirect-geometry spectrometer, the correction is relatively flat above ca. 200 meV, while it decreases quickly as the energy decreases below ca. 150 meV. Therefore, one should take into consideration two main effects from the sample self-shielding. On the one hand, there is a significant - yet constant - contribution to the attenuation of the INS spectra related to the value of T(θ = θ_f, E_f). As the final energy on indirect-geometry instruments is generally around a few meV, one can appreciate that the transmission (total cross section) will be approximately at its minimum (maximum) within the dynamic range probed by the instrument. However, being constant in nature, such a contribution does not spoil the comparison of the intensities of the experimental VDoS spectra with the ones from computer simulations and models. On the other hand, there is a contribution related to T(θ = 0, E), i.e. to the incident neutron energy, that may change drastically over the dynamic range available on indirect-geometry INS spectrometers and that is responsible for the drop in the self-shielding correction below ca. 150 meV. In order to visualise the effect of the self-shielding correction on INS spectra, Figure 6 shows the same synthetic spectrum reported in Figure 2 together with the spectrum attenuated by the self-shielding correction for a 1-mm-thick sample at 20 K (blue line with circles). As already mentioned in the previous section, the effect of the attenuation becomes particularly relevant to the intensities of the low-energy modes, which undergo a suppression of about 15% in addition to the almost-constant suppression affecting the intensities at higher neutron energies.

Conclusions

In this work we have provided a worked example of how to implement the Average Functional Group Approximation within the INSCorNorm algorithm to obtain an accurate self-shielding correction of organic samples for which an experimental transmission spectrum is not available.

We have quantified the effects of self-shielding in the case of triptindane at 20 K, corresponding, in the case of a 1-mm-thick sample, to an additional 15% suppression of the low-energy vibrations to be added to the ca. 30% suppression affecting the vibrations above ca. 200 meV. Moreover, we have discussed how these corrections change depending on the sample thickness and temperature.

In conclusion, the inclusion of accurate sample self-shielding corrections for organic samples, such as those available within the AFGA model, is a simple yet needed step for the comparison of experimental and theoretical vibrational densities of states, beyond the already successful interpretation of the vibrational energies, towards a quantitative assessment of the vibrational intensities as well.

Figure 1. Workflow highlighting the addition of the AFGA method in the INSCorNorm algorithm. Adapted from [25].
Figure 2. INS spectra of triptindane from Ref. [30] (red dashed line) and the synthetic spectrum obtained using the AFGA model (green solid line) as a function of the neutron energy transfer. The latter is vertically shifted for the sake of clarity. The figure also shows the structure of the molecule, from [33].

Figure 3. Total cross section per formula unit of triptindane obtained from the AFGA model, corresponding to a sample temperature of 20 K (solid blue line) and 300 K (dashed blue line), as a function of the incident neutron energy.

Figure 4. Sample transmission of a sample of triptindane, as obtained from the AFGA model, corresponding to a sample temperature of 20 K (solid lines) and 300 K (dashed lines), and to a sample thickness of 1 mm (blue lines) and 2 mm (gray lines), as a function of the incident neutron energy.

Figure 5. Self-shielding correction, as a function of the neutron energy transfer, corresponding to a sample of triptindane, as obtained from the AFGA model, at a temperature of 20 K and for a sample thickness of 1 mm (blue line) and 2 mm (gray line).
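To make the transmission estimate concrete, the following sketch evaluates the Beer-Lambert attenuation for triptindane slabs of 1 mm and 2 mm at the bulk density quoted in the text. The molar mass is computed from standard atomic weights, while the energy-dependent cross section used here is a rough placeholder and should be replaced by the AFGA σ(E) per formula unit (e.g. as obtained from NCrystal).

```python
# Sketch: T(E) = exp(-n * sigma(E) * d) for triptindane (C23H18) slabs.
import numpy as np

N_A = 6.02214076e23          # 1/mol
MOLAR_MASS = 294.4           # g/mol for C23H18, from standard atomic weights
RHO = 1.2                    # g/cm^3, bulk density quoted in the text
n = RHO * N_A / MOLAR_MASS   # formula units per cm^3

# Placeholder cross section per formula unit, in barn (1 barn = 1e-24 cm^2):
# large in the cold-neutron limit, approaching the free-atom sum at high energy.
energies = np.geomspace(1e-3, 10.0, 200)               # incident energy in eV
sigma_barn = 450.0 + 1200.0 / (1.0 + (energies / 0.05) ** 0.8)

def transmission(d_cm):
    """Beer-Lambert transmission for a slab of thickness d_cm."""
    return np.exp(-n * sigma_barn * 1e-24 * d_cm)

for d_mm in (1.0, 2.0):
    T = transmission(d_mm / 10.0)
    print(f"d = {d_mm} mm: T ranges from {T.min():.2f} to {T.max():.2f}")
```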
3,007.2
2022-01-01T00:00:00.000
[ "Physics" ]
Oral delivery of glutathione: antioxidant function, barriers and strategies Glutathione (GSH) is a tripeptide with potent antioxidant activity, which is involved in numerous basic biological processes and has been used for interventions in various degenerative diseases. However, oral delivery of GSH remains challenging, similarly to that of other protein and peptide drugs, because the physicochemical barriers in the gastrointestinal (GI) tract lead to low oral bioavailability. Although several approaches have been explored to improve delivery, such as co-administration with penetration enhancers and enzymatic inhibitors, or encapsulation into nanoparticles, microemulsions and liposomes, appropriate formulations with clinical therapeutic effects remain to be developed. This review discusses approaches explored to developing an oral GSH delivery system that could provide protection against proteolytic degradation in the GI tract and enhance molecular absorption across the epithelial membrane. This system may be beneficial for the design and development of an oral formulation of GSH in the future. INTRODUCTION Glutathione (GSH) is a tripeptide containing the amino acids glutamic acid, cysteine and glycine [1]. Endogenous GSH is a potent antioxidant involved in many basic biological processes, including protein and DNA synthesis, cell proliferation, and oxidation/reduction signaling [2]. In the past decade, GSH has been used for various medical interventions in degenerative diseases such as Alzheimer's disease and Parkinson's disease [3][4][5][6][7]. Peptides, such as GSH, are chemical compounds composed of 2-50 amino acids linked by peptide bonds [8,9]. Endogenous peptides are involved in many physiological processes, acting as hormones, neurotransmitters, growth factors, ion-channel ligands or antiinfective agents [10,11]. Their unique pharmacological profiles and intrinsic properties have made peptides excellent drug candidates that are better tolerated and have lower toxicity than traditional "small molecule" drugs (<500 Da). Their highly selective receptor-binding properties ensure good clinical efficacy [12,13]. In recent years, interest in pharmaceutical research and formulation development of peptide therapeutics has increased. By 2018, 7000 naturally occurring peptides had been identified. More than 60 peptide drugs had been approved by authorities across the United States of America, Japan and Europe. In addition, approximately 150 peptides were in clinical trials, and more than 500 were in preclinical trials at the time [11,14]. Over the years, peptide-related patents have shown financial potential in the pharmaceutical industry and have led to remarkable profits. For example, Lupron TM Depot, a synthesized peptide drug used primarily to treat endometriosis and prostate cancer, achieved global sales of US $2.3 billion for Abbott Laboratories in 2011 [11,15]. Moreover, the global peptide drug market has been predicted to grow further, at a rate of 9-10% per annum [16]. In principle, peptides could have great value in medicinal applications; however their use has been severely restricted by physical and chemical barriers in the gastrointestinal (GI) tract after oral administration, thus resulting in low oral bioavailability [17]. The GI tract serves as a physical barrier including the intestinal epithelial membranes, tight junctions, unstirred water layer and efflux systems, which restrict peptide transport across the intestinal epithelium. 
Chemical barriers include the extremely acidic environment in the stomach and various GI-tract proteases, which catalyze hydrolytic or enzymatic degradation of the peptide drug. These barriers have made peptides unsuitable for administration as conventional oral formulations [18][19][20]. Most peptide drugs are therefore currently marketed as parenteral injections. Unfortunately, injections' invasive nature, and the associated pain and potential tissue damage have made these formulations uncommonly chosen for patient use. Therefore, strategies for developing peptide formulations with enhanced oral bioavailability have received attention among scientists worldwide, who are exploring novel groundbreaking approaches to deliver peptide drugs in the most convenient and patient-friendly manner. Currently only 13 oralform peptide drugs (tablets, capsules or solution) have been approved by the US Food and Drug Administration (FDA; Table 1) [21]. In the past two decades, numerous articles have reported novel technologies for oral peptide delivery. Strategies to increase oral bioavailability have included adding enzymatic inhibitors and/or penetration enhancers in the formulation, or chemical modification of peptides to form analogs and pro-drugs. Many innovative techniques using nanocarrier systems have also been evaluated, including polymeric nanoparticles, solid lipid nanoparticles, liposomes and niosomes [22][23][24]. As a peptide, GSH faces the same challenges as all other peptides in oral-formulation development. Although some studies have suggested promising potential for enhancing GSH oral bioavailability through different strategies [22,23,25], further evaluation remains essential. In this review, we highlight the physiological roles and molecular properties of GSH; the enzymatic and physical barriers of GSH uptake and transport across intestinal epithelial membranes in the GI tract; and the strategies used to enhance oral bioavailability of GSH and other peptide therapeutics (including using enzymatic inhibitors, permeation enhancers, chemical modification and formulation approaches). RATIONALE FOR THERAPEUTIC GSH Oxidative stress is a biological imbalance between the plasma concentration of reactive oxygen species (ROS) and the systemic ability to scavenge ROS and repair the resulting damage to proteins, lipids and DNA [26]. ROS (also referred to as free radicals) [27] are highly reactive, and include the superoxide radical ·O 2 -, the peroxide O 2 2-, the hydroxyl radical ·OH and nitric oxide ·NO [28]. The outer shell of these molecules has one or more unpaired electrons, thus making ROS highly unstable and prone to reacting with various organic substances, including lipids, proteins and DNA [29,30]. Releasing free radicals is a mechanism used by the human immune system to destroy the structures of invading pathogenic microorganisms [29]. However, chronic accumulation of free radicals in vivo can be harmful by causing oxidative stress, which has been demonstrated to be responsible for the development of degenerative diseases [31]. Two pathways result in high concentrations of free radicals in the human body: the first is the accumulation of free radicals that are produced endogenously via normal cell metabolism, whereas the second is build-up through environmental factors such as pollution, cigarette smoking, radiation or medication use [32]. 
The human body has defense mechanisms to counteract oxidative stress by either endogenously producing antioxidants such as GSH or exogenously acquiring them through food and/or supplements. During the process, antioxidants act as free-radical removers that neutralize oxidants/free radicals, thereby protecting cells, repairing cellular damage, promoting enhanced immune function and decreasing the risk of subsequent diseases [33]. As an antioxidant, the reduced form of GSH is readily oxidized into GSH disulfide by free radicals and/or reactive oxidative species, owing to its cysteine residue. The intracellular balance of both forms of GSH determines the antioxidative state and capacity of cells [34]. Cellular structure and discovery of GSH GSH (N-(N-L-γ-glutamyl-L-cysteinyl) glycine) is a tripeptide composed of γ-glutamic acid, cysteine and glycine [35,36] (Figure 1). It was first discovered by Rey-Paihade in 1888 from extracts of yeast and many animal tissues including skeletal muscle, liver, intestine, brain and fresh egg white [37]. In 1929, Pirie and Pinhey reported the molecular structure of GSH as a tripeptide, as confirmed by Harington and Mead in 1935 after successful chemical synthesis based on N-carbobenzoxycysteine and glycine ethyl ester [37]. Since then, the molecular structure of GSH has been well established: a γ-carboxyl peptide bond links the carboxyl group of the glutamate side chain with cysteine, and a normal peptide linkage bonds cysteine's carboxyl group to glycine [35,36]. Although exogenous GSH can come from many sources, endogenous GSH is produced mainly in the liver during normal cellular metabolism, and is abundant in the cytoplasm, nuclei and mitochondria in all living cells [32,[37][38][39]. Endogenous GSH is synthesized via two steps: the first step is the formation of γ-glutamylcysteine from glutamate and cysteine, catalyzed by glutamate-cysteine ligase. The second step is the formation of GSH from the reaction between γ-glutamylcysteine generated in the first step and glycine, catalyzed by GSH synthetase [40]. GSH is hydrophilic and is quickly degraded in vivo, with an elimination half-life of 10 min in the human body [35] and 2-3 h in rat liver [41]; consequently, GSH has extremely low bioavailability via the oral route. Cellular GSH is degraded via hydrolysis catalyzed by γ-glutamyl-transpeptidase, which breaks the peptide bond linking glutamate and cysteine, thus generating glutamate and cysteinylglycine, which are further degraded into cysteine and glycine by dipeptidases [41]. Roles of GSH in preventing degenerative diseases GSH has been well studied and accepted as a potent antioxidant participating in numerous basic cellular processes, such as protein synthesis, DNA synthesis and repair, cell proliferation and redox signaling [42]. Additionally, it plays a major role in detoxifying various electrophilic compounds such as heavy metals [43,44]. Naturally, glutathione exists in two forms in living cells: the thiol-reduced form (L-GSH) and the disulfide-oxidized form (GSSG). In healthy cells, L-GSH is the predominant form, accounting for >98% of total GSH [45,46], and is stored mostly in the cytosol (80-85%) and mitochondria (10-15%), whereas a small amount is stored in the endoplasmic reticulum [45]. The ratio of GSSG to L-GSH in cells represents the oxidative stress level [45]: the higher the ratio of GSSG to L-GSH in cells, the greater the oxidative stress. 
The antioxidant function of GSH is generally accepted to be associated with its scavenging activity toward free radicals that accumulate during oxidative stress [47,48], thereby protecting living cells from oxidative damage by neutralizing excessive ROS [1]. Deficiency in GSH can cause excess oxidative stress and cellular dysfunction, thus leading to various degenerative and chronic diseases including cancers, cardiovascular diseases, neurodegenerative diseases (Parkinson's disease and Alzheimer's disease) and glaucoma [3][4][5]. Studies have demonstrated that most human degenerative diseases, as well as the general human aging process, involve deleterious free radical reactions, which are typically caused by ROS [49,50]. For example, the cardiovascular condition atherosclerosis involves the build-up of fatty deposits on the endothelium of blood vessels whose structure has been damaged by ROS [49,50]. In cancerous diseases, the first mutagenic event is typically caused by ROS reactions. Interestingly, oxidative processes also help metastasized cancer cells attach to tissues [49,50]. Finally, the eye has a high concentration of unsaturated lipids and, owing to its poor clearance mechanisms, is defenseless against oxidative damage [49,50], thus leading to various age-related eye diseases. Unfortunately, most of these disorders have no cure. Consequently, preventive strategies are applied, such as health supplements, including GSH, which can slow degenerative processes. Mechanism of GSH as an antioxidant The mechanism of GSH as an antioxidant can be explained by cellular oxidation and reduction (redox) reactions between the sulfhydryl group of the molecule and GSH-related enzymes [38]. GSH contains a functional sulfhydryl group (also known as a thiol group) on its cysteine moiety, consisting of sulfur bonded to a hydrogen atom. GSH's primary antioxidative role is to maintain the redox state of sulfhydryl groups of important proteins by forming a disulfide bridge, which protects the structures of those important proteins (Figure 2). GSH protects the body against oxidative stress both directly and indirectly. In direct protection, GSH, in a process catalyzed by glutathione peroxidase, scavenges ROS such as hydrogen peroxide (H2O2) by donating an electron, thus forming GSSG and water [41]. Indirectly, GSH is involved in producing other critical cellular antioxidants, such as vitamins C and E, by serving as the electron source [2,51]. Evidence of GSH's medicinal function In the past two decades, interest in studying GSH as a therapeutic agent has increased. Many clinical trials and in vitro studies have been performed to evaluate the clinical value of GSH by using various administration routes (intravenous, nasal, pulmonary and oral). Some encouraging results have been reported. For example, Cascinu et al. performed a randomized double-blind placebo-controlled trial of GSH injection [50,52] in 50 patients with advanced gastric cancer who were receiving weekly cisplatin treatment. In the treatment group, GSH was given intravenously at a dose of 1.5 g/m2 in normal saline solution immediately before cisplatin administration. By the 9th week of the study, no patients treated with GSH showed signs of neuropathy, whereas 16 of 18 patients in the control group did. By the 15th week, only 4 of 24 patients in the GSH group had developed neurotoxic symptoms.
Cascinu conducted a similar study in 52 patients who received an infusion of either GSH at 1,500 mg/m2 or saline placebo over 15 min before oxaliplatin treatment [53,54]. Clinical and electrophysiologic assessment was performed at baseline and after 4, 8 and 12 cycles of treatment. At the 4th cycle, 7 of 26 patients showed clinical signs of neuropathy (grade 1 or 2) in the GSH group, compared with 11 of 26 in the placebo group. After eight cycles, 9 of 21 patients in the GSH group experienced grade 1 or 2 neuropathy, compared with 15 of 19 in the placebo group. In terms of grade 3 or 4 neurotoxicity, zero cases were observed in the GSH group, compared with five in the placebo group. For various reasons, only 18 patients completed 12 cycles of treatment; among them, only 3 of 10 patients in the GSH group developed neuropathy (grade 2 to 4), compared with 8 of 8 in the placebo group. Therefore, both studies concluded that GSH may aid in preventing drug-induced neuropathy in platinum treatment without affecting the drug's chemotherapeutic activity (both cisplatin and oxaliplatin) [53,54]. Although various routes have been studied for GSH-containing formulations, e.g., injections as anticancer agents [53,55] and eye drops for glaucoma treatment [56], an oral formulation of GSH has long been the most desirable administration route, owing to the low cost of production and excellent patient compliance. Recent studies have suggested that oral administration of GSH may enhance both blood and tissue GSH levels in rats and also lead to GSH restoration in the intestinal mucosa under oxidative stress conditions [40,57]. Oral GSH supplements have been suggested to provide a therapeutic strategy for the treatment of diseases caused by abnormalities in ROS levels in tissues. Importance of oral formulation of GSH Although no single GSH-containing preparation has been approved by the FDA as a therapeutic agent, many GSH supplements are already available on the current market in different forms, such as injections [58], lozenges [59], oral sprays [60], oral liquids [61] and oral capsules [62]. Moreover, several studies have been performed to evaluate the therapeutic potential of GSH in different formulations, such as injections (for prevention of drug-induced neurotoxicity, as discussed previously), eye drops (for glaucoma treatment) [56], dermal preparations [63] and oral formulations. GSH supplements may potentially provide a therapeutic strategy for diseases caused by abnormalities in tissue ROS levels. Among all formulations being studied, oral formulations have long been the most desirable strategy for researchers, because of their low cost of production, excellent patient compliance, convenience of storage and transport, and good shelf life. However, the primary challenge in oral formulation of GSH is its extremely low bioavailability, owing to the physical and enzymatic GI barriers. Therefore, this review focuses on investigative strategies that may improve GSH oral bioavailability by using various pharmaceutical modifications. BARRIERS TO ORAL DELIVERY OF PEPTIDES Orally administered peptides face several barriers in the GI tract. The GI tract's predominant functions are to digest food; absorb essential nutrients, electrolytes, and fluids; and excrete waste. Simultaneously, the GI tract serves as a physicochemical barrier protecting the human body from systemic invasion of toxins, antigens and pathogens [64].
To be absorbed into the blood stream, intact drug molecules, including peptides and proteins, must diffuse either between or through the intestinal epithelial cells. This process is hampered by physical and biochemical barriers in the GI tract [18]. The epithelial membranes in the GI tract act as physical barriers that selectively allow the transportation of drug molecules. The phospholipid bilayer structure of the epithelial membrane restricts the transcellular transport of hydrophilic macromolecules (e.g., peptide and protein drugs), whereas the tight junctions are responsible for limiting paracellular transport [64]. Because absorption is a slow process that does not occur readily, peptide and protein drugs remain vulnerable in the GI tract, and their enzymatic degradation occurs at multiple sites along the GI tract, including the brush border, lumen and intracellular environment. Physical barriers The physical barriers to oral delivery comprise the cell lining itself in the GI tract, which includes the intestinal cell membranes and tight junctions between neighboring cells, as well as the unstirred water layer and efflux systems, which play important roles in regulating drug absorption [64]. Intestinal epithelial cell membrane. The anatomic structures of the intestine have been well described in the literature [65]; here, only the functional details associated with barriers to drug transportation and absorption are discussed. The intestinal wall primarily comprises a monolayer of column-like epithelial cells, with goblet cells, enterocytes, endocrine cells and Paneth cells interspersed in the architecture [65,66]. Drug transportation and absorption after oral administration may depend on the physiochemical properties of the bioactive molecules, including their size, charge, lipophilicity, hydrogen-bonding potential and solution conformation, which are constrained by Lipinski's rule of five [67]. The phospholipid bilayer structure of epithelial cell membranes allows cells to have semi-permeable properties, thus enabling lipophilic drug molecules to be absorbed transcellularly via passive diffusion (Figure 3B). However, hydrophilic or highly charged molecules and macromolecules, such as peptide and protein drugs, in principle face great difficulties in transcellular absorption unless they are recognized and transported via a carrier-mediated pathway or endocytosis (Figure 3C and 3E, respectively). Although the size of molecules is recognized as the fundamental limitation to the oral absorption of peptide and protein drugs, some successes in the development of oral dosage forms of polypeptides have been achieved, for instance, cyclosporin A and desmopressin [68]. Unstirred water layer. The unstirred water layer acts as an essential physical barrier to drug absorption, and its thickness is controlled by the rate of mucus secretion and the rate of layer shedding. Although this water layer is continually being turned over, drug molecules must still move through it to reach the epithelial surface [70]. Additionally, complexation/binding interactions between the diffusing drug molecules and mucins contribute to the barrier function [71]. Tight junctions. Tight junctions are dense, hydrophobic intercellular structures that regulate the paracellular pathway of GI drug absorption [71].
From the apical to the basolateral epithelial membrane, junctional complexes are divided into three layers: apical tight junctions (zonula occludens), underlying adherens junctions (zonula adherens) and basal desmosomes (macula adherens) [72]. Tight junctions form a continuous intercellular barrier among adjacent epithelial cells, thus creating a selective channel for solute movement across the epithelial membrane. The selectivity of tight junctions is regulated predominantly by claudins, a family of transmembrane proteins. They continuously seal the spaces between neighboring epithelial cells on the apical side and hence create a physical barrier for drug absorption [73]. Tight junctions primarily regulate the absorption of hydrophilic molecules across the epithelial membrane. Transport efficiency through this paracellular pathway is determined by the molecular size and polarity of the substances absorbed [74]. Tight junctions permit the intercellular diffusion of small hydrophilic molecules (e.g., ions, nutrients, and certain small drugs) while preventing large hydrophilic molecules (e.g., peptide and protein drugs) from passing through [17]. Tight junctions are generally considered dynamic structures that can be affected by certain chemicals such as Ca 2+ chelators, surfactants and cationic polymers, thus increasing their permeability [17]. With the absence of peptideolytic and proteolytic activities in the paracellular transportation, the formulation design of peptide and protein drugs for oral applications via this route has drawn increasing attention from scientists. Efflux systems. Efflux systems are also considered an essential part of the physical barrier in the GI tract. Efflux systems consist of a protein transporter functioning via an intracellular pathway, and are responsible for the poor oral bioavailability of certain compounds, particularly peptides and proteins [75][76][77] (Figure 3D). P-glycoprotein (P-gp), one efflux system, is located on the apical side of the epithelial cell membrane, where it actively pumps drug molecules from inside the epithelial cells back into the intestinal lumen [78]. P-gp was first discovered in cancer cells [79], and it has since been found in high levels in healthy tissues, such as cells of the intestine, liver, kidney, blood-brain barrier and placenta [79,80]. Biochemical barriers The biochemical barriers to oral peptide drug delivery systems include the acidic gastric environment and the presence of various metabolizing enzymes and luminal bacteria [81]. The pH of intestinal fluid varies considerably along the GI tract; consequently, the mechanism of pH-dependent hydrolysis varies in different parts of the intestine. The enzymatic barrier is the primary obstacle to oral peptide delivery. Enzymes catalyzing proteolysis or peptidolysis are located at specific sites of the GI tract. For example, pepsin is located in the stomach, and elastase, carboxypeptidases A and B, chymotrypsin and trypsin are located in the intestines, after secretion by the pancreas. Owing to the wide distribution of digestive enzymes, enzymatic degradation can occur at multiple sites throughout the GI tract. Meanwhile, degradation can also occur at multiple linkages of the peptide backbone [23]. Microorganisms in the colon secrete enzymes responsible for reactions including decarboxylation, deglucuronidation, amide hydrolysis and dihydroxylation, and the reduction of double bonds and esters [81]. 
Under the specific conditions of the GI tract, protein molecules are broken into polypeptides, and polypeptides are further broken into smaller units, such as bi- or tri-peptides and/or single amino acids, via peptidolysis, before being transported across the GI-tract membrane into the bloodstream [82][83][84]. These smaller units are the essential components that facilitate several crucial biological processes, including DNA synthesis. Unfortunately, the same mechanisms pose challenges to the oral delivery of peptide drugs, because of their chemical and structural similarities to ingested proteins. PROSPECTIVE PHARMACEUTICAL STRATEGIES FOR IMPROVING GSH BIOAVAILABILITY The physicochemical properties of GSH as a peptide drug have severely restricted the clinical development of oral dosage forms, owing to its limited membrane permeability and proneness to enzymatic degradation, primarily in the jejunum [85]. Studies have suggested that the thiol group of GSH is susceptible to γ-glutamyltranspeptidase in the jejunum and is oxidized to GSSG [85], thus resulting in loss of its antioxidant activity. Therefore, strategies focusing on improving GSH's physicochemical profiles and stability in the GI tract could potentially lead to a breakthrough in formulation development, enabling enhanced oral bioavailability. These strategies include chemical modifications, formulation approaches and nanocarrier technologies. Chemical modification strategies Chemical modification is an approach to modify the native structure of a peptide or a protein drug to enhance its stability and absorption across the epithelial membrane [86,87]. Application of prodrugs is one such strategy. A prodrug is defined as a biologically inactive derivative that is metabolized in the body and converted into a pharmacologically active drug. A prodrug protects the parent drug from enzymatic and/or chemical degradation in the GI tract and increases its permeability across the biological membrane; the pharmacological activity is subsequently restored by systemic enzymatic cleavage before the drug reaches its site of action [23,88]. For example, one study has reported that a prodrug of GSH (L-cysteine-glutathione mixed disulfide), compared with a saline control, has better bioavailability in mice after oral administration, thus protecting mice against the hepatic toxicity of acetaminophen. Application of an analog is another effective method for improving a parent drug's therapeutic effect. S-allyl glutathione (SAG) is an analog of GSH obtained through modifying the thiol group of GSH with an allyl group [89]. The S-allyl group has been demonstrated to have an anticancer effect by inhibiting topoisomerase activity, thus resulting in cell cycle arrest and cell death [90,91]. One study has investigated the anticancer effects of SAG by using SAG-containing selenium nanoparticles [89]. After a 12-h treatment with the formulation, SAG was released from the nanoparticles effectively into a hepatocarcinoma cell line in both acidic (pH 5.3) and neutral (pH 7.4) conditions, with release rates of 72% and 67%, respectively. Moreover, the SAG antiproliferation effect was improved by selenium nanoparticles: the concentration of SAG required to achieve an anticancer effect was lower than that of SAG alone in vitro. Absorption enhancers Absorption enhancers are a group of functional additives incorporated into formulations to improve the permeability of drugs across biological membranes.
This approach has long been investigated and applied in the development of oral formulations for protein and peptide drugs [92,93]. Their mechanisms include chemically opening tight junctions, decreasing mucous viscosity and changing intestinal membrane fluidity [93,94]. Commonly used absorption enhancers and candidate drugs whose absorption has been enhanced are listed in Table 2. Chitosan, a nontoxic biocompatible polymer, is a commonly used absorption enhancer for peptide drug formulations [103]. A study conducted by Liu et al. has illustrated that chitosan enhances the permeation and absorption of cyclosporine A, an immunosuppressive agent, across the intestinal membrane in vivo in rats [95]. To overcome the limitation of the solubility of chitosan in a neutral pH environment (as in intestinal tract), the use of chitosan derivatives has led to more effective intestinal absorption enhancement. For example, trimethyl chitosan chloride considerably increases the intestinal permeability of peptide analogs. Chitosan and its derivatives reversibly widen tight junctions, thus enhancing the biological penetration of peptide drugs [95]. Surfactants and detergents are another group of absorption enhancers that reversibly disrupt the phospholipid structure of the membrane, and consequently open tight junctions. Examples include dodecyl sulfate; sodium caprate; and long-chain acylcarnitine, fatty acids and glycerides [17]. Interestingly, GSH has been used as an absorption enhancer in some studies. One study has reported that GSH significantly enhances the permeability of sodium fluorescein across the intestinal epithelium, owing to the disruption of membrane integrity. That study has reported a significant increase in the permeability of guinea pig mucosa to sodium fluorescein in vitro with increasing concentrations of GSH from 0.1% to 0.4%, compared with control medium without GSH [104]. Permeation enhancement has also been observed for GSH used in combination with polycarbophil cysteine. These results have been further confirmed by another study using sodium caprate, a widely recognized absorption enhancer, for comparison [105]. Despite the favorable function of absorption enhancement, important disadvantages of these absorption enhancers have also been reported: these compounds may themselves penetrate biological membranes and cause systemic toxicity. In addition, the disruption of the epithelial membrane structure might potentially have prolonged effects and compromise biological functions [106,107]. Enzymatic inhibitors Oral peptide drugs are degraded by various proteases in the GI tract, such as trypsin, chymotrypsin, peptidases, and other proteolytic enzymes. Enzymatic inhibitors are molecules that bind these enzymes and decrease their activity [23]. In a promising approach, concomitant administration of enzyme inhibitors has been found to restrict the metabolism of proteins and peptides, thus increasing the availability of intact peptide drug molecules for absorption across the intestinal membrane [93]. Aprotinin (a small protein with a molecular weight of 6500 Da) is a competitive enzyme inhibitor of several serine proteases, such as trypsin and chymotrypsin [108]. 
It has been used as an enzyme inhibitor in various studies investigating protein and peptide drug absorption across the intestinal membrane. Table 2 pairs each commonly used absorption enhancer with an example drug whose absorption has been enhanced: citric acid with insulin [96,97]; cyclodextrins with limaprost [98]; glycerides with DuP 532 [99]; lauroyl carnitine chloride with insulin [100]; sodium lauryl sulfate (SLS/sodium dodecyl sulfate) with cefazolin [101]; and sodium N-[8-(2-hydroxybenzoyl) aminocaprylate] with semaglutide [102]. One study has revealed that insulin-containing microemulsions concomitantly orally administered with aprotinin, compared with those without aprotinin, significantly decrease plasma glucose levels between 90 and 120 min after administration in both nondiabetic and diabetic rat models [109]. Pechenkin et al. have investigated the effects of several protease inhibitors (aprotinin, soybean-derived Bowman-Birk inhibitor and Kunitz soybean trypsin inhibitor) on oral delivery of insulin. Insulin is well protected from proteolytic degradation (triggered by trypsin and chymotrypsin) when encapsulated with these enzyme inhibitors, as compared with unadulterated insulin solutions, in vitro [110]. Bacitracin, a cyclic polypeptide antibiotic with a molecular weight of 1422.7 Da, is another enzyme inhibitor that effectively inhibits various proteases, including trypsin, pepsin and aminopeptidase [111]. An in vitro study has reported that bacitracin, camostat mesilate and sodium glycocholate decrease insulin degradation in rat large intestinal homogenate [112]. Although no published studies have described the use of an enzymatic inhibitor with GSH, the process would be expected to follow the same approaches applied to other oral peptide drugs. The limitations of using enzyme inhibitors in peptide drug delivery include systemic toxicity, digestive disorders and pancreatic islet cell hyperplasia [113], which must be carefully considered for formulation development. Formulation approaches The properties of chemical materials change as their particle sizes approach the nanoscale, because the increase in the ratio of surface area to volume may cause nanoscale particles to exhibit optical, physical and chemical properties significantly different from those of larger particles [114]. Nano-sized carriers offer many advantages in protein and peptide delivery, including high physical and chemical stability, high drug loading capacity, capability of incorporating both hydrophilic and hydrophobic drugs, and enhanced bioavailability with sustained-release properties [22]. In addition, nano-carriers can be designed as formulations with various administration routes, e.g., oral, nasal, dermal, pulmonary and parenteral routes [22]. Numerous forms of nanocarriers have been widely studied. This review discusses microemulsions, nanoparticles, liposomes, niosomes and proniosomes. The transport mechanisms of these various formulation approaches over the barriers are illustrated in Figure 4. The mechanisms, advantages and limitations of these strategies are also summarized in Table 3. Microemulsions. A microemulsion is defined as a dispersion of oil, water and surfactant (with co-surfactant). It is a spontaneously forming liquid mixture that is transparent, optically isotropic and thermodynamically stable, with droplet sizes ranging from 10 to 200 nm [122]. Three types of microemulsions exist, according to the internal and external phase: oil-in-water, water-in-oil and bicontinuous [23] (Figure 5).
Compared with colloidal systems, such as suspensions, microemulsions have several advantages as drug carriers, such as improved drug solubility, longer shelf life, enhanced bioavailability and ease of preparation [123]. Therefore, formulation designs using microemulsions for oral peptide and protein drug delivery have received great interest. For example, Çilek et al. have developed a lecithinbased microemulsion formulation of recombinant human insulin with aprotinin for oral administration, aiming to examine the hypoglycemic effects in non-diabetic and streptozotocin-induced diabetic rats [115]. After oral administration, the insulin-containing microemulsion (with or without aprotinin), compared with unformulated oral insulin solution, has been found to decrease plasma glucose levels by approximately 30%, with effects lasting for approximately 90 min. In a study conducted by Wen et al., microemulsions applied as a GSH delivery system, compared with a colloidal emulsion system and GSH alone, have been found to achieve sustained-release profiles of GSH. In vitro profiles from the study indicated that the microemulsion might have provided sustained release of GSH after oral administration, thus suggesting the promise of this oral delivery system with enhanced GSH bioavailability [124]. A microemulsion oral solution of cyclosporin, Neoral, has been approved by the FDA. This product is used to prevent organ rejection after transplantation (of the liver, kidneys and heart) and for treatment of rheumatoid arthritis and psoriasis [116]. Research has demonstrated that microemulsions may serve as promising drug carriers for oral protein and peptide drug delivery, and therefore can be considered in designing oral formulations for GSH. Despite being promising delivery systems, microemulsions may be concerning because of their potential toxicity due to the high surfactant concentration. This toxicity must be addressed in designing oral delivery systems [125]. Nanoparticles. Nanoparticles have been extensively studied as peptide and protein drug delivery systems in the past decade. They are defined as colloidal particles (consisting of biodegradable or nonbiodegradable polymers) [126]. The advantages of using nanoparticles as peptide drug delivery systems include their long circulation half-life in vivo and low hepatic filtration, thus enhancing stability and bioavailability [126]. The small size of nanoparticles allows for higher cellular uptake of peptide drug molecules, thus improving drug absorption across biological membranes. Furthermore, the use of nanoparticles as drug carriers may result in fast drug release because of the increased surface area corresponding to the small particle size [127]. However, limitations of such formulations have been reported, including cytotoxicity due to altered regulatory function of endothelial cells [128]. These challenges must be overcome before nanoparticles are applied as peptide drug carriers [129]. Many studies have evaluated the potential of using nanoparticles as delivery systems for oral GSH formulations. To examine GSH's therapeutic effects on intestinal diseases caused by oxidative stress, Bertoni et al. have developed solid lipid nano-scaled particles (250-355 µm) loaded with GSH [26]. They have found that the encapsulation capacity of GSH is as high as of 20% w/w, and GSH's physicochemical properties are effectively retained during the process. 
Moreover, varying the composition of the formulation can modulate the release of GSH: the more hydrophobic the lipid contained in the particles, the longer the GSH release time in intestinal fluids. The authors have concluded that these GSH containing formulations co-administered with another antioxidant (catalase) show excellent radical scavenging activity by decreasing intracellular ROS levels, and display superior antioxidant activities to rescue H 2 O 2 oxidation in vitro [26]. Another study by Alobaidy has investigated the effects of chitosan-formulated nanoparticles on the oral bioavailability of GSH. In that study, GSH-loaded nanoparticles have shown a rapid and prolonged release profile of GSH after oral administration comparable to the profile of subcutaneously administered GSH in vivo in rats. The effect is dose-dependent, and the plasma concentration of GSH in rats is proportional to the GSH dose uploaded in the nanoparticles [117]. A separate study has reported that the release of GSH from nanoparticles (composed of basil seed gum loaded with GSH) is pH dependent. In vitro studies have indicated faster and more comprehensive GSH release in pH 6.8 (mimicking the intestinal environment) than pH 1.2 (mimicking the stomach environment) [130]. Liposomes. Liposomes are spherical particles composed of an aqueous core surrounded by one or more phospholipid bilayers, generally with sizes ranging from 20 nm to 10 µm [23,131,132]. Liposomes can entrap both hydrophilic drugs (in the aqueous core) and hydrophobic drugs (in the lipid bilayers) (Figure 6a) [133,134]. Liposomes can be categorized into multilamellar vesicles and unilamellar vesicles, which can be further classified into small unilamellar vesicles and large unilamellar vesicles. A unilamellar liposome has a single phospholipid bilayer, whereas a multilamellar vesicle has an oniontype structure [135]. The use of liposomes for oral peptide and protein drug delivery has been investigated for many years, owing to their unique advantages, including favorable biocompatibility, protection of drug molecules from the harsh environment of the GI tract, and enhanced cellular uptake and transport. However, disadvantages such as high manufacturing cost, formulation instability and time consumption remain challenging during formulation development [134,136,137]. Many studies have investigated liposomes for the oral delivery of GSH. A clinical study of 12 healthy adults has revealed that oral administration of liposomal GSH supplements significantly increases GSH levels in the body. Compared with those at baseline, the GSH levels increased by 40% in whole blood, 25% in erythrocytes, 28% in plasma (1 week after administration) and 100% in peripheral blood mononuclear cells (2 weeks after administration). Elevated immune function markers and decreased oxidative stress were also observed [40]. Another study has examined the effects of a proliposome formulation on the oral bioavailability of GSH and formulation stability. The structure of GSH was found to be maintained in the proliposome formulation. Compared with commercially available capsules and pure GSH, the proliposomes prepared in this study displayed a more than one-fold increase in the oral bioavailability of GSH in rats. Moreover, no significant changes in particle size and zeta-potential of the formulation were observed. 
Hence, the authors concluded that proliposome formulations might be applied as a novel delivery system for oral administration of GSH with enhanced oral bioavailability and stability [118]. Niosomes. Niosomes are nano-structured vesicles with a size ranging from 10 nm to 3 µm [138], produced from surfactants and cholesterol in an aqueous medium (Figure 6b) [22,139]. Their structures are similar to those of liposomes, as small or large unilamellar or multilamellar vesicles, and they are prepared through similar production procedures [140,141]. As drug delivery systems, niosomes can accommodate both hydrophilic and hydrophobic drugs, and generate sufficient surface areas to facilitate targeted drug delivery to the site of therapeutic action, thereby increasing drug efficacy and decreasing adverse effects. In addition to possessing all the advantages of liposomes [142], niosomes have unique features that overcome the limitations associated with liposomes, such as difficulties in scaling up production, the high cost of organic materials used for preparation and low physical stability [22]. Niosomes have been extensively studied as drug delivery systems, and their applications have been widely used in various pharmaceutical fields such as topical, oral, parenteral and transdermal application [22]. Owing to their biodegradability, biocompatibility, non-immunogenicity and low cost with respect to those of liposomes, niosomes have increasingly drawn attention as nanocarriers in GSH oral delivery [142]. For example, a recently published study has evaluated GSH-loaded niosomes' hepatic protection, hepatic cell uptake and GSH bioavailability [120]. The study has reported that after oral administration, GSH-containing niosomes, compared with the pure GSH solution, significantly restored rat liver damage (induced by CCl4 administered intraperitoneally; P < 0.05). The GSH content in liver tissues was 15.90 µg/g for the GSH-containing niosome group and 9.91 µg/g for the GSH solution group, whereas the baseline in damaged liver was 8.15 µg/g. Stability studies indicated no significant change in particle size, zeta-potential, polydispersity index and encapsulation efficiency after storage for 4 weeks at room temperature or 4°C. The formulation also protected GSH against the stomach environment, showing 35.5% drug release at pH 1.2 (mimicking the stomach) compared with 45% at pH 6.8 (mimicking the small intestine) after 6 h of incubation in vitro. This pH-sensitive drug release profile of GSH-containing niosomes has been demonstrated by another study, which also indicated this nanocarrier's non-toxic effect on cells in vitro, even at high concentrations of GSH (400 µg/mL) [119]. The anti-cancer effects and sustained protein alteration effects observed in that study lasted for 48 h [119]. Therefore, niosomes may serve as future drug carriers for oral GSH delivery for therapeutic purposes. Proniosomes. A proniosome is a dry, free-flowing granular product that is hydrated after contacting aqueous media, thus forming a niosome dispersion immediately before use [143]. Proniosomes provide all the advantages of niosomes, such as better chemical stability and lower preparation cost than liposomes. Additionally, proniosomes exhibit better physical stability than niosomes because of their dry nature; niosome suspensions, by contrast, face problems that must be addressed during storage, including aggregation and fusion of vesicles, and leaking and hydrolysis of entrapped drug molecules [144][145][146].
Consequently, with their prolonged shelf lives, proniosome formulations may provide convenience in transportation, storage and distribution for large-scale pharmaceutical production. Furthermore, owing to their dry state, proniosomes can be further processed into granules, tablets or capsules, thus providing a better approach to unit dosing design than the liquid dosage form of niosomes [147]. Studies have investigated proniosome preparation and physicochemical characteristics, including particle size/distribution analysis and drug release profiles [147,148]. Compared with conventional niosomes, niosome dispersions derived from proniosomes are easier to prepare, without requiring long agitation times. In addition, proniosome-derived dispersions tend to display better particle size uniformity, whereas their drug entrapment efficiency and in vitro drug release profiles remain unchanged [144][145][146]. Another study has evaluated the effects of proniosomes on the oral absorption of vinpocetine, a poorly water-soluble drug. In an in vitro study using segments from different intestinal regions of rats, drug-entrapped niosomes showed significantly higher permeability than an unformulated vinpocetine suspension. Similar increases in absorption have been observed in vivo after oral administration of the niosome formulations to rabbits. The proniosomes also displayed sustained physical and chemical stability: no significant changes in particle size or drug entrapment efficiency were observed relative to measurements taken 6 months earlier [121]. Therefore, proniosomes may be an ideal nanocarrier to deliver protein and peptide drugs with low bioavailability and poor stability in the GI tract. CONCLUSION The main challenge in the oral delivery of protein and peptide drugs is their enzymatic degradation in the GI tract, which is the main cause of their low oral bioavailability. To address this problem, many approaches have been studied, including chemical modification, absorption enhancers, enzymatic inhibitors and formulation strategies, e.g., microemulsions, nanoparticles, liposomes and niosomes. In general, every strategy has its own advantages and limitations as an oral drug carrier. Therefore, the best approach to an effective oral delivery system for protein and peptide drugs, as well as GSH, would be a comprehensive formulation combining multiple strategies, depending on the physicochemical characteristics of the drug molecules. On the basis of this review, niosomes might be the ideal drug carrier for GSH oral delivery, owing to their unique advantages and cost-effectiveness. However, further study is needed to examine the feasibility of this approach in terms of pharmacokinetic and pharmacodynamic profiles.
Personality Classification from Online Text using Machine Learning Approach Personality refers to the distinctive set of characteristics of a person that affect their habits, behaviours, attitudes and patterns of thought. Text available on social networking sites provides an opportunity to recognize an individual's personality traits automatically. In this proposed work, a machine learning technique, the XGBoost classifier, is used to predict four personality traits based on the Myers-Briggs Type Indicator (MBTI) model, namely Introversion-Extroversion (I-E), iNtuition-Sensing (N-S), Feeling-Thinking (F-T) and Judging-Perceiving (J-P), from input text. A publicly available benchmark dataset from Kaggle is used in the experiments. The skewness of the dataset is the main issue associated with prior work; it is minimized here by applying a re-sampling technique, namely random over-sampling, resulting in better performance. For further exploration of personality from text, pre-processing techniques including tokenization, word stemming, stop-word elimination and feature selection using TF-IDF are also exploited. This work provides the basis for developing a personality identification system that could assist organizations in recruiting and selecting appropriate personnel and in improving their business by knowing the personality and preferences of their customers. The results obtained by all classifiers across all personality traits are reasonably good; however, the performance of the XGBoost classifier is outstanding, achieving more than 99% precision and accuracy for different traits. Keywords—Personality recognition; re-sampling; machine learning; XGBoost; class imbalance; MBTI; social networks I. INTRODUCTION Personality of a person encircles every aspect of life. It describes the patterns of thinking, feeling and characteristics that predict and describe an individual's behaviour, and it also influences daily life activities including emotions, preferences, motives and health [1]. The increasing use of social networking sites, such as Twitter and Facebook, has propelled the online community to share ideas, sentiments, opinions, and emotions with each other, reflecting their attitude, behaviour and personality. Obviously, a solid connection exists between an individual's temperament and the behaviour they show on social networks in the form of comments or tweets [2]. Nowadays, personality recognition from social networking sites has attracted the attention of researchers for developing automatic personality recognition systems. The core philosophy of such applications is based on different personality models, such as the Big Five Factor Personality Model [3], the Myers-Briggs Type Indicator (MBTI) [4], and the DiSC Assessment [5]. The existing works on personality recognition from social media text are based on supervised machine learning techniques applied on benchmark datasets [6], [7], [8]. However, the major issue associated with the aforementioned studies is the skewness of the datasets, i.e. the presence of imbalanced classes with respect to different personality traits. This issue mainly contributes to the performance degradation of personality recognition systems. To address the aforementioned issue, different techniques are available for minimizing the skewness of a dataset, such as over-sampling, under-sampling and hybrid sampling [9]. Such techniques, when applied on imbalanced datasets in different domains, have shown promising performance in terms of improved accuracy, recall, precision, and F1-score [10].
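As a concrete illustration of the pre-processing and TF-IDF feature-selection steps listed in the abstract above, the sketch below tokenizes posts, removes stop words, applies word stemming and builds a TF-IDF matrix. It is a minimal sketch using NLTK and scikit-learn; the sample posts and parameter values are illustrative assumptions rather than the authors' exact settings.

```python
# Minimal sketch: tokenization, stop-word removal, stemming and TF-IDF features.
# Assumes nltk and scikit-learn are installed; NLTK punkt/stopwords data are downloaded below.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(post: str) -> str:
    """Lower-case, tokenize, drop stop words and non-alphabetic tokens, then stem."""
    tokens = word_tokenize(post.lower())
    kept = [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]
    return " ".join(kept)

# Illustrative posts; in the actual study each user contributes their last 50 posts.
posts = [
    "I love spending quiet evenings reading about new ideas.",
    "Big parties with lots of people give me so much energy!",
]
cleaned = [preprocess(p) for p in posts]

# TF-IDF feature selection over the cleaned posts.
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(cleaned)
print(X.shape, len(vectorizer.get_feature_names_out()))
```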
RQ.3: What is the efficiency of the proposed technique with respect to other baseline methods? C. Aims and Objective 1) Aim: The aim of this work is to classify the personality traits of a user from input text by applying a supervised machine learning technique, namely the XGBoost classifier, on the benchmark MBTI personality dataset. This work is an enhancement of the prior work performed by [6]. 2) Objectives a) Applying a machine learning technique, namely the XGBoost classifier, for personality trait recognition from input text. b) Applying a re-sampling technique on the imbalanced classes of personality traits to improve the performance of the proposed system. c) Evaluating the performance of the proposed model with respect to other machine learning techniques and baseline methods. D. Significance of Study Personality is a distinctive way of thinking, behaving and feeling. Personality plays a key role in someone's orientation toward various things like books, social media sites, music and movies [12]. The proposed work on personality recognition is an enhancement of the work performed by [6]. The proposed work is significant for the following reasons: (i) the performance of the existing study is not efficient due to skewness, which is addressed in this proposed work by applying a re-sampling technique on the imbalanced dataset; (ii) the proposed work also provides a basis for developing state-of-the-art applications for personality recognition, which could assist organizations in recruiting and selecting appropriate personnel and in improving their business by taking into account the personality and preferences of their customers. II. RELATED WORK A review of the literature pertaining to personality recognition from text is presented in this section. The literature of this work is categorized into four sub-groups, namely, i) supervised learning techniques, ii) unsupervised machine learning techniques, iii) semi-supervised machine learning techniques and iv) deep learning techniques. A. Supervised Learning Technique Supervised learning algorithms are trained on labelled data, in which the output variable (dependent variable) to be predicted is known for each instance described by the independent variables. The studies given below are based on supervised learning methodologies. A system is proposed by [6] for analysing the social media posts/tweets of a person and producing a personality profile accordingly. The work mainly emphasizes data collection, pre-processing methods and machine learning algorithms for prediction. The feature vectors are constructed using different feature selection techniques such as EmoLex, LIWC and TF-IDF. The obtained feature vectors are used during training and testing of different kinds of machine learning algorithms, such as neural networks, Naïve Bayes and SVM. However, SVM with all feature vectors achieved the best accuracy across all dimensions of the Myers-Briggs Type Indicator (MBTI) types. Further enhancement can be made by incorporating more state-of-the-art techniques. An MBTI dataset, derived from the Reddit social media network, was introduced in [7] for personality prediction. A rich set of features is extracted, and benchmark models are evaluated for personality prediction. The classification is performed using SVM, Logistic Regression, and a multilayer perceptron (MLP). The classifier using all linguistic features together performed best across all MBTI dimensions. However, further experimentation on more models is required to achieve more robust results.
The major limitation is that the number of words in the posts is very large, which sometimes prevents accurate personality prediction. To predict personality from tweets, [8] proposed a model using 1.2 million tweets annotated with MBTI type for personality and gender prediction. A logistic regression model is used to predict the four MBTI dimensions. Binary word n-grams are used for feature selection. This work showed improvement in the I-E and T-F dimensions but no improvement in S-N, and even a slight drop for P-J. In terms of personality prediction, linguistic features produce far better results. Incorporating an enhanced dataset may improve performance. A system was developed to recognize user personality using the Big Five Factor personality model from tweets posted in English and Indonesian [13]. Different classifiers are applied on the MyPersonality dataset. The accuracy achieved by Naive Bayes (NB) is 60%, which is better than the accuracy of KNN (58%) and SVM (59%). Although this work did not improve on the accuracy of previous research (61%), it achieved the goal of predicting personality from Twitter-based messages. Using an extended dataset and implementing a semantic approach may improve the results. A personality assessment/classification system based on the Big5 model was proposed for Bahasa Indonesia tweets [14]. Assessment is made on the user's word choice. The machine learning classifiers, namely SVM and XGBoost, are implemented with different parameters, such as the existence of n-gram minimum and n-gram weighted settings, removal of stop words and use of LDA. XGBoost performed better than SVM under the same data and the same parameter settings. The limited dataset of only 359 instances for training and testing is the main drawback of their work. Automatic identification of the Big Five Factor Personality Model was proposed by [15] using individual status text from Facebook. Various techniques, such as Multinomial NB, Logistic Regression (LR) and SMO for SVM, are used for personality classification. However, MNB outperformed the other methods. Incorporating feature selection and more classifiers may enhance the performance. Personality profiling based on different social networks such as Twitter, Instagram and Foursquare was performed by [16]. A multisource large dataset, namely NUS-MSS, is utilized for three different geographical regions. The data are evaluated for average accuracy using different machine learning classifiers. When the different data sources are concatenated into one feature vector, the classification performance is improved by more than 17%. The available dataset may be enriched from multiple social networking sites by users' cross-posting for better performance. The performance of different ML classifiers is analysed to assess students' personality based on their Twitter profiles by considering only the Extraversion trait of the Big 5 [17]. Different machine learning algorithms, such as Naïve Bayes, Simple Logistic, SMO, JRip, OneR, ZeroR, J48, Random Forest, Random Tree, and AdaBoostM1, are applied on the WEKA platform. The efficiency of the classifiers is evaluated in terms of correctly classified instances, time taken, and F-measures. The OneR rules classifier shows the best performance among all, producing 84% classification accuracy. In the future, all dimensions of the Big5 can be considered for evaluation to get more insight. The performance of different classifiers is evaluated by [18] using the MBTI model to predict users' personality from online text.
Various ML classifiers, namely Naïve Bayes, SVM, LR and Random Forest, are used for estimation. Logistic Regression achieved 66.5% accuracy over all MBTI types, which is further improved by parameter tuning. Results may be further improved by using the XGBoost algorithm, which has been the winner of many Kaggle and other data science competitions. Oversampling and undersampling techniques are compared by [11] for imbalanced datasets. Classification performs poorly when applied on imbalanced classes of a dataset. There are three approaches (data level, algorithmic level and hybrid) that are widely used for solving the class imbalance problem. The data-level method is experimented with in that study, and the result of the oversampling method (SMOTE) is better than that of the under-sampling technique (RUS). More re-sampling techniques need to be evaluated in the future. The authors in [19] briefly discussed and explained the early research on the classification of personality from text, carried out on various social networking sites, such as Twitter, Blogger, Facebook and YouTube, on the available datasets. The methods, features, tools and results are also evaluated. Unavailability of datasets, lack of identification of features in certain languages, and difficulty in identifying the requisite pre-processing methods are the issues to be tackled. These issues can be addressed by developing methods for non-English languages, introducing more accurate machine learning algorithms, implementing other personality models, and including more feature selection for pre-processing of data. Twitter users' profiles are used for accurate classification of their personality traits using the Big5 model [20]. A total of 50 subjects with 2000 tweets per user are assessed for prediction. User content is analysed using two psycholinguistic tools, namely LIWC and MRC. The performance evaluation is carried out using two regression models, namely ZeroR and GP. Results for the "openness" and "agreeableness" traits are similar to those of previous work, but less efficient results are shown for other traits. An extended dataset may improve the results. A connection has been established between Twitter users and their personality traits based on the Big5 model [21]. Due to the inaccessibility of original tweets, users' personality is predicted from three parameters that are publicly available in their profiles, namely (i) followers, (ii) following, and (iii) listed count. Regression analysis is performed using M5 rules with 10-fold cross validation. The RMSE of predicted values against observed values is also measured. Results show that, based on the three counts, users' personality can be predicted accurately. TwiSTy, a novel corpus of tweets for gender and personality prediction, has been presented by [22] using the MBTI type indicator. It covers six languages, namely Dutch, German, French, Italian, Portuguese and Spanish. Linear SVM is used as the classifier, and results are also tested with Logistic Regression. Binary features for characters and words (n-grams) are utilized. The model performed well for gender prediction. For personality prediction, it outperformed other techniques for two dimensions, I-E and T-F, but for S-N and J-P this model did not show improvement. In the future, the model can be trained further to predict all four dimensions of MBTI efficiently. Table I summarizes the above-cited studies for classification and prediction of users' personality using supervised machine learning strategies. B.
Unsupervised Learning Approach Unsupervised learning classifiers use only unlabeled training data, without any corresponding output variables to be predicted or estimated. The Twitter data was annotated by [23] for 12 different linguistic features, and a correlation was established between users' personality and writing style across users from different regions and different devices. Users with more than one tweet are considered for evaluation. It was observed that Twitter users are secure, unbiased and introverted compared with users posting from the iPhone, BlackBerry, UberSocial and Facebook platforms. More Twitter data for classification may enhance the efficiency of the personality identification model.
Table I (excerpt), summarizing further supervised learning studies cited in the previous subsection: [16] multisource profiling with the large NUS-MSS dataset (supervised); by concatenating different data sources into one feature vector, classification performance improves by more than 17%; the dataset may be enriched from multiple social networking sites through users' cross-posting, and deep learning may improve the lower accuracy obtained with traditional classifiers. 10. Kaur and Gosain (2018) [11]: comparison of oversampling and undersampling techniques for imbalanced datasets using the C4.5 decision tree algorithm; the over-sampling method (SMOTE) performs better than the under-sampling technique (RUS); more re-sampling techniques need to be evaluated in the future. 11. Ong et al. (2017a) [19]: classification of personality from text carried out on various social networking sites; survey paper on supervised learning approaches; the best result among all was attained on Twitter with 91.9% accuracy using word frequency; unavailability of datasets and lack of feature identification in certain languages are issues to be tackled, and methods for non-English languages may need to be developed. [22] TwiSTy: in the future, the model can be trained further to predict all four dimensions of MBTI efficiently.
The purpose of the study carried out by [24] is to scrutinize group-based personality identification by utilizing an unsupervised trait learning methodology. The Adawalk technique is utilized in this work. The outcomes show that, in terms of Micro-F1 score, the performance of Adawalk is exceptional, with roughly 7% improvement for Wiki, 3% for Cora, and 8% for BlogCatalog. When utilizing the SoCE personality corpus, a 97.74% Macro-F1 score was achieved by this approach. The drawback of this work is that it depends entirely on the TF-IDF strategy; additionally, the constructed content networks are not an imitation of genuine social and interpersonal networks such as retweeting networks. A larger dataset would enhance the performance of the proposed work in the future. An unsupervised personality classification strategy was developed by [25] to examine the extent to which different personalities collaborate and behave on the social media site Twitter. Linguistic and statistical characteristics are utilized in this work and then tested on a data corpus annotated with a personality model using human judgment. The system analysis indicates that psychoneurotic users comment more than secure ones and tend to develop longer chains of interaction. An unsupervised machine learning methodology, namely K-Means, was applied by [26] to recognize website and network visitors' traits and personality. This proposed work is based on the quantifiable contents of the website.
The obtained results show that this strategy can be utilized to predict website and network visitors' personality traits more accurately. The proposed system may be enhanced in the future by adding more elements associated with websites and a greater number of websites for better performance. The author in [27] proposed a personality identification system using an unsupervised approach based on the Big-5 personality model. Different social media network sites are used for extraction and classification of users' traits. Linguistic features are exploited to build the personality model. The system predicts personality for an input text and achieved reasonable results. However, an extended annotated corpus could boost the system's performance. A model was proposed in [28] that requires eight times less data to predict an individual's Big Five personality traits. The GloVe model is used as the word embedding to represent the words from user tweets. First, the model is trained and then tested on the given tweets. Further, the data are tested on three other combinations: (i) GloVe with RR, (ii) LIWC with GP, and (iii) 3-gram with GP; the proposed model performed better, with an average correlation of 0.33 over the Big-5 traits, which is far better than the baseline method. The findings of this method are based on English Twitter data, which may be extended to other languages. Similarly, the performance of the model can be examined with a small number of tweets. Table II summarizes the above-cited studies regarding users' personality and trait identification from textual data using the unsupervised machine learning approach. C. Semi-Supervised Learning Approach Studies carried out using a combination of linguistic and lexicon features, supervised machine learning methodologies and different feature selection algorithms are known here as semi-supervised ML approaches. The following studies have utilized the semi-supervised and hybrid strategy. A multilingual predictive model was proposed by [29], which identified users' personality traits, age and gender based on their tweets. An SGD classifier with n-gram features is used for age and gender classification, while LIWC with a regressor model (ERCC) is used for personality prediction. An average accuracy of 68.5% has been achieved for recognition of user attributes in four different languages. However, author profiling can be enhanced by performing experiments in more languages. A technique was devised to detect MBTI-type personality traits from social media (Twitter) in the Bahasa Indonesia language [30]. Among 142 respondents, 97 users are selected, with an average of 2500 tweets per user. WEKA is used for building the classification and training set. Three approaches are used for prediction from the training set: i) machine learning, ii) lexicon-based, and iii) linguistic rule-driven. Among all, Naïve Bayes outperformed the compared methods in terms of accuracy and time. Its accuracy for the I/E trait is 80%, while for S/N, T/F and J/P its accuracy is 60%. The lower accuracy of the linguistic rule-driven and lexicon-based approaches is due to the limited corpus in Bahasa Indonesia. It is observed that accuracy may improve by increasing the training dataset. A technique was proposed for personality prediction from social media text using word counts [31]. It works for both the MBTI and Big5 personality models using 8 different languages. Four kinds of labelled corpora, for both Big5 and MBTI, are used for conducting the experiments. In each corpus, the 1000 most frequently used words are selected.
Prediction accuracy for "openness" trait of Big5 is higher across all corpus, while for MBTI, prediction accuracy for S/N dimension is greater than other dichotomies. Using only word count for prediction is the main drawback of the proposed system, which may be covered by introducing different features selection and ML algorithms. Detail of the above quoted studies regarding personality classification using Semi-supervised Machine Learning Approach are presented in Table III. D. Deep Learning Strategy Deep learning is a subcategory of machine learning (ML) in artificial intelligence (AI), where machines may acquire knowledge and get experience by training without user's interaction to make decisions. Based on experiences and learning from unlabeled and unstructured corpus, deep learning performs tasks repeatedly and get improvement and tweaking in results after each iteration. The studies given below are in summarized form, showing the prior work performed in Deep learning. A deep learning classifier was developed, which takes text/tweet as input and predict MBTI type of the author using MBTI dataset [32]. After applying different pre-processing techniques embedding layer is used, where all lemmatized words are mapped to form a dictionary. Different RNN layers are investigated, but LSTM performed better than GRU and simple RNN. While classifying user, its accuracy is 0.028 (.676 × .62 × .778 × .637), which is not good. The predictive efficiency of this work may be improved by increasing the number of posts per user. As the model is tested on real life example of Donald trump's 30,000 tweets, which correctly predict his actual MBTI type personality. A model proposed by [33] that takes snippet of post or text as input and classify it into different personality traits, such as (INFP, ENTP, and ISJF, etc.). Different classification methods like Softmax as baseline, SVM, Naïve Bayes, and deep learning, are implemented for performance evaluation. SVM outperformed NB and softmax with 34% train 33% test accuracy, while Deep learning model shows more improvement with 40% train and 38% test accuracy. However, the accuracy is still low as it doesn't even achieve 50 percent. Personality classification system is proposed by [34], to recognize the traits from online text using deep learning methodology. AttRCNN model was suggested for this study utilizing hierarchical approach, which is capable of learning complex and hidden semantic characteristics of user's textual contents. Results produced are very effective, proving that using deep and complex semantic features are far better than the baseline features. A deep learning model was suggested by [1] to classify personality traits using Big Five personality model based on essay dataset. Convolutional Neural Network (CNN) is used for this work to detect personality traits from input essay. Different pre-processing techniques like word n-grams, sentence, word and document level filtration and extracting different features are performed for personality traits classification. "OPN" traits achieved higher accuracy of 62.68% by using different configuration of features and among all five traits. In future, more features need to be incorporated and LSTM recurrent network may be applied for better results. Table IV represents the outline of the works regarding automatic personality recognition system using Deep learning methodology. 
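To make the RNN-based pipeline described above concrete, the following is a minimal sketch of an embedding + LSTM classifier for the four MBTI dichotomies, assuming a Keras/TensorFlow implementation and posts that have already been integer-encoded and padded; the vocabulary size, layer widths and four-sigmoid output head are illustrative assumptions, not the cited authors' exact architecture.

```python
# Minimal sketch: embedding layer followed by an LSTM, predicting the four MBTI
# dichotomies (I/E, S/N, T/F, J/P) as independent binary outputs.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed padded sequence length

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),        # maps word indices to dense vectors
    layers.LSTM(64),                          # LSTM reportedly outperformed GRU / simple RNN [32]
    layers.Dense(4, activation="sigmoid"),    # one probability per dichotomy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X_train: (n_samples, MAX_LEN) integer-encoded posts; y_train: (n_samples, 4) binary trait labels
# model.fit(X_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```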
The working procedure of this proposed system is as follows: (i) data acquisition and re-sampling, (ii) pre-processing and feature selection, (iii) text-based personality classification using the MBTI model, (iv) applying XGBoost for personality classification, (v) comparing the efficiency of XGBoost with other classifiers, (vi) applying different evaluation metrics. A. Dataset Collection and Re-sampling The publicly available benchmark dataset is acquired from Kaggle [6]. This dataset comprises 8675 rows, where every row represents a unique user. Each user's last 50 social media posts are included along with that user's MBTI personality type (e.g. ENTP, ISFJ). As a result, a labelled dataset comprising a total of 422845 records is obtained, in the form of excerpts of text along with the user's MBTI type. Table V describes the details of the acquired dataset. 1) Re-Sampling: As pointed out by [6], the original dataset is heavily skewed and unevenly distributed among all four dichotomies, as follows: I/E trait: I = 6664 and E = 1996; S/N trait: S = 7466 and N = 1194; T/F trait: T = 4685 and F = 3975; J/P trait: J = 5231 and P = 3429. Whenever an algorithm is applied to a skewed and unbalanced dataset, the outcome always diverges toward the sizeable class and the smaller classes are bypassed for prediction. This drawback of classification is known as the class imbalance problem (CIP) [11]. Therefore, this sparsity is balanced by a re-sampling technique [11]. As mentioned earlier, two traits are highly imbalanced, so a data-level re-sampling approach for class balancing is used [9]. This bridged the gap between the traits of each dichotomy and resulted in efficient and predictable performance of the proposed system. 2) Data Level Re-Sampling Approach: Data manipulation sampling approaches focus on rescaling the training datasets to balance all class instances. Two popular techniques of class resizing are over-sampling and under-sampling; at the data level, these are the most widely used methodologies. Oversampling is the process of expanding the number of instances in the minority class. The simplest form is random oversampling, which duplicates minority instances to improve the imbalance proportion. This duplication of minority class instances improved the performance of the machine learning classifiers for efficient personality trait prediction [11]. The under-sampling approach is used to level the class distribution by randomly removing or deleting majority class instances. This process is continued until the majority and minority class occurrences are balanced. As illustrated in Fig. 2, data-level sampling-based methodologies, including over-sampling and under-sampling, have received exceptional consideration to counter the impact of imbalanced datasets [35]. 3) Training and Testing Data: In this proposed system, the data is divided into training, testing and validation datasets. Mostly two datasets are required: one for building the model, while the other is needed to measure its performance. Here, training and validation are used for building the model, while the testing step is used to measure the performance of the proposed model [36]. Table VI shows sample tweets from the training dataset, while Table VII represents a sample of the test data tweets.
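To illustrate the random over-sampling and the training/validation/testing split described above, a minimal sketch using pandas and scikit-learn is given below; the file name, the derived I_E column and the 70/15/15 split ratios are assumptions for illustration only.

```python
# Sketch: load the mbti_kaggle data, randomly over-sample a minority trait class,
# and split into training / validation / testing sets.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("mbti_kaggle.csv")                     # columns: 'type', 'posts' (file name assumed)
df["I_E"] = (df["type"].str[0] == "I").astype(int)      # example binary trait column

def random_oversample(data, label_col, seed=0):
    """Duplicate minority-class rows until both classes have equal counts."""
    counts = data[label_col].value_counts()
    minority, majority = counts.idxmin(), counts.idxmax()
    extra = data[data[label_col] == minority].sample(
        n=counts[majority] - counts[minority], replace=True, random_state=seed)
    return pd.concat([data, extra]).sample(frac=1.0, random_state=seed)   # shuffle

balanced = random_oversample(df, "I_E")

# 70% training, 15% validation, 15% testing (ratios assumed).
train, temp = train_test_split(balanced, test_size=0.30, stratify=balanced["I_E"], random_state=0)
valid, test = train_test_split(temp, test_size=0.50, stratify=temp["I_E"], random_state=0)
```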
When the dataset is divided into training, validation and testing data, only a portion of the dataset is used for training, and it is clear that training on fewer data instances means the model will not behave as well and will overestimate the testing error rate of the algorithm relative to fitting the whole dataset. To address this problem, a cross-validation technique is used. 4) Cross-validation: It is a statistical methodology that splits the data into subgroups, trains on one subset of the data and uses the other subset to assess the model's validity. Cross-validation comprises the following steps:
• Split the dataset into two subsets.
• Reserve one subset of data.
• Train the model on the other subset of data.
• Use the reserved subset of data for validation (test) purposes; if the model performs well on the validation set, this shows the effectiveness of the proposed model.
Cross-validation is utilized for estimating the algorithm's predictive performance. a) K-fold cross-validation: This strategy involves randomly partitioning the data into k subsets of almost even size. The initial fold is reserved for testing and all the remaining k-1 subsets of data are used for training the model. This process is continued until each of the k folds has been used as the testing set. The procedure is thus repeated k times; therefore, the Mean Squared Error is also obtained k times (from MSE-1 to MSE-k). The k-fold cross-validation error is then calculated by taking the mean of the Mean Squared Error over the k folds. Fig. 3 explains the working procedure of k-fold cross-validation. B. Preprocessing and Feature Selection Different pre-processing and feature selection techniques are exploited to better extract personality information from the text. These techniques include tokenization, removal of URLs, user mentions and hashtags, word stemming, stop word elimination and feature selection using TF-IDF [28] and [32]. 1) Preprocessing: The following preprocessing steps, adapted from the work of [37], are applied to the mbti_kaggle dataset before classification. a) Tokenization: Tokenization is the procedure in which text is divided into small fractions (tokens). For this purpose, the Python-based NLTK tokenizer is utilized. b) Dropping Stop Words: Stop words do not reveal any idea or information. A Python script is executed to remove these words using a pre-defined word inventory. For instance, "the", "is", "an" and so on are called stop words. c) Word stemming: This is a text normalization technique. Word stemming is used to reduce inflected words to their root form. Stem words are produced by eliminating the prefix or suffix used with root words. 2) Feature Selection: The following feature selection steps are accomplished using different machine learning classifiers. a) CountVectorizer: Machine learning algorithms cannot process text or documents directly; the text must first be converted into a matrix of numbers. The units of text produced during this conversion are called tokens. The count vector is a well-known encoding technique to make a word vector for a given document. CountVectorizer takes what is known as the Bag of Words approach: each message or document is divided into tokens and the number of times every token occurs in a message is counted. CountVectorizer performs the following tasks:
• It tokenizes the whole text document.
• It constitutes a dictionary of known words.
• It encodes new documents using the known word vocabulary.
b) Term Frequency: It represents the weight of a word, i.e. how often a word or term occurs in a document. c) Inverse Document Frequency: It is also a weighting scheme, describing how common a word is across the whole document collection. d) Term Frequency Inverse Document Frequency: The TF-IDF score is useful for adjusting the weight between very regular or general words and less commonly used words. Term frequency measures the frequency of every token in a tweet; however, this frequency is balanced by the frequency of that token in the entire dataset. The TF-IDF value shows the significance of a token in a tweet with respect to the whole dataset [38]. This measure is significant because it describes the importance of a term rather than the customary frequency sum [39]. The feature engineering module pseudocode is illustrated in the following Algorithm 2. C. Text-based Personality Classification Using the MBTI Model In this proposed work, a supervised learning approach is used for personality prediction. The model takes a snippet of a post or text as input and predicts the personality traits (I-E, N-S, T-F, J-P) according to the scanned text. The Myers-Briggs Type Indicator is used for classification and prediction [4]. This model categorizes an individual into 16 different personality types based on four dimensions, namely: (i) Attitude → Extroversion vs Introversion: this dimension defines how individuals focus their energy and attention, whether they are motivated externally by other people's judgement and perception, or motivated by their inner thoughts; (ii) Information → Sensing vs iNtuition (S/N): this aspect illustrates how people perceive information; observant (S) individuals rely on their five senses and solid observation, while intuitive individuals prefer creativity over constancy and believe in their gut; (iii) Decision → Thinking vs Feeling (T/F): a person with the Thinking aspect always exhibits logical behaviour in their decisions, while Feeling individuals are empathic and give priority to emotions over logic; (iv) Tactics → Judging vs Perceiving (J/P): this dichotomy describes an individual's approach towards work, decision-making and planning. Judging individuals are highly organized in their thoughts and prefer planning over spontaneity, while Perceiving individuals have a spontaneous and instinctive nature; they keep all their options open and are good at improvising opportunities [40]. D. Working Procedure of the System for Personality Traits Prediction As depicted in Fig. 4, the proposed model is first trained by giving it both labelled data (MBTI type) and text (in the form of tweets). After training, the model is evaluated for efficiency. For better prediction, the dataset is split into three phases (training, validation and testing). The validation step reduces overfitting of the data. The mbti_kaggle dataset is available in two columns, namely (i) type and (ii) posts, where type means one of the 16 MBTI personality types, such as INTP, ENTJ and INFJ. As we are interested in MBTI traits rather than types, four new columns were added to the original dataset through Python code for the purpose of trait determination. As a result, the new modified dataset looks as given below in Table VIII.
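A minimal sketch of the preprocessing and feature-selection steps described above (trait-column derivation, tokenization, stop-word removal, stemming and TF-IDF weighting) is given below; the regular expressions, feature count and file name are illustrative assumptions rather than the exact pipeline of the proposed system.

```python
# Sketch: derive the four MBTI trait columns and build TF-IDF features.
# May require nltk.download("stopwords") beforehand.
import re
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.read_csv("mbti_kaggle.csv")                        # columns: 'type', 'posts' (file name assumed)
for col, letter in [("I_E", "I"), ("N_S", "N"), ("T_F", "T"), ("J_P", "J")]:
    df[col] = df["type"].str.contains(letter).astype(int)  # the four added binary trait columns

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def clean(text):
    text = re.sub(r"https?://\S+|@\w+|#\w+", " ", text)    # drop URLs, user mentions, hashtags
    tokens = re.findall(r"[a-zA-Z']+", text.lower())        # simple tokenization
    return " ".join(stemmer.stem(t) for t in tokens if t not in stop_words)

df["clean_posts"] = df["posts"].apply(clean)

vectorizer = TfidfVectorizer(max_features=10000)            # term counts re-weighted by IDF
X = vectorizer.fit_transform(df["clean_posts"])
```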
(Algorithm listing: classification of input text into personality traits — Personality Traits: ["I_E", "S-N", "F-T", "J-P"]; ML-Classifier: [...].) E. Applying XGBoost for Personality Classification XGBoost belongs to the family of gradient boosting algorithms. It is used to handle classification and regression problems by making a prediction/forecast from a set of weak decision trees. Although work has been performed on personality assessment using supervised machine learning approaches [13,17], here the state-of-the-art XGBoost algorithm with optimized parameters is used for MBTI personality assessment [41]. The XGBoost classifier is good at producing better accuracy compared to other machine learning algorithms [41,42]. The proposed work is the first attempt to predict personality from text using XGBoost as the classifier and MBTI as the personality model. F. Comparing the Efficiency of XGBoost with other Classifiers The overall prediction performance and efficiency of the proposed system has been examined by applying other supervised machine learning classifiers. This comparison illustrates a true picture of the performance of the proposed classifier, namely XGBoost, compared to the other machine learning algorithms and baseline methods regarding the capability of predicting personality from the input text [13]. G. Evaluation Metrics The evaluation metrics, such as accuracy, precision, recall and f-measure, describe the performance of a model. Therefore, different evaluation metrics have been used to check the overall efficiency of the predictive model. This section presents a set of results produced by the proposed system, systematically answering the raised research questions. A. Answer to RQ.1 To answer RQ1, "How to apply a supervised machine learning technique, namely the XGBoost classifier, for classifying personality traits from the input text?", the supervised machine learning technique, the XGBoost classifier, is applied to predict MBTI personality traits from excerpts of text. The fine-tuned parameter settings for XGBoost are presented in Table IX. B. Answer to RQ.2 While addressing RQ2, "How to apply a class balancing technique on the imbalanced classes of personality traits for performance improvement, and what is the efficiency of the proposed technique w.r.t. other machine learning techniques?", an imbalanced dataset is considered first. An imbalanced dataset can be defined as a distribution problem arising in classification where the number of instances in each class is not equally divided. Whenever an algorithm is applied to a skewed and unbalanced dataset, the outcome always diverges toward the sizeable class and the smaller classes are bypassed for prediction. This drawback of classification is known as the class imbalance problem [11]. Therefore, it is attempted to balance this sparsity by a re-sampling technique [11]. As two traits are highly imbalanced, the data-level re-sampling approach is used for class balancing [9]. Max-depth = 10: it represents the size (depth) of each decision tree in the model; overfitting can be controlled using this parameter. Gamma = 10: its purpose is to control complexity; it represents how much loss has to be reduced before a further split is made, and it prevents overfitting. In this section, the overall comparison of predicting personality traits is presented using all evaluation metrics to determine the performance of the different classifiers. Results are reported in Table XII. The different classifiers are applied over the same mbti_kaggle dataset, with and without the re-sampling technique.
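Continuing from the feature-extraction sketch above (df and X), the following shows how one XGBoost classifier per dichotomy could be trained and scored with the four evaluation metrics named in the text; max_depth = 10 and gamma = 10 follow the parameter description above, while the remaining settings are assumptions.

```python
# Sketch: one XGBoost model per MBTI dichotomy, with accuracy, precision, recall and F1,
# plus an optional k-fold cross-validation estimate.
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

for trait in ["I_E", "N_S", "T_F", "J_P"]:
    y = df[trait]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

    clf = XGBClassifier(max_depth=10, gamma=10, n_estimators=200,
                        learning_rate=0.1, eval_metric="logloss")
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    print(trait,
          "acc=%.4f" % accuracy_score(y_te, pred),
          "prec=%.4f" % precision_score(y_te, pred),
          "rec=%.4f" % recall_score(y_te, pred),
          "f1=%.4f" % f1_score(y_te, pred))

    # k-fold cross-validation (k = 5 assumed) as described in the methodology.
    print(trait, "5-fold CV accuracy: %.4f" % cross_val_score(clf, X, y, cv=5).mean())
```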
Results reported in Table XII depict that XGBoost obtained the highest score using all four-evaluation metrics and across all the MBTI personality dimensions, when imbalance dataset is experimented. However, Naïve Bayes and Random Forest on imbalance dataset, performed poorly. So, it is concluded from this experiment that applying classifiers on skewed data is not producing good results. On the other hand, when different classifiers are tested over resampled dataset, an improved result is obtained for all dimensions over all classifiers. www.ijacsa.thesai.org The most accurate and precise algorithm for this proposed work is XGBoost. It got excellent results for all traits using all metrics. XGBoost obtained maximum accuracy (99.92%) for S/N trait. Its results are highest for all four dimensions and across all metrics. 1) Why our Class balancing technique is better: By applying class balancing technique results for all evaluation metrics and for all four personality traits are high and better than base line work. In this dataset two dimensions I/E and S/N are highly imbalanced, therefore a class balance technique is used for better prediction performance. XGBoost classifier has proven to be very good for classification problems. The results obtained using XGBoost is very balance in respect to all personality traits C. Answer to RQ. 3 To answer RQ3: "What is the efficiency of the proposed technique with respect to other baseline methods." This proposed model is compared with two baseline methods [6,7]. Classification performed by [6] for personality prediction using same mbti_kaggle dataset by applying three classifiers namely, (i) SVM, (ii) MLP and (iii) Naïve Bayes and got accuracy upto 88.4%. Due to imbalance data the result of [6] is not up to the mark. The results show that SVM in collaboration with LIWC and TF-IDF feature vectors gave accurate prediction score for all four traits, while MLP with all features Vectors got maximum accuracy score for S/N trait (90.45%) however its result for J/P trait is lower. Naïve bays also perform well for I/E and S/N traits but its performance for T/F and J/P is very poor. The reason behind better accuracy for I/E and S/N dimensions and least performance for T/F and J/P is due to class imbalance problem. A very large dataset MBTI9k acquired from reddit is used for personality prediction [7]. The emphasis of this work is to extract features and linguistic properties of different words and then these features are used to train various machine leaning models such as Logistic Regression, SVM and MLP. Classifiers using integration of all features together (LR_all and MLP_all) obtained better results for all traits. The overall worst results using all classifiers obtained for the T/F dichotomy. The major limitation of this work is that the number of words in each post are very large, which lead to a little bit lower performance on the part of all classifiers. 1) Proposed Work: In this proposed system, the same dataset is used as experimented by [6], However re-sampling technique is applied over it, and hence obtained results in respect of all personality traits are very good, especially XGBoost achieved the best score across all dimensions and all traits as compared to previous work. It is observed that the mbti_kaggle dataset is very skewed, therefore when oversampling technique is applied the output is far better than all previous works. 
Up to 99% accuracy for the I/E and S/N traits is achieved using the XGBoost classifier, while Bharadwaj [6] obtained a maximum of 88% accuracy for the S/N trait. Similarly, for T/F and J/P the proposed work's results are promising, obtaining 94.55% accuracy for T/F and 95.53% accuracy for the J/P dimension using XGBoost, while in the previous work the MLP classifier achieved an accuracy of 54.1% for T/F and 61.8% for the J/P dimension. Therefore, it is clear that by using the resampling technique, excellent and improved results are obtained for all four dimensions. The results reported in Table XIII describe the comparison of the proposed work with the baseline method. 2) XGBoost with Outstanding Performance: XGBoost belongs to the family of gradient boosting; it is a machine learning technique used for classification and regression problems that produces a prediction from an ensemble of weak decision trees. The main reasons for using this algorithm are its accuracy, speed, efficiency, and feasibility. It is a linear model and a tree learning algorithm that performs parallel computations on a single machine. It also has extra features for performing cross-validation and computing feature importance. V. CONCLUSION AND FUTURE WORK The central theme of this study is the application of different machine learning techniques on the benchmark MBTI personality dataset, namely mbti_kaggle, to classify text into different personality traits, namely Introversion-Extroversion (I-E), iNtuition-Sensing (N-S), Feeling-Thinking (F-T) and Judging-Perceiving (J-P). The Myers-Briggs Type Indicator (MBTI) model is used for text classification and personality trait recognition [4]. After applying class balancing techniques on the imbalanced classes, different machine learning classifiers, namely KNN, Decision Tree, Random Forest, MLP, Logistic Regression (LR), SVM, XGBoost, MNB and Stochastic Gradient Descent (SGD), are experimented with to identify the personality traits. Evaluation metrics, such as accuracy, precision, recall and F-score, are used to analyze and examine the overall efficiency of the predictive model. The obtained results show that the scores achieved by all classifiers across all personality traits are good enough; however, the performance of the XGBoost classifier is outstanding. We obtained more than 99% precision and accuracy for the I/E and S/N traits, and about 95% accuracy for the T/F and J/P dimensions. However, the KNN classifier resulted in overall lower performance. A. Constraints or Limitations 1) The MBTI model is examined for personality trait classification; however, other personality models, such as the Big Five Factor (BFF) and DiSC personality assessment models, are not experimented with or investigated. 2) The textual data used in the proposed work for personality assessment comprises only the English language, and contents in other languages are not experimented with. 3) Simple over-sampling and under-sampling techniques are used to balance and level the skewness of the dataset. 4) The dataset comes from only one platform, namely the PersonalityCafe forum, which may lead to biased results. 5) All the experiments conducted in this proposed work are based on classical or traditional machine learning algorithms. 6) The textual contents classified for personality trait identification belong to only one site, Twitter, and other social networking sites are ignored. 7) Only textual data is analysed and investigated for user personality trait recognition in this proposed work.
8) Less weight is given to feature extraction in the classification of text; only the TF-IDF technique is utilized. B. Future Proposal 1) The predictive performance of the MBTI personality model needs to be compared with the Big Five Factor (BFF) model for better assessment of the traits. 2) Multilingual textual content, especially Urdu and Pashto textual data, can be examined for personality classification. 3) SMOTE (Synthetic Minority Over-sampling Technique) can be utilized as the class balancing method for more robust and reliable performance (a minimal sketch is given after this list). 4) Labelled data may need to be collected from other platforms like Reddit, using multiple benchmark datasets. 5) More experiments on personality recognition may be conducted using deep learning algorithms. 6) Other social networking sites, like Facebook posts and comments, are required to be examined for automated personality trait inference. 7) Data available in the format of images and videos on social networking sites can be experimented with for the task of personality trait identification. 8) More advanced feature selection approaches are required to be exploited for enhancement of the proposed work.
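As referenced in point 3 of the list above, a minimal sketch of SMOTE-based class balancing with the imbalanced-learn package is given below; this is a suggestion for future work and an assumption on tooling, not something used in the present study.

```python
# Sketch (future work): SMOTE interpolates synthetic minority samples in feature space
# instead of duplicating existing ones.
from imblearn.over_sampling import SMOTE

smote = SMOTE(random_state=0)
X_balanced, y_balanced = smote.fit_resample(X, df["I_E"])   # X, df as in the earlier sketches
```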
9,643.2
2020-01-01T00:00:00.000
[ "Computer Science" ]
Fibre based hyperentanglement generation for dense wavelength division multiplexing Entanglement is a key resource in quantum information science and associated emerging technologies. Photonic systems offer a large range of exploitable entanglement degrees of freedom (DOF) such as frequency, time, polarization, and spatial modes. Hyperentangled photons exploit multiple DOF simultaneously to enhance the performance of quantum information protocols. Here, we report a fully guided-wave approach for generating polarization and energy-time hyperentangled photons at telecom wavelengths. Moreover, by demultiplexing the broadband emission spectrum of the source into five standard telecom channel pairs, we demonstrate compliance with fibre network standards and improve the effective bit rate capacity of the quantum channel up to one order of magnitude. In all channel pairs, we observe a violation of a generalised Bell inequality by more than 27 standard deviations, underlining the relevance of our approach. Introduction Precise engineering and control of entanglement has led to remarkable advances in quantum information science. Photonic entanglement is advantageous over classical means for securing communication [1], solving computational problems faster [2,3], and in high-precision optical sensing [4][5][6]. It has been shown that a practical quantum advantage can be reached with systems comprising a few tens of qubits [7]. To engineer and access the resulting Hilbert space, one way is to coherently superpose \(n \gtrsim 10\) photons emitted from quantum dots [8,9] or from nonlinear wave-mixing sources [10,11]. To reduce the associated challenges on the photon generation side [12], fewer photons can be entangled over more degrees of freedom (DOF), referred to as hyperentanglement [13][14][15][16][17]. Photons are excellent candidates for carrying hyperentanglement due to a large variety of exploitable DOFs. Hyperentangled states are advantageous over their single-observable counterparts in many ways. They lead to a stronger violation of local realist theories, making them less sensitive against decoherence [15]. From the applied side, complete photonic Bell state analysis can be implemented [18], having immediate repercussions in teleportation-based quantum networking. Additionally, detection failure on one DOF does not necessarily stop networking activity, as faithful entanglement transport is still achieved over all other DOFs. Considering two DOFs, hyperentanglement quality and usability can be inferred through a generalized Bell inequality, based on two Bell operators \(\beta_{1,2}\), one for each DOF [15]. For theories admitting local elements of reality, it is \(|\langle\beta_{1,2}\rangle| \leq 2\), while quantum physics permits reaching \(2\sqrt{2}\) through sequential measurements on each individual DOF with adapted analyzers [14,19]. However, this strategy does not provide evidence for an application-relevant quantum advantage. Crucially, a faithful analysis necessitates controlling (and stabilising) all analyzers simultaneously, and showing that they do not influence each other. Provided that simultaneous measurements on all observables are possible, one can violate a generalized Bell inequality via the operator \(\beta = \beta_1 \otimes \beta_2\) [15]. Here, local realistic theories predict \(|\langle\beta\rangle| \leq 4\), and quantum physics permits reaching a twice as high value, \(|\langle\beta\rangle| = 8\). It is important to note that the only pure states saturating the generalized Bell inequality with \(|\langle\beta\rangle| = 8\) are those that violate both individual Bell inequalities maximally and simultaneously.
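For orientation, the bounds discussed above can be collected compactly; the expressions below are a standard summary of the CHSH-type limits for one and for two simultaneously measured DOFs, assuming the conventions of [15], and are not a reproduction of the paper's own equations:

\[
|\langle\beta_{1}\rangle|,\;|\langle\beta_{2}\rangle| \le 2 \;\;\text{(local realism)}, \qquad
|\langle\beta_{1,2}\rangle| \le 2\sqrt{2} \;\;\text{(quantum limit)},
\]
\[
\beta = \beta_{1}\otimes\beta_{2}: \qquad
|\langle\beta\rangle| \le 4 \;\;\text{(local realism)}, \qquad
|\langle\beta\rangle| \le \bigl(2\sqrt{2}\bigr)^{2} = 8 \;\;\text{(quantum limit)}.
\]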
Additionally, one can estimate the fidelity of the generated hyperentangled state using likelihood maximisation tomography [20]. Here, we use those criteria to benchmark our practical and fully guided-wave source of telecom-wavelength hyperentangled photon pairs. In our experimental configuration, we infer all hyperentangled observables simultaneously without replacing components on the setup. This allows us to faithfully quantify the suitability of our approach for quantum networking applications. We exploit the polarization and energy-time DOFs as they can be efficiently guided in standard fiber optical networks. To further enhance the quantum channel networking capacity, we also demonstrate compliance with telecommunication standards by analyzing hyperentanglement in five dense wavelength division multiplexed channel pairs [21]. The combination of hyperentanglement and multiplexing makes it possible to increase the channel capacity by up to one order of magnitude compared to ordinary (single-observable) entanglement distribution. This paves the way for more efficient quantum information protocols, notably in quantum communication and computation [22]. Methods As depicted in figure 1(a), our scheme is based on a fully guided-wave nonlinear Sagnac interferometer (see also [23]). Our goal is to generate a hyperentangled two-photon state of the form \(|\Psi\rangle_k \propto (|H_s H_i\rangle + |V_s V_i\rangle)_k \otimes (|E_s E_i\rangle + |L_s L_i\rangle)_k\) (equation (1), up to normalisation and phase factors). Here, H and V represent horizontal and vertical photon polarization modes, and E and L denote early and late emission times of an energy anticorrelated photon pair [24]. The index k labels the wavelength channel pair in which the pair is generated. As a pump, we use a wavelength-stabilized fiber-coupled 780 nm continuous-wave laser, which is sent through a wavelength division multiplexer (WDM1) to the nonlinear fiber Sagnac loop, in which a fiber polarizing beam-splitter (f-PBS) defines the input and output. After the f-PBS, horizontally (vertically) polarized light propagates in the clockwise (counter-clockwise) direction through polarization maintaining fibers (PMF). One of the PMFs is physically rotated by 90° such that vertically polarized 780 nm light pumps a 3.8 cm long periodically poled lithium niobate waveguide (PPLN/w) from both sides simultaneously. Inside the PPLN/w, pump photons are converted to vertically polarized signal (s) and idler (i) photon pair contributions in both directions through type-0 SPDC. As shown in figure 1(b), the photon pair emission spectrum shows a bandwidth of about 40 nm centered at the degenerate wavelength of 1560 nm. We now define that signal (idler) photons are above (below) degeneracy. After the PPLN/w, the pair contributions are coupled back into the PMF, subsequently overlapped at the f-PBS, and separated from the pump at WDM1. By precisely adjusting the polarization of the pump laser, a maximally polarization entangled Bell state is generated [23]. The paired photons are further deterministically separated as a function of their wavelengths using a standard telecom C/L-band splitter (WDM2), and sent to Alice and Bob. We project the photons' polarization state onto angles α_s and α_i, respectively, using a half-wave plate (HWP) and a PBS, followed by single-photon detectors (SPD, IDQ220, 20% efficiency, 100 ps timing resolution). Coincidence counting then allows non-local correlations to be revealed. The second DOF of our photon pairs is energy-time entanglement, which is mediated through the energy conservation of the SPDC process.
As the energy of each generated photon pair must equal the energy of one pump laser photon, the (vacuum) wavelengths of the involved photons are related by \(1/\lambda_p = 1/\lambda_s + 1/\lambda_i\). Here, the subscript p stands for the pump photon. Energy-time entanglement is revealed using unbalanced Michelson interferometers (MI) in the Franson configuration [24]. For optimal passive stability, our home-made MIs are fabricated using fused fiber beam-splitters (BS) and Faraday mirrors (FM), at which photons are reflected back to the BS [25]. Moreover, both MIs are actively stabilized using a 1560 nm reference laser and a piezoelectric fibre stretcher in the longer arm [26]. To avoid undesired single-photon interference, each MI has a travel time difference of 300 ps, which is much larger than the single-photon coherence time of about 5 ps. However, we adjust both interferometers to have identical travel time differences to within 0.03 ps, so as to observe and maximise higher-order interference for simultaneously arriving photons [21,24,26]. Postselection of simultaneously arriving photons thus results in an energy-time entangled state of the form \(|E_s E_i\rangle + e^{i(f_s+f_i)}|L_s L_i\rangle\) (up to normalisation). Note that energy-time entanglement is measured using unbalanced interferometers, whose outcomes can be thought of as early and late time-bins. Here, \(f_s\) and \(f_i\) are phase terms depending on the path length differences of the MIs, which can be precisely adjusted to arbitrary settings using the active stabilization system. As our setup provides polarization and energy-time entanglement simultaneously, the resulting overall quantum state is the tensor product of the polarization and energy-time entangled states, i.e. the state of equation (1), which covers a 16-dimensional Hilbert space. To even further increase the quantum channel capacity, we exploit standard telecom dense wavelength division multiplexing (DWDM). As shown in figure 1(b), photon pairs are created pairwise symmetrically (anticorrelated) around the degenerate wavelength of 1560 nm. We exploit this to demultiplex the spectrum into the channel pairs ITU10−33, ITU11−32, ITU12−31, ITU13−30, and ITU14−29, according to the International Telecommunication Union (ITU) standards in the 100 GHz grid [27]. Although these high-quality results underline the suitability of our scheme for distributing hyperentanglement, we have to consider that \(|\langle\beta\rangle| \approx 8\) can be reached by different mixtures of polarization/energy-time hyperentangled states (and not only by the state given in equation (1)). To this end, we perform an additional check in which we consider our source as a 'black box'. This way, we remove this ambiguity by performing maximum likelihood estimation tomography on the measured data, without making any prior assumptions on the prepared state [20]. By inferring all the density operators associated with the experimentally observed correlations, we extract that the state given in equation (1) is generated with a fidelity of 0.95 (0.86 without subtraction of detector dark counts). The uncertainty in this measurement is primarily due to limited coincidence statistics. Nevertheless, this is enough to ensure that the state has indeed reached the maximum dimensionality [28,29]. Results To further demonstrate compliance with DWDM telecom standards, we repeat the measurements in four additional channel pairs. The summary of the experimental results is shown in table 2. We observe a clear violation of the generalized Bell inequality in all channel pairs by more than 27 standard deviations, which stands as a clear validation of our approach.
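As a point of reference, the significance of violation quoted above is conventionally expressed as the distance of the measured mean value from the local-realistic bound in units of its statistical uncertainty; a minimal formulation under that standard convention (an assumption, since the paper's own definition is not reproduced in this excerpt) reads

\[
n_{\sigma} = \frac{|\langle\beta\rangle_{\mathrm{exp}}| - 4}{\Delta\langle\beta\rangle_{\mathrm{exp}}},
\]

where 4 is the local-realistic bound of the generalized inequality and \(\Delta\langle\beta\rangle_{\mathrm{exp}}\) is the standard deviation of the measured mean value.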
Discussion Although the above-demonstrated high-quality results are already sufficient to envisage quantum key distribution tasks, additional improvements could be made by further boost the quality of the source. E.g. our current MI active phase-stabilisation system achieves a stability of ±2π/40, which results in a fringe visibility reduction of 1.2%. The photon pair contribution probabilities for H H s i ñ ñ | | and V V s i ñ ñ | | , fluctuate by ±1%, thus leading to a further visibility reduction of 0.2%. This issue could be overcome by gluing the fibre ends to the PPLN/w chip. All further visibility reduction is explained by non-equilibrated loss in the short and long MI arms, non-ideal polarisation state behaviour of the used HWPs and PBSs, as well as small residual polarisationdependent loss throughout the setup. In view of maximising bit rates in quantum networking applications, it is important to note that the use of DWDM filters typically induces an extra 3 dB of loss per photon. However, as we will show now, the DWDM strategy still achieves higher bit rates. This is because the bottleneck in photonic entanglement distribution is usually the pair generation rate per channel, R, which is limited by three constraints: (1) to avoid degradation due to multi-pair events, the rate is limited to about 5% of the inverse single-photon coherence time [21] (5 ps with 5-channel DWDM, 1 ps without DWDM); (2) for the same reason, the rate must stay below about 5% of the inverse timing resolution of the detector; (3) the detected photon pair rate must stay below the detector saturation. In our particular setup, count rates are limited by detector saturatation at 20 kcps. Further, the transmission per photon from source to detector is −13 dB (−10 dB) with DWDM (without DWDM), and we use only two detectors per channel. Thus, the pair generation rate, per channel, at the source is limited to R 8 10 s . We note that for long-distance scenarios with significantly higher transmission loss, the DWDM advantage remains at the number of exploited channels divided by the square of the additional single-photon transmission loss, as the detector timing resolution will always represent the ultimate limitation. It may also be interesting to further enhance the bitrates by combining our work with recently demonstrated multi-core fibre distribution techniques [31]. Conclusion In summary, we have demonstrated high-quality wavelength division multiplexed hyperentanglement generation and analysis. A generalised Bell inequality was violated by more than 27 standard deviations in all 13 29 channel pairs, and our results were reinforced by maximum likelihood tomography. The very way that hyperentanglement is generated in our scheme allows it to be straightforwardly adapted to the needs of various experiments across all fields of quantum information science. As examples, a similar source has been used for a fundamental quantum physics experiment [32] and for the quantum-enhanced determination of fibre chromatic dispersion [23]. We showed further that our scheme can be straightforwardly applied to practical fiber-based quantum key distribution with increased bit rates compared to ordinary schemes [21]. In this perspective, it has already been shown that wavelength division multiplexed quantum key distribution is possible with only a moderate increase in resources [25]. In view of such an implementation, the presented fully guided-wave scheme further allows ultra-compact and stable design, e.g. 
including a fiber-pigtailed PPLN/w module and an integrated PBS. The performance of such protocols can be further enhanced using quantum memories capable of storing hyperentanglement [33]. The robustness of quantum networks is also increased through hyperentanglement [34]. For example, if one entanglement analyser fails, secure quantum key distribution is still possible by exploiting the other DOFs. For instance, by averaging over all polarisation settings, our net data show that we violate Bell's inequality for energy-time entanglement with \(|\langle\beta_1\rangle| = 2.69 \pm 0.01\). Similarly, when averaging over all energy-time settings, polarisation entanglement is observed with \(|\langle\beta_2\rangle| = 2.72 \pm 0.02\). We therefore believe that our approach has the potential to become a workhorse solution in a large variety of photonics applications where quantum enhancement is sought.
3,081.4
2019-08-11T00:00:00.000
[ "Physics" ]
Characterizing atherosclerotic tissues: in silico analysis of mechanical properties using intravascular ultrasound and inverse finite element methods Atherosclerosis is a prevalent cause of acute coronary syndromes that consists of lipid deposition inside the artery wall, creating an atherosclerotic plaque. Early detection may prevent the risk of plaque rupture. Nowadays, intravascular ultrasound (IVUS) is the most common medical imaging technology for atherosclerotic plaque detection. It provides an image of the section of the coronary wall and, in combination with new techniques, can estimate the displacement or strain fields. From these magnitudes and by inverse analysis, it is possible to estimate the mechanical properties of the plaque tissues and their stress distribution. In this paper, we presented a methodology based on two approaches to characterize the mechanical properties of atherosclerotic tissues. The first approach estimated the linear behavior under particular pressure. In contrast, the second technique yielded the non-linear hyperelastic material curves for the fibrotic tissues across the complete physiological pressure range. To establish and validate this method, the theoretical framework employed in silico models to simulate atherosclerotic plaques and their IVUS data. We analyzed different materials and real geometries with finite element (FE) models. After the segmentation of the fibrotic, calcification, and lipid tissues, an inverse FE analysis was performed to estimate the mechanical response of the tissues. Both approaches employed an optimization process to obtain the mechanical properties by minimizing the error between the radial strains obtained from the simulated IVUS and those achieved in each iteration. The second methodology was successfully applied to five distinct real geometries and four different fibrotic tissues, getting median R 2 of 0.97 and 0.92, respectively, when comparing the real and estimated behavior curves. In addition, the last technique reduced errors in the estimated plaque strain field by more than 20% during the optimization process, compared to the former approach. The findings enabled the estimation of the stress field over the hyperelastic plaque tissues, providing valuable insights into its risk of rupture. 
Introduction Atherosclerotic plaques in coronary arteries can trigger diverse acute syndromes, including angina or myocardial infarction (Gutstein and Fuster, 1999).Briefly, atherosclerotic plaques are the result of cholesterol deposition inside the artery walls.That leads to a lipid core surrounded by fibrotic tissue, which, in case of rupture, causes a thrombus due to the contact between lipids and blood.Vulnerable plaques are those which are prone to rupture, therefore the fibrous cap thickness (FCT) that separates the lipid core from the blood is widely used to classify the plaque into stable or vulnerable (Virmani et al., 2005).In literature, FCT smaller than 65 μm is considered to be vulnerable (Finet et al., 2004).In addition, other geometrical variables are usually considered to determine the risk of rupture, such as the lipid core area or the degree of stenosis (Cilla et al., 2012;Corti et al., 2022).However, as the plaque rupture is the mechanical failure of the fibrotic tissue, the mechanical properties of the plaque tissues also play a key role in the vulnerability (Ohayon et al., 2008;Akyildiz et al., 2016;Gómez et al., 2019).It has been demonstrated that peak stresses on the fibrotic tissue and stress distributions are correlated with the risk of rupture and its location (Ohayon et al., 2005;Versluis et al., 2006).The stress state of the arterial wall could be only accurately calculated by knowing the mechanical behavior of the tissues.The clinical detection and characterization of atherosclerotic plaques remain a challenge for early diagnosis.Nowadays, Intravascular Ultrasound (IVUS) images are one of the most common imaging techniques for the diagnosis of atherosclerotic plaques in coronary arteries. The mechanical characterization of atherosclerotic tissues is highly dependent on previous tissue segmentation that could be performed manually on IVUS images due to the different echo reflectivity characteristics of the tissues (Olender et al., 2020), using virtual histologies (Kubo et al., 2011) or new methodologies based on machine learning (Sofian et al., 2019;Du et al., 2022).Some studies employed an optimization process to simultaneously segment and obtain the elasticity map of the arterial wall (Le Floc'h et al., 2009;Le Floc'h et al., 2012;Tacheau et al., 2016).In a different approach, Narayanan et al. 
(2021) achieved segmentation using deep learning techniques on OCT images.These approaches showed how important it is to obtain accurate segmentation results.The mechanical characterization of the properties usually involves three steps.It commonly begins with the acquisition of, at least, two clinical images (base and target shapes), normally in systolic and diastolic pressure (Liu et al., 2019), and then the relative displacements or deformation between them are computed (Le Floc'h et al., 2009;Tacheau et al., 2016;Torun et al., 2022).Secondly, the segmentation of the tissues is performed by using Magnetic Resonance Imaging (MRI), ex-vivo testing and histologies (Akyildiz et al., 2016;Torun et al., 2022), optical coherence tomography (OCT) (Narayanan et al., 2021) or segmentation based on mechanical properties (Le Floc'h et al., 2009;Nayak et al., 2017).Thirdly, the last step consists of estimating the mechanical properties by means of an optimization process, where the displacements/strains estimated in the first step are compared with those computed by an inverse finite element analysis (Le Floc'h et al., 2009;Torun et al., 2022).Other approaches tried to match meshes between the base and the target shapes (Liu et al., 2019) or were based on micromorphological information, like the interfaces of the plaque tissues, to recover the material behavior (Narayanan et al., 2021), and others used the virtual fields method to obtain the material parameters (Avril et al., 2004;Avril et al., 2010).The optimization algorithm plays a key role in determining the mechanical properties, and the choice of algorithm depends on the type of problem we want to solve.The computational cost and the complexity of the process vary greatly depending on the application.To obtain the linear elastic properties of the tissues, a gradient-based optimization procedure could provide robust results (Le Floc'h et al., 2009;Le Floc'h et al., 2010;Tacheau et al., 2016;Porée et al., 2017).However, for more complex material properties, this type of algorithm could become stuck in local minima.Genetic algorithms, such as the Nondominated Sorting Genetic Algorithm used by Narayanan et al. (2021), select an initial population of parameters, and then propagate the population over several generations.This kind of algorithm allows the evaluation of a large number of material parameters, but it requires a lot of time and computational cost.Nowadays, more complex new optimization methods have emerged that enable the evaluation of complex material models to be evaluated by using machine learning methods, such as the Bayesian optimization (Torun and Swaminathan, 2019;Torun et al., 2022).A very different approach, like the principal component analysis optimization used by Liu et al. 
(2019), permits optimization times of 1-2 h by partitioning the possible stress-stretch curves using a dimensional reduction technique (Liu et al., 2018;Liu et al., 2019).As part of the optimization process, several studies used arterial images of two pressure steps within a pressure increment of 5 mmHg between them (Le Floc'h et al., 2009;Nayak et al., 2017).Despite the hyperelastic behavior of the arterial tissues, this procedure allows the application of small deformation theory to estimate the linear elastic properties of the tissues.In these cases, the estimated Young's modulus (Le Floc'h et al., 2009;Le Floc'h et al., 2010;Le Floc'h et al., 2012;Nayak et al., 2017) or orthotropic modulus (Gómez et al., 2019) refers to the associated relative stiffness at that pressure.Akyildiz et al. (2016) proposed a framework to describe the mechanical properties of atherosclerotic tissues from ex-vivo testing images.They estimated the Neo Hookean material parameters for different pressure increments, showing a correlation between increased pressure and increased stiffness.This trend corresponded to the hyperelastic behavior of arterial tissues, with the stress-stretch curve exhibiting greater stiffness at higher loads.In spite of the methodology providing a hyperelastic behavior of the tissues, Neo Hookean parameter values exhibited variations with changes in pressure and failed to describe the high non-linear behavior of the atherosclerotic tissues.To accurately predict the hyperelastic mechanical behavior of tissues, a non-linear analysis using unpressurized geometry should be conducted.IVUS images are taken at a certain pressure, and if these images are assumed to be in an unpressurized configuration, incorrect strain and stress distributions would be obtained.To account for unpressurized geometries, some studies utilized histologies (Akyildiz et al., 2016), ex-vivo MRI (Torun et al., 2022) or assumed the first clinical image as stress-free geometry (Narayanan et al., 2021) to obtain the hyperelastic parameters of hyperelastic multi-parameter materials on atherosclerotic carotid arteries. 
In this article, we present a theoretical framework to estimate the non-linear mechanical properties of atherosclerotic plaques in coronary arteries based on clinical images.We previously proposed a method based on two consecutive images taken by IVUS for segmenting the different atherosclerotic tissues (Latorre et al., 2022).In addition, in that contribution, we also defined the strategy to simulate the IVUS data from finite element (FE) models.That segmentation enabled us to describe geometrical measures related to plaque vulnerability, such as the FCT or the lipid core area.After the image segmentation, in this paper, we propose to use an inverse FE analysis in order to obtain the mechanical properties of the segmented materials.Since it is an in silico study, all the IVUS data are simulated using FE models with some noise over the strain distribution.We introduce two different approaches for estimating the mechanical properties of atherosclerotic tissues.Both of them use the information from two different pressure steps to collect the relative radial strains.In the first method, we determine the linear elastic properties of the tissues through a simple optimization process using those radial strains (Le Floc'h et al., 2009;Bouvier et al., 2013;Tacheau et al., 2016).However, it must be said that this approach only provides the relative stiffness of the tissues at a certain blood pressure.Then, in the second approximation, we implement a process to estimate the non-linear properties of the atherosclerotic tissues.The arterial behavior exhibits high non-linearity, therefore, we include a Pull-Back algorithm to estimate the unpressurized geometry inside the optimization process.This implementation enables us to obtain the hyperelastic properties of plaque materials and an estimated zero-pressure (ZP) geometry.It is worth highlighting that these variables are critical to a proper determination of the stresses on the plaque.At the end of this optimization process, we could evaluate the stress state of the arterial tissue at physiological pressures and evaluate the risk of rupture. 
Materials and methods The appearance of atherosclerotic tissues on IVUS images varied due to their different echo reflectivity characteristics (Olender et al., 2020).While it was feasible to differentiate calcifications and softer inclusions like lipids through visual inspection, it was not possible to obtain a proper segmentation or estimate mechanical behavior.The aim of this paper was to determine the mechanical properties of atherosclerotic tissues.For this purpose, we compared two different methods for determining the mechanical properties as linear elastic or non-linear hyperelastic.The first one estimated the linear elastic properties of the tissues by applying incremental pressure.This resulted in a measurement of the relative stiffness of the tissues at a specific pressure.While this method allowed for the quantification of the relative modulus of elasticity of atherosclerotic tissues, it did not enable the determination of the stress state of the plaque throughout the cardiac cycle.It is a common methodology found in literature, where arterial tissues were considered with linear or orthotropic materials (Le Floc'h et al., 2009;Gómez et al., 2019).To overcome this limitation, the second approach included a Pull-Back algorithm in the inverse FE analysis in order to estimate the nonlinear properties of the tissues.The use of this algorithm enables the mechanical response of tissues to be analyzed from the unpressurized configuration. Determination of linear elastic properties We initially simulated the IVUS data from FE models using Neo Hookean materials.Then an image segmentation was performed to finally obtain the linear elastic properties through an optimization process. Simulated IVUS data The IVUS data were simulated using FE models with five real patient IVUS geometries obtained from the literature (Finet et al., 2004;Le Floc'h et al., 2009;Bouvier et al., 2013).IVUS images typically do not enable detection of the adventitia or media layers, this is why only the fibrotic tissue, the lipid core, and calcification were considered.Moreover, clinical images display only the cross-section of the arterial wall, so the FE models were 2D including the plane strain assumption.Since IVUS images were taken under specific pressure, the unpressurized geometry was previously estimated.The robustness of this approach was tested with five geometries and different material combinations of lipid and fibrotic tissues to ensure that the results were consistent regardless of the geometry or material properties.In previous work, Caballero et al. (2023) conducted a study on the elastic modulus ranges for lipid and fibrotic tissues through the literature.They collected a range of [1-100 kPa] for lipid elastic modulus and 200 kPa] for fibrotic elastic modulus.To cover the whole range of combinations, we performed a Latin hypercube sampling (LHS) to take 15 representative samples (Corti et al., 2020).In this approximation, lipidic and fibrotic tissues were modeled as quasi-incompressible Neo Hooke materials using Eq. 
1.Both Eqs 2, 3 establish a relationship between Neo Hookean and linear elastic parameters.The influence of geometry was analyzed with the material properties reflected in Table 1 (Le Floc'h et al., 2009;Babaniamansour et al., 2020), whereas the influence of the material was solely conducted using the first plaque geometry with 15 material combinations obtained from the LHS.In both analyses, calcifications were fixed and modeled as linear elastic material with 5,000 kPa of Young's modulus and ] = 0.333 (Le Floc'h et al., 2009). The analysis was performed in the commercial software Abaqus (Dassault Systems 2014), where we applied an internal pressure of 115 mmHg in the lumen, which represents the average pressure in patients with high-normal pressure and grade 1 hypertension (Ramzy, 2019).It should be noted that all the geometries were meshed using plain strain three-node linear elements (CPE3).The mesh size was set to achieve at least three elements between the lumen and the lipid core, taking into account the accuracy of the IVUS technique.The five different FE models had a number of elements of 4,375, 6,392, 7,945, 3,173, and 4,207, respectively.Rigid body motion was constrained by fixing three external contour points of the fibrotic tissue (Cilla et al., 2012).Figure 1 shows the five different FE models with their tissues.The FE models simulated the atherosclerotic plaque; to mimic the acquisition of two consecutive IVUS images we used the FE results at various pressure steps.Nowadays, there are several different approaches for estimating displacement or strain fields from two ultrasound images (Maurice et al., 2004).To replicate this, we gathered the nodal coordinates (X and Y) and displacements (u x and u y ) at pressures of 110 and 115 mmHg.Then, we computed the relative displacements between both pressure steps.This process aimed to simulate the data obtained through displacement estimators on two IVUS images with 5 mmHg between both (Porée et al., 2017).Once we had obtained the relative displacement from the small pressure increment, we were able to calculate the strains under the infinitesimal strain theory.Finally, we added a signal-to-noise ratio (SNR) of 20 dB to the strain fields in order to simulate the intrinsic noise present in IVUS data (Porée et al., 2015).We computed the strains in both Cartesian and cylindrical coordinates, as well as the principal and equivalent strains.However, to be consistent with prior studies (Le Floc'h et al., 2009;Le Floc'h et al., 2012;Tacheau et al., 2016), we mainly utilized radial strains for the segmentation and optimization process due to their lower estimation error from IVUS images compared to other deformation variables. Segmentation The segmentation process was fully described previously in Latorre et al. (2022).Briefly, the method was based on the representation of Strain Gradient Variables (SGV).This type of variable highlighted the contours of the different atherosclerotic tissues and after a Watershed-Gradient Vector Flow segmentation it was possible to extract the plaque components.The segmentation results varied depending on the chosen SGV; in this work, we segmented all the lipids and calcifications with the modulus of the gradient of the radial strains (|▽ε rr |) alone or in combination with other SGVs like (|▽ε rr | + |▽ε min |, |▽ε rr | + |▽ε vMises |. 
We also proved the accuracy of the method by measuring the FCT and the lipid areas in some geometries. The segmentation was performed through imaging techniques, so we converted the extracted tissues from images to meshes. We used the Partial Differential Equation toolbox from Matlab (version R2022b, Mathworks, MA, United States) to build the FE model from the segmented tissues. As a result of this process, we obtained a segmented FE model of the plaque at 110 mmHg.
FIGURE 1 Five real geometries considered in the analysis (Finet et al., 2004).
Mechanical characterization
After the segmentation at 110 mmHg, the estimated radial strains (ε iterated rr) were computed by imposing a lumen pressure increment of 5 mmHg and optimizing the material properties of the segmented tissues. Since the pressure increment is low, we assumed a linear elastic behavior of the materials (Le Floc'h et al., 2009; Tacheau et al., 2016). In addition, the lipid core and fibrotic tissues were considered quasi-incompressible materials with a fixed Poisson's ratio of 0.49, while calcifications were considered an isotropic material with a fixed Poisson's ratio of 0.33 (Babaniamansour et al., 2020; Corti et al., 2020). Therefore, only the elastic moduli of the fibrotic tissue, lipid, and calcification (E Fib, E Lip and E Calc) varied during the optimization. The optimization was performed by linking the Matlab and Abaqus software (Papazafeiropoulos et al., 2017) and using a pattern-search algorithm, which works with smooth and non-smooth functions as it is not based on gradient descent. This optimization algorithm partitions the space of the objective function into mesh points and starts by evaluating the function at an initial point. Then, it uses that information to generate additional points in order to find the parameters that minimize the cost function (Hooke and Jeeves, 1961). At each iteration, the algorithm polls different mesh points; if it finds a point that reduces the function, the polling mesh size is doubled; otherwise, the polling mesh size is halved. The selected poll algorithm was the Generating Set Search, which is more efficient for linearly constrained problems than the classic algorithm. The target cost function (J 0) to minimize was the Normalized-Root-Mean-Squared Error (NRMSE) between the simulated IVUS radial strains (ε IVUS rr) and those estimated by the method (ε iterated rr), shown in Eq. 4. In order to validate this method, we compared the resulting elastic moduli with the Neo Hookean parameters that were employed in the simulated data using Eqs 2, 3. This first approach is schematized in Figure 2. Finally, to evaluate the accuracy of the estimation of the Young's modulus, we introduced the Success Rate (sr) coefficient, which quantifies how closely our predicted Young's modulus (E estimated) matched the actual FE values (E real) (Eq. 5).
Determination of non-linear properties
This approach attempted to characterize the mechanical properties of the atherosclerotic tissues as hyperelastic and to provide an estimation of the unpressurized plaque geometry.
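Both approaches minimize the same NRMSE cost (Eq. 4), and the first approach is scored with the sr coefficient (Eq. 5). As a fixed point of reference for the rest of the paper, a minimal sketch of these two metrics, together with the standard quasi-incompressible relation between the Neo Hookean constant C 10 and the Young's modulus used as a stand-in for Eqs 2, 3, could look as follows; the NumPy implementation, the NRMSE normalization, and the numeric values are assumptions for illustration, not the authors' code:

```python
import numpy as np

def nrmse(strain_ivus, strain_iterated):
    """Normalized root-mean-squared error between the simulated IVUS radial
    strains and the radial strains of the current iteration (Eq. 4).
    Normalization by the range of the reference field is an assumption."""
    rmse = np.sqrt(np.mean((strain_ivus - strain_iterated) ** 2))
    return rmse / (strain_ivus.max() - strain_ivus.min())

def success_rate(e_estimated, e_real):
    """Success rate (sr, Eq. 5) based on the relative error of the Young's
    modulus; this form reproduces the example given in the Discussion
    (12 kPa estimated vs. 10 kPa actual -> sr of 80%)."""
    return 100.0 * (1.0 - abs(e_estimated - e_real) / e_real)

def c10_to_young(c10_kpa, nu=0.49):
    """Small-strain Young's modulus equivalent to a quasi-incompressible
    Neo Hookean material, using the standard relations mu = 2*C10 and
    E = 2*mu*(1 + nu); assumed here as a proxy for Eqs 2, 3."""
    mu = 2.0 * c10_kpa
    return 2.0 * mu * (1.0 + nu)

if __name__ == "__main__":
    # Illustrative values taken from the Results for the fibrotic tissue (kPa)
    e_fib_estimated, e_fib_true = 535.25, 600.0
    print(f"sr (fibrotic example): {success_rate(e_fib_estimated, e_fib_true):.1f}%")
    print(f"C10 equivalent to E = 600 kPa: {600.0 / (4.0 * (1.0 + 0.49)):.1f} kPa")
```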
Simulated data
We proposed a similar methodology as in the previous approach, with the major differences being the material model used for the FE models and the simulation of the ZP geometry. To consider a non-linear hyperelastic material, it is necessary to know the unpressurized configuration where the non-linear stress-strain curve begins. Normally, healthy arterial tissues, such as the media or adventitia, are assumed to be anisotropic. However, diseased tissues, such as fibrotic or lipid tissues, were considered isotropic exponential-type materials. While lipids were treated as a Neo Hookean material, the fibrotic tissue was modeled with the Gasser-Ogden-Holzapfel (GOH) strain energy function (Eq. 6) (Gasser et al., 2006). The parameter D was fixed to 0.005 to reproduce the quasi-incompressible behavior of the tissues. Meanwhile, C 10 represented the initial stiffness of the tissue at zero pressure, and k 1 and k 2 indicated the stiffness at higher pressures and the shape of the exponential curve, respectively. Finally, κ was set at 0.3333 to consider an isotropic fiber response. In order to understand the influence of the geometry and composition of the plaque, we analyzed five real geometries and four different fibrotic tissues (cellular, hypocellular, and two calcified) (Loree et al., 1994; Versluis et al., 2006). The material parameters for the GOH model are presented in Table 2, where the "Calcified 1" material was the calcified fibrotic tissue used in the different geometrical analyses. Moreover, atherosclerotic plaques could present calcifications as highly rigid inclusions. These calcifications were considered isotropic linear elastic materials with E = 5,000 kPa and ν = 0.333. Finally, the displacement or strain fields obtained from two consecutive IVUS images and the image segmentation process were the same as in the first approach.
Optimization methodology
We developed a novel pipeline to characterize the non-linear properties of the plaque tissues. After segmentation (see Section 2.1.2), an optimization process was conducted to obtain the hyperelastic mechanical properties of the tissues. During each iteration, the methodology involved three different steps. First, the optimization algorithm selected an initial seed to iterate the material parameters and generated the initial FE model. Then, a Pull-Back algorithm was employed to estimate the ZP geometry. Secondly, it collected the radial strain between the pressures of 110 and 115 mmHg (ε iterated rr). The objective of the algorithm was to minimize the cost function error between the simulated IVUS strains and the strains obtained during the iteration (Eq. 4). These two steps were repeated until the resulting error was less than a tolerance of 10 -4 or the optimization time exceeded 4 hours. Finally, we estimated the unpressurized geometry with the final material parameters. The entire process is described below and is illustrated in Figure 3.
• First Step. On each analysis, only three material parameters needed to be optimized for the fibrotic tissue (C 10, k 1 and k 2 for isotropic GOH materials), two for each lipid (C 10 and D 1 for Neo Hookean materials), and one for each calcification (E for linear elastic materials). To recover an approximated ZP geometry we used an adapted version of the Pull-Back algorithm developed by Raghavan et al.
(2006).This algorithm was originally created to obtain the ZP geometry of a 3D arterial aneurysm, so it was modified to recover the initial geometry of 2D atherosclerotic plaques.The algorithm created an initial FE model with the segmented pressurized geometry and added an internal pressure of 110 mmHg (the pressure at which the segmentation was performed).Then, the resulting nodal displacements ([u ZP x u ZP y ]) were collected.Unpressurized geometry was obtained by constructing a new FE model using the pressurized segmented geometry and imposing the nodal displacements multiplied by a recovery factor (K ZP ) as a boundary condition (BC), as depicted in Eq. 7. The final ZP geometry was achieved through an iterative process by varying the recovery factor and comparing the error of the coordinates between the ZP geometry candidate, after adding 110 mmHg, and the pressurized segmented geometry.Raghavan et al. (2006) considered the recovery factor as a parameter that should be optimized.However, we assumed a recovery factor K ZP = 1, which was an intermediate value, with the aim of avoiding another optimization process and also reducing the computational cost. • Second Step.Once the unpressurized geometry was obtained, we imposed an internal pressure of 115 mmHg at this unpressurized geometry to compute the iterated radial strain (ε iterated rr ).At this stage, we used the same cost function defined in Eq. 4 and the same pattern-search optimization algorithm (see Section 2.1.3).In linear elastic materials, as in the previous approach, due to its simplicity, the initial size of the polling mesh was equal to 1.However, GOH materials are more complex which could lead to a higher number of local minima in the cost function.Thus, we changed the initial mesh size of the poll algorithm to 100 to cover more different variable possibilities and also to avoid being stuck in local minima.Hence, after analyzing the initial point, the algorithm evaluated different mesh points with a distance of 100 between them, allowing many different parameter combinations to be analyzed.Furthermore, the optimization process needed a search range for each material parameter, and the pattern-search algorithm used this range to give more relevance to those variables with a greater search range.So, the ranges for fibrotic GOH parameters were C 10 [1 → 50 kPa], k 1 [5 → 100000 kPa], k 2 [1 → 100] obtained from curve fitting of literature data (Loree et al., 1994;Versluis et al., 2006), and giving more • Third Step.After completing the optimization process, we obtained the hyperelastic properties of the tissues, however, the resulting unpressurized geometry was obtained with the recovery factor K ZP fixed to 1.In this final step, we implemented the whole Pull-Back algorithm, optimizing the value of K ZP .The process was the same as described in the first step, with the difference of changing the recovery value. Figure 4 shows a scheme of the iterative Pull-Back process.This method was implemented into the five different geometries and the four distinct fibrotic materials (cellular, hypo-cellular, and two calcified). Results The proposed segmentation methodology was previously presented and validated (Latorre et al., 2022), so no comments about that have been included here.Regarding the mechanical characterization, we present the results of both approaches. 
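As a point of reference before the results, the Pull-Back recovery described in the steps above can be summarized in a short sketch. This is only an illustration under stated assumptions: the FE solve is stubbed by a toy radial expansion, and a simple grid over the recovery factor stands in for the iterative adjustment of K ZP (the fixed K ZP = 1 case used during the material optimization corresponds to evaluating a single value):

```python
import numpy as np

def fe_displacements(coords, pressure_mmHg):
    """Placeholder for the FE solve: nodal displacements of the geometry
    `coords` under the given lumen pressure. In the real pipeline this is an
    Abaqus run on the plaque model with the current material parameters; a toy
    2% radial expansion stands in here so the sketch is runnable."""
    return 0.02 * (pressure_mmHg / 110.0) * coords

def pull_back(coords_pressurized, pressure_mmHg=110.0, k_values=None):
    """Pull-Back recovery of the zero-pressure (ZP) geometry. For each
    candidate recovery factor K_ZP, the pressurized nodal displacements are
    scaled and subtracted from the segmented geometry (Eq. 7 style update);
    the candidate is re-pressurized and compared with the segmented geometry,
    and the K_ZP with the smallest coordinate error is kept."""
    if k_values is None:
        k_values = np.linspace(0.8, 1.2, 9)   # assumed search grid
    u = fe_displacements(coords_pressurized, pressure_mmHg)   # [u_x^ZP, u_y^ZP]
    best_zp, best_err = None, np.inf
    for k in k_values:
        zp_candidate = coords_pressurized - k * u
        repressurized = zp_candidate + fe_displacements(zp_candidate, pressure_mmHg)
        err = np.max(np.abs(repressurized - coords_pressurized))
        if err < best_err:
            best_zp, best_err = zp_candidate, err
    return best_zp, best_err

coords_110 = np.array([[1.5, 0.0], [0.0, 1.5], [-1.5, 0.0]])  # mm, illustrative nodes
zp, err = pull_back(coords_110)
print(zp, err)
```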
Determination of linear elastic properties
In the first approach, we obtained the relative Young's modulus of the tissues at 110-115 mmHg of blood pressure. In order to validate the approach, five geometries and fifteen material combinations of lipid-fibrotic tissues were analyzed using FE models with Neo Hookean materials. We computed the sr coefficient for the different cases.
Figure 5 shows a box plot of the sr for the different geometries and material combinations of the LHS. The FE models of the simulated data used for analyzing the influence of the geometry were constructed with the Neo Hookean parameters presented in Table 1, which correspond, using Eqs 2, 3, to a Young's modulus of 600 kPa for the fibrotic tissue and 10 kPa for the lipid (Le Floc'h et al., 2009). The resulting mean elastic moduli were 535.25 and 10.05 kPa for the fibrotic tissue and lipid core respectively, while the median sr was 88.5% for the fibrotic tissue and 79.6% for the lipid core, as shown in reddish color in Figure 5. The interquartile range of the lipid sr was 4.8 times higher than that of the fibrotic tissue.
On the other hand, analyzing the influence of the material combinations, the orange box plots of Figure 5 show a median sr value of 92.33% for fibrotic tissues and 81.62% for lipids. Once again, the interquartile range of the lipid sr was higher than that of the fibrotic tissue. For calcifications, the method detected a highly rigid material with a Young's modulus over 5,000 kPa. However, this did not affect the estimated radial strains used in the cost function. Figure 6 presents the LHS distribution of the lipid-fibrotic material combinations, with each data point marked in a different color based on its estimation results. Each circle was divided into two halves: the color of the left half of the circle represents the sr of the fibrotic tissue, whereas the color of the right half shows the sr of the lipid. The best results were obtained for combinations with higher stiffness in the fibrotic tissue. In contrast, lower fibrotic Young's modulus values resulted in a worse sr regardless of the stiffness of the lipid tissue. As the linear material calculations converged quickly, the convergence tolerance was the stopping criterion rather than the limit on the optimization time. Each optimization process took about 2-3 h, equivalent to around 180 material evaluations, depending on the complexity of the geometry.
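For illustration, the compass-style polling rule described in the Methods (the mesh is doubled after a successful poll and halved otherwise) can be sketched as follows. The cost function here is a quadratic stand-in for the FE plus NRMSE evaluation, and the bounds, starting point, and target values are illustrative assumptions; the actual study used Matlab's pattern search with the Generating Set Search poll, which is richer than this sketch:

```python
import numpy as np

def surrogate_cost(params):
    """Stand-in for one FE evaluation plus the NRMSE of Eq. 4. In the actual
    pipeline this call runs Abaqus on the segmented model and compares radial
    strain maps; a simple quadratic bowl keeps the sketch runnable."""
    target = np.array([600.0, 10.0])           # hypothetical (E_fib, E_lip) in kPa
    return float(np.sum(((params - target) / target) ** 2))

def compass_search(cost, x0, lower, upper, mesh=1.0, tol=1e-4, max_evals=180):
    """Toy compass-style pattern search: poll +/- each coordinate direction,
    double the polling mesh after an improving poll, halve it otherwise."""
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    evals = 1
    n = x.size
    while mesh > tol and evals < max_evals:
        improved = False
        for direction in np.vstack([np.eye(n), -np.eye(n)]):
            trial = np.clip(x + mesh * direction, lower, upper)
            f_trial = cost(trial)
            evals += 1
            if f_trial < fx:                   # successful poll
                x, fx, improved = trial, f_trial, True
                break
        mesh = mesh * 2.0 if improved else mesh * 0.5
    return x, fx, evals

best, err, n_evals = compass_search(
    surrogate_cost,
    x0=[300.0, 50.0],                          # illustrative initial (E_fib, E_lip)
    lower=[1.0, 1.0],
    upper=[2000.0, 100.0],
    mesh=1.0,                                  # initial poll size used for the linear case
)
print(best, err, n_evals)
```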
Prior to determining the non-linear properties, we checked the methodology of the first approach using simulated IVUS data from FE models with the GOH material model instead of Neo Hookean materials. With this test, we assessed the ability of the proposed linear methodology to reproduce the response of non-linear tissues. Unlike Neo Hookean materials, the GOH parameters do not have a direct relationship with the Young's modulus, so a direct comparison of the material parameters was not available. In the geometric analysis conducted on the five geometries, the mean value of the estimated Young's modulus was 1,512.97 kPa. On the other hand, in the material analysis, the elastic moduli were 516, 708.6, and 1,404.5 kPa for hypocellular, cellular, and calcified tissues respectively. The resulting elasticity for fibrotic tissues exceeds the range previously proposed (Caballero et al., 2023). As a consequence, the method failed to accurately estimate the stiffness of the lipid tissue, resulting in softer values than the actual ones. However, the maximum principal stress distribution achieved with this method made it possible to obtain approximate values for the stress in the plaque. The Supplementary Figures show the qualitative comparison between the actual stress and that resulting from the calculated moduli of elasticity.
Determination of non-linear properties
In the second approach, we obtained the non-linear properties by using the previous optimization method and adding a Pull-Back algorithm to recover the unpressurized geometry. Since different GOH material parameters (C 10, k 1, k 2, and κ) could result in similar curves, we compared the behavior curves rather than the parameter values. Figure 7 displays the median and the range of the resulting behavior curves under uniaxial tensile loading obtained for the different geometries. The error of the estimated curves was computed with the coefficient of determination R 2, reported in Table 3. Among the analyzed geometries, the first four reached an R 2 between 0.95 and 0.99. However, more complex geometries, like the fifth plaque, only reached an R 2 of 0.50. Despite this low coefficient, the resulting curves behaved similarly to the real ones, which means that the resulting mechanical response is similar to the actual one.
For the analysis of the different fibrotic tissues, Figure 8 presents the resulting GOH fitted curves under uniaxial tensile loading for the cases with calcified, cellular, and hypocellular fibrotic tissues for the first IVUS geometry. The resulting R 2 for the calcified tissues was 0.97 and 0.99, while for the hypo-cellular and cellular tissues it was 0.79 and 0.88.
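Because different GOH parameter sets can produce nearly identical responses, the comparison above is made on the uniaxial stress-stretch curves rather than on the parameters themselves. A minimal sketch of how such a curve and its R 2 can be computed is given below, assuming incompressibility, a single fibre family, and the fully dispersed case κ = 1/3 (so only the first invariant remains); the parameter values are illustrative and are not those of Table 2:

```python
import numpy as np

def goh_uniaxial_cauchy(stretch, c10, k1, k2, kappa=1.0 / 3.0):
    """Cauchy stress under incompressible uniaxial tension for a GOH-type
    material (Eq. 6) with one fibre family and isotropic dispersion:
    sigma = 2*(lambda^2 - 1/lambda) * dPsi/dI1."""
    lam = np.asarray(stretch, dtype=float)
    i1 = lam ** 2 + 2.0 / lam
    e_bar = np.maximum(kappa * (i1 - 3.0), 0.0)        # Macaulay bracket
    dpsi_di1 = c10 + k1 * e_bar * np.exp(k2 * e_bar ** 2) * kappa
    return 2.0 * (lam ** 2 - 1.0 / lam) * dpsi_di1

def r_squared(sigma_ref, sigma_fit):
    """Coefficient of determination between a reference and a fitted
    stress-stretch curve, as used to score the behavior curves."""
    ss_res = np.sum((sigma_ref - sigma_fit) ** 2)
    ss_tot = np.sum((sigma_ref - np.mean(sigma_ref)) ** 2)
    return 1.0 - ss_res / ss_tot

lam = np.linspace(1.0, 1.3, 50)
sigma_true = goh_uniaxial_cauchy(lam, c10=20.0, k1=500.0, k2=10.0)   # illustrative
sigma_est = goh_uniaxial_cauchy(lam, c10=25.0, k1=420.0, k2=12.0)    # illustrative
print(f"R^2 = {r_squared(sigma_true, sigma_est):.3f}")
```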
The pattern-search algorithm efficiently minimized the error of the cost function (Eq. 4) in a maximum of 4 h, which is equivalent to about 40 iterations. The final ZP geometries were estimated by using the Pull-Back algorithm and optimizing the recovery factor. Figure 9 compares the "true" ZP geometry with the estimated one for the different geometries. In most cases, the geometries are similar, except for the second geometry, where the lumen was estimated to be larger than the actual one. Figure 9 also shows some differences in the lipid and calcification contours between the unpressurized geometries used to simulate the IVUS data and the unpressurized geometry after the optimization process. The roughness of the contours comes from the error resulting from the segmentation process and the size of the mesh elements. Different SGVs provided different errors and smoother contours (Latorre et al., 2022). The size of the mesh elements was related to the resolution of the IVUS images.
FIGURE 6 LHS with the different material combinations of fibrotic-lipid elastic modulus. The left half of the circle presents the sr of the lipid core characterization and the right half the sr of the fibrotic tissue. The colors range from yellow to dark red depending on how high the success rate is.
FIGURE 7 Results of the stress-stretch curves under uniaxial tensile loading obtained with the second approach over the different geometries with the material properties of calcified 1 (Table 2).
Comparison between approaches
Finally, we compared both techniques over the same simulated IVUS data from GOH models. Figure 10A shows the radial strain obtained in the simulated IVUS data, which represents the ground truth in our cost function (Eq. 4). Then, Figure 10B presents the segmentation process, where the SGV was used to obtain the segmentation of the tissues. Both the simulated IVUS data and the segmentation were the same for the two approaches. We then defined the linear elastic or non-linear hyperelastic material parameters for each tissue, depending on the approach. At the end of each approach, Figures 10C, D present the radial strain maps obtained with the linear and non-linear approaches respectively. On the one hand, the first method used linear elastic materials to mimic a highly hyperelastic behavior, so the resulting cost (J linear) was over 55.73% for the first IVUS geometry. On the other hand, the second one obtained an error in the cost function (J non−linear) of 18.53%. All the cost values are collected in Table 3. As a summary, it can be stated that, in all cases, the second method reduced the mean error in the cost function, providing more accurate radial strain maps.
Once the material properties and the unpressurized geometry had been estimated, it was possible to calculate the stress distribution on the plaque.Figure 11 shows the maximum principal stress (σ max ) maps resulting from the linear and non-linear approaches compared to the ground truth for the fifth geometry.Although the method gave a low R 2 for fibrotic tissue in this geometry, the stress distribution in the second approach is more similar to the true one compared to the first approach.Supplementary Figure S1 compares the stresses between the ground truth and the results of both approaches for the other four geometries, and the second figure shows the stress distributions for the different fibrotic tissues.It can be seen that in all cases both approaches were able to reproduce the stress distribution and the areas of maximum values.However, the results suggested that the second method more accurately reproduced the areas of highest stress. Discussion In this study, we compared two different approaches to determine the mechanical properties of atherosclerotic tissues.In the first one, we obtained Young's modulus of the tissues at a specific blood pressure.This kind of process allowed us to compare the stiffness of different tissues and classify them according to their behavior.If we were able to capture images of the plaque over time, it could also be helpful in evaluating the evolution of the pathology and the result of some treatments.However, atherosclerotic tissues exhibited a significant non-linear behavior (Narayanan et al., 2021;Torun et al., 2022).Thus, this method provided only a relative Young's modulus that did not fully explain the behavior of the tissue.To overcome this limitation, we proposed the second approach, which consisted of the use of a Pull-Back algorithm to recover the unpressurized geometry, we tried to characterize the full non-linear behavior of atherosclerotic plaque tissues.The process yielded an estimation of the mechanical properties and ZP geometry at the same time, which enabled the determination of the stress state over the atherosclerotic plaque under physiological conditions. 
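For reference, the maximum principal stress mapped in Figure 11 follows from the in-plane stress components of the plane-strain model. A minimal sketch of that post-processing step is shown below, with illustrative numbers standing in for solver output; including the out-of-plane component is an assumption, since under plane strain it is generally non-zero:

```python
import numpy as np

def max_principal_stress_2d(sxx, syy, sxy, szz=None):
    """In-plane principal stresses from the Cauchy stress components of a 2D
    plane-strain model; the maximum principal stress is what the sigma_max
    maps display. If szz is supplied it is included in the comparison."""
    center = 0.5 * (sxx + syy)
    radius = np.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    s_max = center + radius
    if szz is not None:
        s_max = np.maximum(s_max, szz)
    return s_max

# Illustrative element-wise fields (kPa); in the real pipeline these would be
# read from the output database of the optimized FE model at 115 mmHg.
sxx = np.array([80.0, 120.0])
syy = np.array([30.0, 45.0])
sxy = np.array([10.0, -20.0])
print(max_principal_stress_2d(sxx, syy, sxy))
```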
Determination of linear elastic properties
To assess the robustness of the method, we initially simulated IVUS data using FE models with Neo Hookean materials. This procedure estimated the Young's modulus of the tissues, which was then compared with the Neo Hookean parameters using the relationships outlined in Eqs 2, 3. Although Neo Hookean models are simple for describing the behavior of the arterial tissues, this type of model has been considered sufficient for capturing the mechanical response of the plaque (Akyildiz et al., 2016; Noble et al., 2020). The geometries analyzed in this theoretical study were previously used in other in silico works, where the FE models were built directly with linear elastic properties and a lumen pressure of 1 kPa (Le Floc'h et al., 2009; Bouvier et al., 2013; Tacheau et al., 2016). These studies did not consider the hyperelastic behavior of the tissues and obtained a direct correlation between the results and the Young's modulus used in their FE models. Our approach successfully characterized lipid and fibrotic tissues in different geometries and material combinations, with similar results as those presented in the literature (Le Floc'h et al., 2009; Bouvier et al., 2013).
TABLE 3 note: Five different geometries and four different fibrotic tissues were analyzed. The first data row represents the R 2 between the real behavior curve and the resulting curve obtained with the second methodology. The last two rows show the cost function value between ε IVUS rr and ε iterated rr using the first approach (J linear) and the second one (J non−linear).
FIGURE 8 Results of the stress-stretch curves under uniaxial tensile loading obtained with the second approach over the first geometry with the calcified 1 fibrotic tissue (A), calcified 2 tissue (B), cellular tissue (C) and hypocellular tissue (D).
Results suggested that the sr was higher for the fibrotic than for the lipidic tissues. This was because the sr depended on the relative error of the Young's modulus, so even if the estimated value for the lipid was 12 kPa instead of the actual value of 10 kPa, it could still result in an sr of 80%. Calcifications were estimated as highly stiff solids, two orders of magnitude above the other tissues. However, the estimated values were far from the actual ones. Similar outcomes were reported by Le Floc'h et al. (2009) and Tacheau et al. (2016), who successfully identified calcium inclusions but failed to accurately estimate their Young's modulus due to the small strain amplitudes. Nevertheless, the differences between considering the actual or the estimated Young's modulus did not affect the radial strain maps.
In addition to the geometric influence, we also analyzed the impact of the materials on the mechanical characterization. It should be noted that the material combinations between lipid and fibrotic tissues played a key role in the Young's modulus estimation. For atherosclerotic plaques with softer fibrotic tissues, the methodology yielded a worse mechanical characterization and, in those cases, the stiffness of the lipid seemed to have no influence. The segmentation process, which was based on SGV, was also found to be slightly affected by the material combination (Latorre et al., 2022). Although cases with a smaller gradient between the stiffness of the lipid and fibrotic tissues were more challenging for segmentation (Latorre et al., 2022), the segmentation was performed properly in all cases. It is worth noting that the cases that were more challenging for segmentation were not the same as those with the worse sr.
Figure 5 shows that the mechanical characterization was more dependent on the geometry than on the atherosclerotic materials. The optimization procedure was conducted by using a pattern-search algorithm instead of a gradient-based method, such as the fmincon algorithm used in previous studies (Le Floc'h et al., 2009). The newly chosen algorithm provided faster results and showed less dependence on the initial point of the optimization process.
After the validation with Neo Hookean models, we applied the methodology to more realistic FE models with fibrotic tissues modeled as a GOH material in order to reproduce real arterial tissue behavior. In these cases, it was not possible to directly compare the estimated Young's modulus with the GOH material parameters. However, the stiffness values were found to be over the limits reported in the literature (Caballero et al., 2023). Due to the high Young's modulus estimation of fibrotic tissues, lipids appeared to be softer than their actual stiffness values. This overestimation of the fibrotic tissue stiffness was the result of trying to describe a highly non-linear hyperelastic material with a linear elastic model. As can be seen in Figure 11, and in the Supplementary Figures, the maximum principal stress field obtained with linear properties had some similarities with the true field. However, the properties obtained are unrealistic, and the stresses are only close to the true ones at the study pressure and cannot be generalized to other physiological pressures. This was because the estimated Young's modulus was obtained as the slope of the straight line secant to the real curve. As a result, the method provided a relative stiffness value at a certain blood pressure, which could assist in determining the nature of the tissues but would not provide their actual behavior. We applied the process at different pressure loads (80-85, 110-115, and 135-140 mmHg) and observed an increase in the relative stiffness at higher pressures, although these results were not included in the current paper. Akyildiz et al. (2016) used a similar process to obtain the stiffness of the plaque tissues at systolic and diastolic pressures, finding the same outcome that stiffness increases over the cardiac cycle.
FIGURE 9 Comparison between the true unpressurized geometries (A) with the estimated ZP geometries (B). Fibrotic tissues are represented in reddish color, lipids in orange, and calcifications in gray.
Determination of non-linear properties
In this method, due to the high non-linearity of the problem, a Pull-Back algorithm was included to compute the ZP geometry in order to obtain a better estimation of the hyperelastic material parameters. A correct reference geometry was important not only for the stress distribution but also for the estimated vessel diameter (Alastrué et al., 2008). The first approach used a reference geometry of the
pressurized state at 110 mmHg, assuming a linear elastic behavior of the tissues. Thus, this pressurized reference could lead to the acquisition of Young's moduli (Le Floc'h et al., 2009; Bouvier et al., 2013; Tacheau et al., 2016) or orthotropic linear properties (Gómez et al., 2019). However, this assumption was not valid for estimating non-linear materials, such as soft tissues. In order to consider the unpressurized geometry, some studies obtained the Yeoh material parameters by taking ex-vivo images or tests of the atherosclerotic plaque (Akyildiz et al., 2016; Torun et al., 2022) or used the pressurized geometry at low pressures as the ZP geometry (Narayanan et al., 2021), which could lead to a more rigid characterization. In the present paper we implemented a modified version of the Pull-Back algorithm defined by Raghavan et al. (2006) for aneurysms to estimate an initial unpressurized geometry, avoiding the iterative process by fixing the recovery factor K ZP = 1. Although the estimated initial geometry was not entirely accurate, it was continually updated with each material evaluation. At the end of the iterative process, we obtained the hyperelastic material parameters for the atherosclerotic tissues. Subsequently, a more accurate initial geometry was obtained by optimization of the Pull-Back algorithm, fixing the mechanical properties without constraining the recovery factor. For both approaches, we used an i7-10700K CPU with 8 cores running at 3.79 GHz and 64 GB of RAM. While the first approach took around 2 h to accomplish around 180 evaluations, the second approach needed twice as much time to complete about 40 iterations. This was due to the complexity of the second approach, which included the Pull-Back step, and to the convergence velocity of the non-linear tissues. This new technique was successfully applied to five different geometries and four different fibrotic materials (cellular, hypocellular, and two calcified). Results showed that different GOH material parameters lead to similar curves, and in all analyzed cases the fibrotic tissues were correctly characterized. The resulting R 2 was above 0.95, showing behavior curves similar to the real ones, except for complex geometries or hypo-cellular tissues. Complex geometries, like the fifth IVUS, had a worse estimation of the curve, due to the presence of two lipids and one calcification that shielded the strain maps in those regions. Moreover, other fibrotic tissues were more challenging to estimate during the optimization process. We analyzed many different initial points for the pattern-search algorithm with similar outcomes, and the results suggested that fibrotic materials were estimated to be slightly stiffer than the actual ones. This was the result of assuming the recovery factor fixed at K ZP = 1 during the optimization. The lipid Neo Hookean parameters were obtained with a lot of variation with respect to the actual values, especially in complex geometries, where the estimated values were close to the initial point of the optimization. For softer fibrotic tissues, such as cellular and hypocellular, the mechanical properties of the lipid played a more important role in estimating the properties of the fibrotic tissue. The stiffness of calcifications was determined with a similar level of error as in the first approach. Therefore, both approaches were consistent with results reported in the literature, indicating that the exact Young's modulus was difficult to assess due to the small strain variation over a rigid solid (Le Floc'h et
al., 2009;Tacheau et al., 2016), as well as the high stiffness of calcifications compared to fibrotic tissues (Gijsen et al., 2021). Despite the lipid and calcification results, the final strain maps (ε estimated rr ) were close to the ground truth, resulting in a low error in the cost function.Torun et al. (2022) computed the mechanical properties of all plaque tissues but they focused mainly on fibrotic tissue.This suggested that strains observed in the evaluated atherosclerotic plaques were mainly influenced by the fibrotic tissues rather than the lipid core.One advantage of the proposed approach was that, while the majority of the literature used information on the diastolic and systolic pressures for determining the mechanical properties (Liu et al., 2018;Liu et al., 2019); this approach could be applied at any state of pressure.Since the methodology estimates the material parameters of GOH, we obtain the response for all ranges of pressures independently of the pressures used for the estimation of the parameters.So, the results could be extrapolated to other lumen pressures.Once the material properties were finally estimated, we applied the Pull-Back algorithm without fixing the recovery factor to obtain the final unpressurized geometry.With this geometry and the estimated mechanical properties, it would be possible to determine the stress state in the arterial wall and apply it to the risk of rupture of the plaque.Although Figure 11 and Supplementary Figures show that both approaches gave similar stress distributions to the ground truth at 115 mmHg, the second method could be extrapolated to any other physiologic pressure.Furthermore, this approach produced a smoother and more accurate σ max distribution in the different analyzed cases even in complex geometries where the optimized material properties provided a low R 2 .Thus, this method would allow a better estimation of the areas with the highest stress and therefore the areas with the highest risk of plaque rupture. Limitations Although the results of this new methodology are very promising, it is important to note that the study has a number of significant limitations. • The study was basically theoretical and should be considered as the first step to lay down a methodology and validate it with different geometries and materials.We used the radial strains in our cost function because it was commonly used in the literature (Le Floc'h et al., 2009;Tacheau et al., 2016;Gómez et al., 2019).However, we also tried to use displacement fields with similar outcomes (Akyildiz et al., 2016).Radial strains or displacement fields could be obtained from IVUS images by using speckle tracking or other algorithms (Maurice et al., 2004;Lopata et al., 2009), that had been successfully applied in vitro (Le Floc'h et al., 2010;Porée et al., 2017) and in vivo (Le Floc'h et al., 2012).Due to the noisy nature of IVUS data, the radial strains obtained by these estimators lead to noisy strain maps.We mimicked that noise by adding an SNR of 20 dB over the FE strain field.• The in silico data were constructed using 2D FE models under the assumption of plane strain.These FE models overestimated the magnitude of stress compared to 3D FE models (Ohayon et al., 2005;Carpenter et al., 2020;Peña et al., 2021).Narayanan et al. 
(2021) developed a method to create meshes from OCT images, and they captured not only the 3D morphological information but also ensured that the applied load was physiologically representative.This kind of process needed different segmented slices over the axial direction of the artery which would increase the segmentation error in complex geometries.We used 2D FE models to simulate the information provided by a standard IVUS image.Furthermore, the influence of residual stress was not taken into account in this study.Although it has a relevant impact on the location of the maximum stress in the atherosclerotic plaque (Cilla et al., 2012), residual stress requires ex vivo information that is difficult to obtain (opening angle test, axial stretch. ..). • The fibrotic tissues were considered homogeneous, with the same mechanical properties as the tissue.Nonetheless, histologies showed a heterogeneous composition, and it affected the mechanical properties and the stress state (Akyildiz et al., 2016).However, heterogeneities were very difficult to segment or detect and some methodologies took them into account by changing the number of inclusions evaluated in their models and obtaining heterogeneous Young's modulus over the fibrotic tissue (Le Floc'h et al., 2009;Porée et al., 2017).In this study, only fibrotic tissue and macro inclusions, such as lipids or calcifications, were segmented, and homogeneous behavior was assumed in each tissue.• The optimization process was set to take no more than 4 h; but it depended on the initial point and the limits of every parameter.A previous study was conducted to analyze different initial points with similar results.Although pattern-search was a local optimization algorithm, the polling method was modified to avoid local minima.We also compared these results with those provided by the genetic algorithm, but the required time was much longer.More complex global optimization methodologies, like modifications of Bayesian optimization (Torun and Swaminathan, 2019), take more time (approximately 7 h) to estimate the hyperelastic material properties of the arterial wall (Torun et al., 2022).Liu et al. (2019) managed a computational cost of 1-2 h for determining the mechanical properties of ascending thoracic aortic aneurysm.They reduced the time by using principal components analysis from the stress-stretch curves and using an algorithm to go from coarse to fine to analyze lots of behavior curves in less time (Liu et al., 2018;Liu et al., 2019).They characterized the properties assuming the arterial wall with the same properties, however, in atherosclerotic plaques, the importance remained on the different properties of the tissues, and this method should be applied for each segmented material increasing the computational cost and the complexity of the study. 
Conclusion
In this work, we have presented a new method to determine the non-linear material properties of atherosclerotic tissues. The proposed approach has been compared with a classical process based on linear properties, providing a more accurate description of the mechanical behavior of atherosclerotic tissues and resulting in a lower error in the cost function. We estimated the non-linear properties of the tissues and the unpressurized geometry of the plaques, which allowed us to obtain the mechanical response of the atherosclerotic tissues throughout the entire cardiac cycle, rather than only at a specific blood pressure. Despite being a theoretical framework, this method was successfully applied to different real geometries and different fibrotic materials, demonstrating its potential as a valuable tool for assessing the vulnerability of patients with atherosclerotic coronary plaques.
FIGURE 2 Scheme of the optimization process of the first method to recover the linear elastic properties of the atherosclerotic tissues.
FIGURE 3 Scheme of the optimization process of the second method to recover the non-linear hyperelastic properties of the atherosclerotic tissues.
FIGURE 4 Scheme of the Pull-Back algorithm used to recover the Zero-Pressure geometry.
FIGURE 5 Box plot of the sr variability in the fibrotic material (left) and lipidic material (right) for the different material combinations of LHS (orange) and different geometries (red).
FIGURE 10 Results of both methods over the first IVUS geometry with calcified 1 material properties. (A) Simulated radial strains after adding 20 dB of SNR to the FE results. (B) Segmentation process, where the chosen SGV to extract the lipid was |▽ε rr |, which is represented next to the image segmentation results. Then, the mechanical characterization used this segmentation to estimate the radial strain with linear material properties (C) or non-linear properties (D).
FIGURE 11 Max. Principal Stress distribution [kPa] at 115 mmHg in the fifth IVUS plaque taken as ground truth (A), and the resulting σ max for the linear (B) and non-linear (C) approaches.
TABLE 1 Material properties used for the geometrical analysis in the first approach.
TABLE 2 GOH material parameters used for the simulated fibrotic tissues.
TABLE 3 Summary of the results obtained with the FE models using GOH material models.
Characterization of VuMATE1 Expression in Response to Iron Nutrition and Aluminum Stress Reveals Adaptation of Rice Bean (Vigna umbellata) to Acid Soils through Cis Regulation Rice bean (Vigna umbellata) VuMATE1 appears to be constitutively expressed at vascular system but root apex, and Al stress extends its expression to root apex. Whether VuMATE1 participates in both Al tolerance and Fe nutrition, and how VuMATE1 expression is regulated is of great interest. In this study, the role of VuMATE1 in Fe nutrition was characterized through in planta complementation assays. The transcriptional regulation of VuMATE1 was investigated through promoter analysis and promoter-GUS reporter assays. The results showed that the expression of VuMATE1 was regulated by Al stress but not Fe status. Complementation of frd3-1 with VuMATE1 under VuMATE1 promoter could not restore phenotype, but restored with 35SCaMV promoter. Immunostaining of VuMATE1 revealed abnormal localization of VuMATE1 in vasculature. In planta GUS reporter assay identified Al-responsive cis-acting elements resided between -1228 and -574 bp. Promoter analysis revealed several cis-acting elements, but transcription is not simply regulated by one of these elements. We demonstrated that cis regulation of VuMATE1 expression is involved in Al tolerance mechanism, while not involved in Fe nutrition. These results reveal the evolution of VuMATE1 expression for better adaptation of rice bean to acid soils where Al stress imposed but Fe deficiency pressure released. INTRODUCTION Multidrug and toxic compound extrusion (MATE) protein forms a large family of transporters in prokaryotes and eukaryotes where they perform a broad range of functions with diverse substrates (Omote et al., 2006). Some localize to the tonoplast where they transport secondary metabolites into the vacuole. For examples, anthoMATE in Vitis vinifera is an H + -dependent acylated anthocyanin transporter (Gomez et al., 2009) and the NtMATE1 and NtMATE2 proteins in Nicotiana tabacum are H + -dependent nicotine transporters (Shoji et al., 2009). EDS5 is a MATE protein in Arabidopsis exporting salicylic acid (SA) from the chloroplast to the cytoplasm (Serrano et al., 2013), while DTX50 functions as an abscisic acid (ABA) efflux transporter . A sub-group of MATE proteins in plants function as citrate transporters and perform at least two separate physiological roles. One role is involved in the translocation of iron (Fe) from the roots to the shoots, such as Ferric Reductase Deficient 3 (FRD3) in Arabidopsis (Durrett et al., 2007) and OsFRDL1 (for FRD3like 1) in rice (Oryza sativa; Yokosho et al., 2009). These proteins are mainly expressed in the pericycle where they transport citrate into the xylem. Knockout mutations of these genes in Arabidopsis and rice result in Fe accumulation in the root vasculature. The second role for this sub-group of MATE proteins involves in Al 3+ tolerance, such as HvAACT1 (Aluminum-activated citrate transporter 1) in barley (Hordeum vulgare; Furukawa et al., 2007) and SbMATE in sorghum (Sorghum bicolor; Magalhaes et al., 2007). These proteins are expressed in the root apices and they facilitate the Al 3+ -activated secretion of citrate which protects the growing root apices from Al 3+ toxicity in acid soils. Therefore, the functions of MATE proteins that transport citrate depend on where they are expressed in the plant. 
Interestingly, recent evidence from rice and wheat (Triticum aestivum) indicates that a single citrate transporter can perform both physiological roles (Fujii et al., 2012;Tovkach et al., 2013). For instance, in Al 3+sensitive genotypes of barley, HvAACT1 is expressed in the vascular system of the root where it transports citrate into the xylem for efficient translocation of Fe to the shoots. However, in Al 3+ -tolerant genotypes of barley, a 1 kb insertion in 5 -UTR region of HvAACT1 extends expression of this gene to the root apices that results in citrate secretion from these cells as well (Fujii et al., 2012). Similarly, TaMATE1B gene in the wheat is expressed in the vasculature of roots and induced by Fe deficiency. However, some Al 3+ -tolerant genotypes of wheat have a 11.1 kb transposon-like insertion in 25 bp upstream of the start codon of TaMATE1B which constitutively enhances its expression in root apices, increases citrate secretion from these cells and contributes to Al 3+ tolerance (Tovkach et al., 2013). Thus, the physiological functions of these proteins have been extended through mutations which alter their tissuespecific expression pattern. These findings are consistent with the hypothesis that the original functions of these genes are altered or extended by mutations to the coding or regulatory regions of the gene . VuMATE1 from rice bean (Vigna umbellata) is another example of a MATE protein that transports citrate. It was first isolated from Al 3+ -stressed root tips and mediates Al 3+activated release of citrate from those cells (Yang et al., 2011). On the one hand, VuMATE1 is not expressed in the root apices in the absence of Al stress, but induced by Al stress after 3 h of exposure (Liu et al., 2013). This raises the first question how the expression of VuMATE1 was regulated under Al stress. Gene expression is regulated by different mechansims, such as chromatin condensation, DNA methylation, transcriptional initiaition, alternative splicing of RNA, mRNA stability and so on (Wray et al., 2003). In the case of TaMATE1B and HvAACT1, transposon-like element (TE) insertion in the promoter results in extention of expression into root apices and confers Al tolerancce phenotype. A miniature inverted repeat transposable element (MITE) insertion in the promoter region of SbMATE in sorghum was suggested to be involved in regulating the expression of this gene (Magalhaes et al., 2007). Whether similar mechanism is involved in VuMATE1 expression regulation or not deserves to be investigated. On the other hand, VuMATE1 seems to be constitutively expressed in the vascular system of mature roots cells (Liu et al., 2013). This raises the second question whether VuMATE1 plays roles in Fe nutrition as found for some other members of the family. In this study, we first evaluated the contribution of VuMATE1 to Fe nutrition using in planta complementation assays, and then explored the role of promoter (cis-regulatory sequences) in transcriptional regulation of VuMATE1 expression in response to Al stress by means of GUS staining assays of VuMATE1 promoter-GUS fusion transgenic lines. The results show that VuMATE1 is expressed in epidermis and vasculature of mature roots but it is not involved in citrate release from roots of Fe deficient plants or in the translocation of Fe to the shoots. 
We further demonstrate that cis-regulation is involved in Al-induced expression of VuMATE1, and the cis-acting elements involved in root-tip-specific and Al-inducible expression of VuMATE1 resided in promoter region between −1228 and −574 bp. We discuss these results with respect to the adaptation of rice bean to acid soils. Isolation of VuMATE1 Promoter The promoter of VuMATE1 was isolated from genomic libraries that have been constructed before (Liu et al., 2013). Nested PCR was performed using the outer/inner adaptor primer provided by the GenomeWalker TM Universal Kit (Clontech, Mountain View, CA, USA) and two VuMATE1 gene-specific primers (Supplementary Table S1). The amplified fragments were cloned into the pMD18-T vector (Takara, Dalian, China). The sequences that extends upstream of the cDNA clones were isolated as the 5 -upstream regions of the gene. Identification of Transcription Start Site To determine the transcription start site (TSS) of VuMATE1, total RNA from Al-stressed root apices was isolated with an RNAprep pure Plant Kit (Tiangen, Beijing, China). 5 -RACE was performed using SMART TM RACE cDNA Amplification Kit (Clontech). Gene-specific primers for 5 -RACE amplification are listed in Supplementary Table S1. Amplified cDNA fragments were cloned into the pMD18-T clone vector (Takara) and sequenced. Vector Construction and Genetic Transformation A series of 5 deleted promoters, −1720, −1228, −574, and −192 bp from the TSS of VuMATE1, were amplified by PCR method (Supplementary Figure S1). The primer sequences are shown in Supplementary Table S1. The amplified fragments were cloned into the pMD18-T vector (Takara) and their sequences were confirmed by DNA sequencing. The verified fragments were excised using KpnI and NcoI sites at the 5 ends of forward and reverse primers, respectively, and inserted at the corresponding sites of pCAMBIA1301. The plasmid containing −1720 bp construct was also cloned into pCAMBIA1302 at the KpnI and NcoI sites creating an in-frame translational fusion to the GFP gene that was confirmed by DNA sequencing. These constructs was moved into Agrobacterium strain EHA105 and transformed into wild-type Col-0 plants (Clough and Bent, 1998). For complementation assays, the VuMATE1 coding sequence without stop codon was amplified via PCR method (Supplementary Table S1). The purified fragment was subcoloned into the constructed VuMATE1p::GFP plasmid between VuMATE1p and GFP. The generated plasmid was moved into Agrobacterium strain EHA105 and transformed into frd3-1 homozygous lines. For the construction of transgenic lines over-expressing VuMATE1, a previous constructed vector harboring 35S::VuMATE1 was used to electroporate into EHA105 (Yang et al., 2011), and transformed into frd3-1. For all transgenic plant lines, T1 seeds were collected from transgenic T0 plants, and were surface sterilized and planted on one-half-strength Murashige and Skoog medium supplemented with hygromycin. The resistant plants were transferred to soil and allowed to set seeds (T2). Transgenic lines that displayed a 3:1 ratio for hygromycin resistance in the T3 generation were selected for further analysis. All experiments were performed using plants corresponding to the T4 or T5 generation. Growth Conditions For rice bean experiments, rice bean [V. umbellata (Thunb.) Ohwi and Ohashi] seeds were soaked in deionized water overnight, and germinated at 26 • C in the dark. After germination, the seeds were transferred to a net floating on a 0.5 mM CaCl 2 solution (pH 5.5). 
The solution was renewed daily. At day 3 after germination, seedlings were transferred to a 1.2-L plastic pot (four holes per pot, two seedlings for each hole) containing aerated one-fifth-strength Hoagland nutrient solution (pH 5.5). For Fe-deficiency experiment, plants were cultivated for 1 week in Fe sufficient conditions (1/5 Hoagland solution) and then transferred to the same nutrient solution with or without Fe-EDTA (20 µM) for 12 days. The treatment solution was renewed every 3 days. For Arabidopsis experiments, seeds were surface-sterilized in 75% ethanol for 4 min, and subsequently rinsed thoroughly with sterile water. Seeds were stratified at 4 • C for 2 to 4 days before being planted on Petri dishes with 1/5 Hoagland nutrient solution supplemented with 1 mM MES and 0.8% agar (pH 5.5). After 1 week, the seedlings with uniform size were selected to be transferred onto a net floating on one-fifth-strength Hoagland nutrient solution (pH 5.5). The solution was renewed every 3 days. All of the experiments were carried out in an environmentally controlled growth room under 12 h 24 • C : 12 h 22 • C, light : dark, a light intensity of 200 µmol photons m −2 s −1 and 70% relative humidity. RT-PCR and qRT-PCR Plant materials were ground in liquid nitrogen, and total RNA was extracted using an RNAprep pure Plant Kit (Tiangen). Firststrand cDNAs were synthesized using a PrimeScript TM RT-PCR Kit (Takara), and diluted to 100 ng µL −1 . Semi-quantitative RT-PCR was performed with the diluted cDNA as template. Quantitative real-time PCR (qRT-PCR) was performed on a LightCycler480 machine (Roche Diagnostics, Indianapolis, IN, USA) using a SYBR PremixEx Taq kit (Takara). Primer pairs used in both RT-and qRT-PCR were listed in Supplementary Table S1. Three biological replicate RNA/cDNA samples were generated, and each cDNA sample was performed with triplicate technical replicates, from which the relative expression was calculated against that of the internal control gene 18S rRNA using the formula 2 − Cp . Collection and Analysis of Root Exudates After treatments, the roots were briefly washed with 0.5 mM CaCl 2 solution (pH 5.5), and then root exudates were collected in 0.5 mM CaCl 2 solution (pH 4.5) either in the absence or presence of 25 µM Al for 6 h the collected root exudates were purified and concentrated according to Yang et al. (2006). The concentration of citrate was analyzed enzymatically (Delhaize et al., 1993). Immunostaining of VuMATE1 The synthetic peptide ATTDNNDIETGDEG-C (positions 173-186) was used to immunize rabbits to obtain antibodies against VuMATE1 (Genscript, Nanjing, China). One-week-old seedlings were used to immunolocalization of VuMATE1. The procedure was following previous report (Yamaji and Ma, 2007). Ferric Chelate Reductase Activity Measurement Ferric chelate reductase (FCR) activity was determined according to the previous study . In brief, the whole excised root (c. 1.0 g) was placed in a tube filled with 50-ml of assay solution, which consisted of 0.5 mM CaSO 4 , 0.1 mM MES, 0.1 mM BPDS, and 100 µM Fe-EDTA at pH 5.5 adjusted with 1 M NaOH. The tubes were placed in a dark room at 25 • C for 1 h, with periodic hand-swirling at 10-min intervals. The absorbance of the assay solutions was measured at 535 nm, and the concentration of Fe(II)[BPDS] 3 was quantified using a standard curve. Chlorophyll Extraction and Quantification After 3 weeks culture on hydroponics, newly expanded leaves were harvested. 
Chlorophyll was extracted in methanol and absorbance measured at 652, 665, and 750 nm. Total chlorophyll concentration was calculated as described (Porra et al., 1989). Microscopy For GUS histochemical staining, seedlings (10-day-old) were incubated with the substrate 5-bromo-4-chloro-3-indolyl β-Dglucuronide as described (Jefferson et al., 1987). To localize Fe 3+ , seedlings after 3 weeks culture were vacuum-infiltrated with Perls stain solution [equal volumes of 4% (v/v) HCl and 4% (w/v) potassium ferrocyanide] for 30 min. Seedlings were then rinsed with water, observed, and photographed with a Nikon AZ100 microscope (Tokyo, Japan). The fluorescence of VuMATE1p::GFP in Arabidopsis was observed with a confocal laser scanning microscope (LSM710: Karl Zeiss, Jena, Germany). For imaging GFP, the 488 nm line of the Argon laser was used for excitation and emission was detected at 520 nm. VuMATE1 Promoter Structure Analysis To investigate regulatory mechanisms of VuMATE1 expression, we analyzed the promoter sequence of VuMATE1. We obtained 2-kb DNA sequence upstream of VuMATE1 translation start codon (ATG) (Supplementary Figure S1A). We next determined the potential transcription start site (TSS) of VuMATE1 using 5 -RACE, and only one TSS (A, +1 position) in the VuMATE1 promoter was identified (Supplementary Figure S1B). The TSS was located in the 3 region of the putative TATA box (TATAA, −35/−30bp). There is an intron of 170 bp in length between TSS and translation start codon (Supplementary Figure S1A). Expression Patterns of VuMATE1 under Fe Deficiency and Al Stress We have previously demonstrated that the expression of VuMATE1 is induced by Al stress in root tip region (Liu et al., 2013). However, tissue expression localization of VuMATE1 in transgenic Arabidopsis plants carrying β-glucuronidase (GUS) reporter gene under control of VuMATE1 promoter showed that VuMATE1 is constitutively expressed at vasculature of maturation root zone (Liu et al., 2013). Here, we further examined the expression and localization of VuMATE1 in vivo. We transformed wild-type Arabidopsis (col-0) with a construct that harbors the VuMATE1 promoter upstream of green fluorescent protein (VuMATE1p::GFP). GFP signal was not observed in root apex under normal growth conditions, but detectable in maturation root zone (Supplementary Figure S2A). Detailed analysis of the expression of VuMATE1p::GFP revealed that it was mainly expressed in epidermis and vasculature of maturation root zone (Supplementary Figures S2C,D). Exposure of roots to Al for 9 h resulted in significant increase of VuMATE1p::GFP expression in root apex (Supplementary Figure S2B), but had no effects in maturation root zone (Supplementary Figure S2D). To examine whether the expression of VuMATE1 is regulated by Fe status, we analyzed the response of VuMATE1 expression to Fe nutrition by RT-and qRT-PCR. After 12 days culture of rice bean seedlings in nutrient solution with (+Fe) or without Fe (−Fe), newly expanded leaves of −Fe plants displayed obvious chlorosis symptom (Supplementary Figure S3). SPAD values (indications of chlorophyll content) of newly expanded leaves decreased by ∼60% following this treatment compared to +Fe conditions ( Figure 1A) and the ferric chelate reductase (FCR) activity in roots increased by almost threefold (Figure 1B), demonstrating that the plants were Fe-deficient after 12 days treatment. 
This is supported by RT-PCR and qRT-PCR analysis indicating that expression of VuIRT1 (Iron Regulated Transporter 1), the ferrous Fe transporter, was fifteen times greater in the roots of −Fe plants compared to +Fe controls (Figures 1C,D). By contrast, VuMATE1 expression in roots was unaffected by Fe status since expression levels were similar in the −Fe and +Fe plants ( Figure 1E). VuMATE1 cannot Complement the frd3-1 Phenotype The localization of VuMATE1 to the vascular system prompted us to investigate whether VuMATE1 is also involved in Fe nutrition similar to HvAACT1 in barley (Fujii et al., 2012) and TaMATE1B in wheat (Tovkach et al., 2013). We expressed VuMATE1 using its native promoter in the Arabidopsis mutant frd3-1 (VuMATE1p::VuMATE1/frd3-1), which is defective in Fe translocation (Green and Rogers, 2004), and two independent transgenic lines (line1 and line2) were used for further analysis. RT-PCR analysis indicated that VuMATE1 was expressed in both transgenic lines but not in frd3-1 mutant (Figure 2A). When grown in one-fifth-strength Hoagland nutrient solution, newly expanded leaves of the frd3-1 mutant lines exhibited severe chlorosis (Figure 2B), which is in accordance with lower chlorophyll levels ( Figure 2C). Perls blue staining demonstrated that significantly more Fe was accumulated in the root vasculature of frd3-1 than WT plants (Supplementary Figure S4), which is consistent with previous descriptions of this mutant (Durrett et al., 2007). The frd3-1 lines transformed with VuMATE1p::VuMATE1 showed these same general symptoms ( Figure 2B). Chlorophyll content of the newly expanded leaves in frd3-1 and the two independent complementation lines was approximately half of WT levels ( Figure 2C). Perls blue staining of roots showed similarly high Fe precipitation in the vasculature of the complementation lines (Supplementary Figure S4), indicating that Fe translocation to the shoots was reduced in all the lines. These results indicate that VuMATE1 expression driven by its native promoter could not complement the mutant phenotype of frd3-1. We have previously demonstrated that VuMATE1 is a plasma membrane-localized citrate-permeable transporter protein (Yang et al., 2011). Thus, the inability of VuMATE1 to restore frd3-1 phenotype with respect to Fe nutrition suggests that the expression pattern but not gene function is responsible for the loss of its role in Fe nutrition. To test this hypothesis, we introduced VuMATE1 using 35S CaMV promoter into the frd3-1 mutant (35S::VuMATE1/frd3-1). The chlorosis was greatly, albeit not completely, restored in two independent transgenic lines, OX1 and OX2 (Figure 3A). In addition, Perls blue staining result also showed that the accumulation of Fe in root vasculature was decreased dramatically in comparison to frd3-1 mutant ( Figure 3B). Cell-Specificity of Localization of VuMATE1 To further investigate why VuMATE1 expression under the control of its native promoter could not complement the frd3-1 mutant, we examined the cell-specificity of localization of VuMATE1 with a polyclonal antibody. In root apex, no fluorescent signal was observed ( Figure 4A). However, fluorescence signal was detected in the epidermis of root apex after 9 h exposure to 25 µM Al ( Figure 4B). Moreover, VuMATE1 is localized on the plasma membrane of the distal side of epidermis cell ( Figure 4B). 
Being mainly localized to cells near xylem vessels, and the epidermis in maturation root zone, VuMATE1 could not be detected in the pericycle or in cells internal to the pericycle ( Figure 4C). FIGURE 4 | Cell-specificity of localization of VuMATE1 in roots. Immunostaining with anti-VuMATE1 antibody is shown in the root apex either before (A) or after Al stress for 9 h (B), and maturation root zone (C). Bar, 100 µm. Effect of Fe Supply on Citrate Secretion Citrate anion secretion from roots can also complex with Fe in the rhizosphere to improve its mobilization and uptake by roots (Jones et al., 1996). The localization of VuMATE1 in the epidermis cells of the root led us to investigate its possible role in mobilizing Fe in the rhizosphere through enhanced citrate secretion. Therefore, we analyzed citrate secretion from whole roots under either +Fe or −Fe conditions and found that Fe deficiency did not increase citrate secretion. However, Al treatment significantly increased citrate secretion regardless of Fe supply (Figure 5). FIGURE 5 | Citrate secretion from rice bean roots in response to Fe deficiency and Al stress. One-week-old seedlings were subject to nutrient solution with 20 µM Fe (+Fe) or without (−Fe) for 12 days after treatment, root exudates were collected in 0.5 mM CaCl 2 solution (pH 4.5) in the absence (−Al) or presence of 25 µM Al (+Al) for 6 h. Data are expressed as means ± SD (n = 4). Columns with different letters indicate significant difference at P < 0.05. Promoter Analysis to Characterize VuMATE1 Expression To characterize in detail the regulation of VuMATE1 transcription by Al stress and Fe status, we analyzed GUS expression in VuMATE1p::GUS transgenic lines carrying different lengths of the 5 region of the VuMATE1 promoter. In the absence of Al stress, GUS staining was not observed in the root apex of all promoter-GUS reporter transgenic lines carrying different 5 deletion promoters ( Figure 6A). However, after 9-h exposure to Al stress, promoter-GUS reporter lines, −1720 and −1228 bp::GUS, exhibited Al-inducible and root apex staining of GUS activity, whereas GUS staining was not observed in −574 and −192 bp::GUS transgenic lines ( Figure 6A). This result indicated that cis-acting elements responsible for root-tip specific and Al-inducible expression resided in VuMATE1 promoter sequence between −1228 and −574 bp. On the other hand, all promoter-GUS reporter transgenic lines examined showed constitutive expression in maturation root zone, even in the shortest −192 bp::GUS transgenic line (Figure 6B). To find cis-acting elements in the promoter, we first searched the VuMATE1 promoter sequence between −1228 and −574 bp using the PLACE and PlantCARE databases (Higo et al., 1999;Lescot et al., 2002). As shown in Table 1, both databases predicted cis-acting elements related to drought-inducible element (MBS), ABA response element (ABRE), wounding and pathogen responsive element (W box), and SA response element (TCA). We then searched three reported Al-responsive cis-acting elements, i.e., GGN(T/g/a/C)V(C/A/g)S(C/G)T of STOP1 ortholog, ART1 (Tsutsui et al., 2011), GGCCCA(T/A) of ASR5 (Arenhart et al., 2014), and CGCG box of CALMODULIN-BINDING TRANSCRITPION ACTIVATOR (CAMTA; Tokizawa et al., 2015). Only a single cis-acting element of ART1 was found to be located at −629 bp, whereas none of both ASR5 and CAMTA was observed ( Table 1). 
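To make the motif searches described above concrete, the following sketch scans a promoter fragment for the degenerate ART1-like element (interpreted here as the core GGNVS, with N = A/C/G/T, V = A/C/G, S = C/G) and the W-box core TTGACC using simple regular expressions. The sequence, the coordinate bookkeeping, and the exact interpretation of the degenerate positions are assumptions made for illustration; this does not reproduce the PLACE/PlantCARE analysis performed by the authors.

```python
import re

# IUPAC-style reading of the degenerate positions quoted in the text:
# N = A/C/G/T, V = A/C/G, S = C/G.  Treating the ART1-like element as the
# core GGNVS and the W-box as TTGACC is an assumption for this sketch only.
MOTIFS = {
    "ART1-like (GGNVS)": r"GG[ACGT][ACG][CG]",
    "W-box (TTGACC)": r"TTGACC",
}

def scan_promoter(seq, upstream_offset):
    """Report motif hits as positions relative to the TSS (+1).

    upstream_offset is the coordinate of the first base of `seq` relative
    to the TSS, e.g. -1228 for the fragment discussed above.
    """
    seq = seq.upper()
    hits = []
    for name, pattern in MOTIFS.items():
        for m in re.finditer(pattern, seq):
            hits.append((name, upstream_offset + m.start(), m.group()))
    return sorted(hits, key=lambda h: h[1])

# Hypothetical 60-bp fragment standing in for part of the -1228/-574 region
fragment = "ATGGCACGTTGACCTAGGAGCTAGGTACCTAGGATCGCTAGGCACGTAGCATGCATGCAT"
for name, pos, site in scan_promoter(fragment, -1228):
    print(f"{name} at {pos}: {site}")
```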
In order to examine whether phytohormones, ABA and SA, are involved in the transcriptional regulation of VuMATE1, we compared GUS activities among different treatments, i.e., ABA (10 µM; pH 5.5), SA (10 µM; pH 5.5), or Al (10 µM; pH 5.5). Previous study showed that apart from Al, low pH (5.0) also affect VuMATE1 expression in GUS transgenic lines (Fan et al., 2015). Therefore, in order to clarify the regulatory role of hormones, we used the pH5.5 as normal growth conditions to eliminate the impact of pH. As shown in Figure 7, under normal growth conditions (pH 5.5), GUS activity was undetectable in the entire root apex region. Both ABA and SA could slightly induce GUS activity restricted to elongation zone. Addition of Al resulted in the strong induction of GUS activity at entire root apex region. DISCUSSION In the present study, we demonstrated that cis regulation is involved in the transcriptional regulation of VuMATE1 expression in adaptation to acid soils. In planta GUS expression analysis indicated that cis-acting motifs responsible for Alinducible expression reside between −1228 and −574 bp of VuMATE1 promoter (Figure 6). Several lines of evidence suggest FIGURE 7 | VuMATE1p::GUS expression analysis in transgenic Arabidopsis plants. Transgenic seedlings were exposed to 10 µM Al, 10 µM ABA, or 10 µM SA for 9 h. Activation of the VuMATE1 promoter was observed by GUS staining (blue). At least two independent transgenic lines were used to analyze GUS expression. Bars, 100 µm. that the acquisition of Al tolerance mechanism can occur through mutations that modify the expression patterns of a single gene. For example, in wheat, the number of tandemly repeated elements of 33-803 bp in length located upstream of TaALMT1 coding region was associated with higher levels of TaALMT1 expression (Sasaki et al., 2006;. Interestingly, these tandemly repeated elements are absent in the AetALMT1 which is the ortholog of TaALMT1 from Aegilops tauschii, the donor of D genome in hexaploid wheat . It was speculated that these tandemly-repeated elements provided some advantage to hexaploid wheat as individuals spread to more acid soils. In Al-tolerant genotypes of barley, a 1-kb insertion upstream of the HvAACT1 coding region extends the expression of this gene into the root apex (Fujii et al., 2012). Similarly the higher expression level of TaMATE1B in several Brazilian wheat lines was found to be associated with the presence of a Sukkula-like transposable element in its promoter region as well (Tovkach et al., 2013). Apparently, the acquisition of Al tolerance mechanism in rice bean is different from these species, since no insertion segments were observed in the VuMATE1 promoter. Recently, adaptation of Holcus lanatus to acid soils was achieved by increasing the number of cis-acting elements for the transcription factor, ART1, in the promoter of HlALMT1 . The expression of Arabidopsis AtALMT1 is also largely attributed to the interaction between AtSTOP1 and its cis-acting element. However, the cis-acting elements and trans-regulatory factors involved in VuMATE1 expression seem to be different from these genes. We have previously demonstrated that an inducible C 2 H 2 -type zinc finger transcription factor, VuSTOP1, can bind to VuMATE1 promoter region between −1228 and −920 bp, but not to the region between −820 and −555 bp where a putative cisacting motif of ART1 resides ( Table 1; Fan et al., 2015). 
Furthermore, in planta complementation of the Atstop1 mutant with VuSTOP1 demonstrated that the expression of AtMATE was only slightly restored in the complemented lines (Fan et al., 2015). Thus, it is very likely that transcription factors other than VuSTOP1 are involved in the transcriptional regulation of VuMATE1 under Al stress. Recently, a series of cis-acting elements and transcription factors responsible for the regulation of AtALMT1 expression were identified by integrating bioinformatics and molecular biological approaches (Tokizawa et al., 2015). Thus, it is possible that multiple mechanisms are also involved in the regulation of VuMATE1 expression, which needs further investigation. We identified a putative W-box cis-acting motif (TTGACC) at position −1009 bp of the VuMATE1 promoter (Table 1), which is potentially responsible for the interaction with WRKY-type transcription factors. Recently, Ding et al. (2013) reported that WRKY46 functions as a repressor of AtALMT1, whereas WRKY46 itself is repressed by Al stress. If WRKY-type transcription factors are involved in the transcriptional regulation of VuMATE1 expression in rice bean, they would act as transcriptional activators rather than repressors. This is evidenced by GUS expression in VuMATE1p::GUS transgenic lines carrying different lengths of the 5′ region of the VuMATE1 promoter. In both the −579 and −194 bp::GUS lines, GUS activity was undetectable either in the absence or presence of Al stress (Figure 6A), indicating that activators rather than repressors are mainly involved in VuMATE1 expression. Furthermore, pretreatment with CHX, a protein translation inhibitor, resulted in significant inhibition of VuMATE1 expression (Liu et al., 2013; Fan et al., 2015), confirming that repressors are not involved in the transcriptional regulation of VuMATE1 expression in response to Al stress. Phytohormones are important signal inducers that participate extensively in plant responses to environmental stresses (Sun et al., 2010; Tian et al., 2014; Yang et al., 2014). In soybean, ABA was reported to enhance citrate secretion both in the absence and presence of Al stress (Shen et al., 2004). Al stress resulted in the accumulation of SA, and exogenous application of SA could in turn improve Al tolerance through modulation of citrate secretion in Cassia tora (Yang et al., 2003). In the present study, we identified both potential ABA-responsive and SA-responsive cis-acting elements in the promoter of VuMATE1 (Table 1), suggesting possible involvement of these phytohormone signaling pathways in Al-regulated VuMATE1 expression. However, we found that phytohormones (both ABA and SA) are not the major factors responsible for Al-induced VuMATE1 expression in the rice bean root tip, although they induced GUS activity. In planta GUS expression analysis revealed differences in the intensity and tissue localization of the GUS staining between the phytohormone (ABA and SA) and Al treatments (Figure 7). Recently, it was demonstrated that several signal inducers, especially IAA and ABA, can trigger AtALMT1 expression in Arabidopsis (Kobayashi et al., 2013). However, the induction of AtALMT1 expression in response to Al stress is independent of IAA and ABA signaling, which, in combination with our present observations, provides evidence that Al-induced organic acid secretion is largely independent of phytohormone signaling.
We found that in control conditions (−Al) the expression of VuMATE1 was absent in the root apex, but constitutively expressed in root hairs, epidermis and vasculature of mature roots (Supplementary Figure S2; Figure 6). This pattern of expression suggested that VuMATE1 could be involved in aspects of Fe nutrition in rice bean. This is based on previous reports that citrate secretion from roots is associated with Fe mobilization from the rhizosphere in chickpea (Cicer arietinum; Ohwaki and Sugahara, 1997) and maize (Zea mays) (Carvalhais et al., 2011). Furthermore, other citrate-transporting MATE proteins such as HvAACT1 in barley (Fujii et al., 2012), TaMATE1B in wheat (Tovkach et al., 2013), FRD3 in Arabidopsis (Rogers and Guerinot, 2002) are induced by Fe deficiency and likely responsible for Fe translocation to the shoots by loading citrate into xylem. However, here we demonstrated that VuMATE1 was not involved in Fe nutrition in rice bean. This conclusion is supported by the following pieces of evidence. First, VuMATE1 expression was not affected by Fe deficiency (Figure 1E). There are two potential reasons responsible for the inability of responsiveness of VuMATE1 expression to Fe nutritional status. One is that there are no Fe-deficiencyresponsive components regulating VuMATE1 expression in rice bean. However, since both VuIRT1 expression and FCR activity was induced by Fe deficiency (Figures 1B,D), Fe-deficiencyresponses are clearly operating in rice bean. The other reason is that there are either no Fe-deficiency-inducible cis-elements in the promoter of VuMATE1, or that such cis-elements were originally present but have been lost or mutated during the adaptation of rice bean to acid soils. Acid soils typically have toxic concentrations of Al but they rarely induce Fe deficiency in plants because the low pH can maintain a higher concentration of soluble Fe. Therefore there is less requirements to mobilize Fe from acid soil through the secretion of organic anions. In line with this result, there is only a trace amount of citrate secreted into growth medium and the secretion rate was unaffected by the onset of Fe deficiency (Figure 5). Thus, it is possible that as these plants adapted to acid soils they lost the ciselements responsible for iron-deficiency-inducible expression of VuMATE1 but maintained the induction of expression by Al. Such a change could confer an advantage to rice bean because in addition to improving Al tolerance, it would minimize unnecessary carbon loss in conditions when it would not be beneficial. This is supported by the finding that transgenic tomato lines that express VuMATE1 under control of constitutive 35S CaMV promoter exhibited constitutive citrate efflux (Yang et al., 2011). The second reason for concluding that VuMATE1 is not involved in Fe nutrition is that VuMATE1 does not appear to translocate Fe from the roots to the shoots. This was supported by the finding that transgenic Arabidopsis lines expressing VuMATE1 under the control of its native promoter could not rescue the phenotype of frd3-1 mutant which is defective in Fe translocation (Figure 2). There are two possible reasons for the ineffectiveness of VuMATE1 in Fe translocation. One is that the expression level of VuMATE1 was too low to exhibit a sufficient phenotype ( Figure 1C). However, FRD3 expression in wild-type Arabidopsis is also at the limits of detection yet this gene was able to complement the frd3-1 mutant (Green and Rogers, 2004). 
Thus, the low level of VuMATE1 expression may not be the main reason for its inability to translocate Fe. An alternative possibility is that the tissue-specific localization of VuMATE1 is unsuited for Fe translocation. Both FRD3 in Arabidopsis and OsFRDL1 in rice have been demonstrated to be localized mainly in the pericycle (Green and Rogers, 2004;Yokosho et al., 2009). However, the expression of VuMATE1 was absent in pericycle and cells immediately internal to pericycle (Figure 4). Although it is still not clear how critical localization to the pericycle is for Fe translocation, the restoration of phenotype by VuMATE1 under 35S CaMV promoter (Figure 3) suggests that this difference in localization contribute to the ineffectiveness of VuMATE1 in helping Fe move to the shoots. Why then does VuMATE1 appear to lack a role in Fe nutrition whereas other citrate transporters from the MATE appear to be involved in Fe nutrition as well as Al tolerance? The explanation might be found in the very different origins of rice bean compared to barley and wheat. For example, barley and wheat arose and were originally cultivated in regions with calcareous soils where Fe deficiency is a major constraint to growth (von Bothmer et al., 2003;. Therefore, both species evolved efficient mechanisms for accessing and taking up Fe from these soils. The adaptation of these species to more acidic soil may have been long enough to evolve Al tolerance but not to lose their efficient mechanisms for acquiring Fe. Indeed, a recent report concluded that selection pressure for greater Al tolerance in natural populations of H. lanatus required only 150 years for Al tolerant alleles of HlALMT1 to become more frequent in the surviving population . By contrast, rice bean was originally cultivated on acid soils where Al toxicity poses a stronger selection pressure than Fe deficiency (Yang et al., 2006). We hypothesized that the Fe-deficiency-responsive cis-acting element does not evolve for VuMATE1 or such element has lost during long adaptation process to acid soils. The GUS reporter transgenic line carrying the shortest promoter exhibited the same expression patterns to others carrying longer promoter in term of Fe nutrition status provided the circumstantial evidence to support our hypothesis ( Figure 6B). However, the evolution of root tip-specific and Alinducible cis-elements in rice bean not only alleviates Al toxicity but also prevents excessive loss of fixed carbon through citrate secretion. In summary, we have characterized the expression of VuMATE1 in response to Fe deficiency. We conclude that this gene is not involved with citrate secretion from roots during Fe deficiency and nor is it involved with Fe translocation from the roots to the shoots. The role of VuMATE1 in Al tolerance is controlled by elements in the promoter which respond to Al stress, via unknown pathways, to increase expression in root apices. The loss or gain of specific physiological functions in response to environmental selection pressures can occur via cis mutations that modify the level and distribution of gene expression. AUTHOR CONTRIBUTIONS JY and SZ conceived the study. ML, JX, HL, and WF performed the experiments and carried out the analysis. ML and JY designed the experiments and wrote the manuscript. All authors read and approved the final manuscript.
On the Angular Momentum and Spin of Generalized Electromagnetic Field for $r$-Vectors in $(k,n)$ Space-Time Dimensions This paper studies the relativistic angular momentum for the generalized electromagnetic field, described by $r$-vectors in $(k,n)$ space-time dimensions, with exterior-algebraic methods. First, the angular-momentum tensor is derived from the invariance of the Lagrangian to space-time rotations (Lorentz transformations), avoiding the explicit need of the canonical tensor in Noether's theorem. The derivation proves the conservation law of angular momentum for generic values of $r$, $k$, and $n$. Second, an integral expression for the flux of the tensor across a $(k+n-1)$-dimensional surface of constant $\ell$-th space-time coordinate is provided in terms of the normal modes of the field; this analysis is a natural generalization of the standard analysis of electromagnetism, i. e. a three-dimensional space integral at constant time. Third, a brief discussion on the orbital angular momentum and the spin of the generalized electromagnetic field, including their expression in complex-valued circular polarizations, is provided for generic values of $r$, $k$, and $n$. 1 Introduction: Preliminaries, Notation, and Main Results Generalized Maxwell Equations For a given natural number r, the generalized Maxwell field F(x) and source density J(x) are characterized by multivector fields of respective grades r and r − 1 at every point x of a flat (k, n)-space-time with k temporal and n spatial dimensions [1,Sec. 3]. For any 0 ≤ s ≤ k + n, grade-s multivectors belong to a vector space with basis elements eI , where I is an ordered list of s non-repeated space-time indices; we represent space-time indices by Latin letters. We denote by Is the set of all such ordered lists of s space-time indices; we let I0 = Ø and we write I for I1. Let ∆II = eI · eI for I ∈ Is be the space-time metric, where · denotes the dot product [1, Eqs. (12)- (13)]. The temporal (resp. spatial) basis elements are e0 to e k−1 (resp. e k to e k+n−1 ) and have metric −1 (resp. +1). The generalized Maxwell equations for arbitrary r, k, and n are the following pair of coupled differential equations: in units such that c = 1. The interior derivative (or divergence), expressed with the left interior product ( ) in (1), and the exterior derivative, expressed in terms of the wedge product (∧) in (2), are both defined in [1,Sec. 2] or [2,Sec. 2] and the operator ∂ is given by ∂ = i∈I ∆ii∂i. For r = 2, k = 1, and n = 3, Eqs (1)-(2) coincide with the standard Maxwell equations, with the identification of F as the (antisymmetric) Faraday tensor of the electromagnetic field, in contravariant form and ∂ the four-gradient [3,Ch. 4], [4,Ch. 11]. The Maxwell equations can be derived by an application of the principle of stationary action [5,Ch. 19], [3,Sec. 8]. For a field theory, the action is a quantity given by the integral over a (k + n)-dimensional space-time of a scalar Lagrangian density L(x). For generalized electromagnetism, the basic field in this formulation is taken to be the vector potential A(x), a multivector field of grade r − 1, such that The Lagrangian density L is expressed in terms of the multivector dot (scalar) product [1,Sec. 
2] as the sum of two terms: a free-field density, Lem = (−1) r−1 2 F · F, and an interaction term, Lint = J · A, that is The Euler-Lagrange equations for the Lagrangian density L in (4) give indeed the Maxwell equation (1) as vector derivatives of L with respect to the potential A and its exterior derivative ∂ A, namely [6, Sec. 3.2] If we replace the potential A by a new field A ′ = A +Ā + ∂ ∧ G, whereĀ is a constant (r − 1)-vector and G is an (r − 2)-vector gauge field, the homogenous Maxwell equation (2) is unchanged [1,Sec. 3]. For a given Maxwell field, there is therefore some unavoidable (gauge) ambiguity on the value of the vector potential if r ≥ 2. Of special interest for this work are the Coulomb-ℓ gauge and the Lorenz gauge. For a space-time index ℓ, let us define the differential operator ∂l = i∈I ∆ii∂i. In the Coulomb-ℓ-gauge, the following two conditions are imposed: ∂l A = 0. In classical electromagnetism, setting ℓ = 0 recovers the Coulomb or radiation gauge. In the Coulomb-ℓ gauge, it also holds that ∂ A = 0. In the less restrictive Lorenz gauge, it simply holds that The multivectorial equation in (8) has k+n r−2 components, i. e. a scalar equation for r = 2. Energy-Momentum Tensor and Lorentz Force Energy-momentum can be transferred from the field to the source through a process modelled as a force acting on the source. The generalized Lorentz force density f is a grade-1 vector with k + n components given by [1,Sec. 4] f = J F = (∂ F) F. The volume integral of the Lorentz force density f over a (k + n)-dimensional hypervolume V k+n quantifies the transfer of energy-momentum to the source in that volume. The conservation law relating the Lorentz force (9) and the stress-energymomentum tensor Tem of the free Maxwell field F is given by [1,Sec. 4], [7,Sec. 4.3], where Tem is a symmetric rank-2 tensor for all values of r, k, and n. In analogy to the (antisymmetric) multivector basis elements eI , we denote the rank-s symmetric-tensor basis elements by uI , where I ∈ Js is an ordered list of s, possibly repeated, space-time indices and Js denotes the set of all such lists. The interior derivative (divergence) ∂ Tem is computed according to the interior product [7,Eq. (25)], and indeed satisfies (10), cf. [7,Eq. (40)]. The tensor Tem is expressed in terms of the ⊙ and tensor products [7,Sec. 2.4]. Given two multivectors a and b of the same grade s, the a ⊙ b and a b are two rank-2 tensors [7,Sec. 2.4] with basis elements wij = ei ⊗ ej and respective (i, j)-th components given by where ∆ii and ∆jj are the space-time metric defined previously. In general, neither a ⊙ b nor a b are symmetric; however, the sum a ⊙ b + a b is symmetric in its components [7,Sec. 2.4]. For all values of r, k, and n, the tensor Tem is expressed in terms of the ⊙ and tensor products [1,Sec. 4.2], [7,Sec. 4.3], as The diagonal, T em ii , and off-diagonal, T em ij with i < j, components of Tem are explicitly given by [7,Eqs (38) -(39)] T em ii = (−1) r−1 2 ∆ii T em ij = − L∈I r−1 :i,j / ∈L ∆LLσ(L, i)σ(j, L)F ε(i,L) F ε(j,L) , where for two disjoint lists I and J of non-repeated space-time indices, σ(I, J) is the signature of the permutation that sorts the concatenated list (I, J), and ε(I, J) is the sorted concatenated list (I, J). If the lists I and J are not disjoint, we adopt the convention that σ(I, J) = 0. For later use, let us define the product between basis elements ei and uI , I = (i1, i2) ∈ J2 as ei uI = Here I! 
denotes the set of all permutations (not necessarily ordered) of I, and I π = (i π 1 , i π 2 ) denotes one such permuted list. The condition i π 2 = i is implicitly enforced by the permutation signature σ(i, i π 2 ). Both the conservation law (10) and the formula for the symmetric tensor Tem (13) can be derived by exterior-algebraic methods from the invariance of the free-field action with density Lem to infinitesimal space-time translations [7]. This exterior-algebraic derivation directly gives a symmetric tensor, without recurring to the Belinfante-Rosenfeld procedure to symmetrize the canonical tensor that appears in a standard application of Noether's theorem to the invariance of the action [8,9], [10,Sec. 3.2], [11,Sec. 2.5]. In Sec. 2 of this paper, we show how a formula for the relativistic angular-momentum tensor can be derived by exterior-algebraic methods from the invariance of the action for the free field with density Lem to infinitesimal space-time rotations. Generalizing the usual electromagnetic analysis of flux as a three-dimensional space integral at constant time, the energy-momentum flux Π ℓ across the (k + n)-dimensional half space-time V k+n ℓ of fixed ℓ-th space-time coordinate x ℓ , for ℓ ∈ {0, . . . , k+n−1}, can be expressed in terms of the transverse normal modes of the field [1, Eq. (86)] as a multidimensional integral over Ξ ℓ , the set of values of ξl for which ∆ ℓℓ ξl · ξl ≤ 0, where ξl = ξ − ξ ℓ e ℓ , namely where dξ ℓ c is an infinitesimal element [2, Sec. 3.1] along all coordinates except the ℓ-th, the frequency χ ℓ is given by χ ℓ = + −∆ ℓℓ ξl · ξl, and ξl ,+ = ξl + χ ℓ e ℓ ; the complex-valued normal field components are denoted byÂ(ξl ,+ ). In Sec. 3 of this paper, we provide an analogous formula for the angular-momentum flux and its split into center-of-motion, orbital angular momentum, and spin components, as described in the next section. Relativistic Angular Momentum: Background and Summary of Main Results In classical mechanics, the angular momentum L is an axial vector (or pseudovector) with three spatial components. The relativistic angular momentum Ω is an antisymmetric tensor of rank 2, or a bivector, that combines the angular momentum L and the polar vector N for the velocity of the center-of-mass (also known as moment of energy). In fact, the way Ω is constructed is the same as the way the electromagnetic field bivector F is constructed from the axial magnetic field and the polar electric field, that is Ω = e0 ∧ N + components. In analogy to energy-momentum, a conservation law relates the transfer of angular momentum over a (k +n)-dimensional hypervolume V k+n to the divergence of an angular-momentum tensor Mα with rotation center α. In contrast to Tem, the basis elements of Mα are of the form wi,I = ei ⊗ eI , where i ∈ I and I ∈ I2. For classical electromagnetism, with r = 2, k = 1, and n = 3, this tensor is given in contravariant form as [4,Sec. 12.10.B] where T αβ are the components of the symmetric stress-energy-momentum tensor. In our notation, T αβ = T em ε(α,β) . The vectors L and N are given by volume integrals of some appropriate functions of Mα. For instance, for α = 0, the spatial angular momentum vector L of the electromagnetic field is given [4,Prob. 7.27] in terms of the standard cross product of the spatial position vector x and electric and magnetic fields E and B by: Since the spatial relativistic angular momentum bivector is the space-Hodge-dual L H , using [1, Eq. 
(36)] we have Moreover, the j-th component of the Poynting vector E × B coincides with T em ij in (15), with i = 0, where we have used that r = 2 to rewrite L as m ∈ I, that ∆mm = 1 for the spatial indices, and that σ(m, 0) = −1 for any spatial m, as well as the definition of the cross-product E × B. The (i, j)-th component of L H in (19) is thus given by the volume integral of the quantity which in turn can be identified with the component in w0,ij of the product x Tem defined in (16). In Sec. 2, we prove that this is no coincidence, and that in general it holds that The proof is built on the principle of invariance of the action to infinitesimal space-time rotations around α. In Sec. 3, we provide a formula for the relativistic angular momentum Ω ℓ α of the generalized electromagnetic field, including L and the center-of-mass velocity N, for any values of k, n, and r, as the flux of the tensor Mα across a (k + n − 1)-dimensional surface of constant ℓ-th space-time coordinate (Eqs (58) and (63)), for any ℓ, where the flux integral is carried out with respect to the inverse Hodge of the infinitesimal element dx [2,Eq. (19)]. The total angular momentum Ω ℓ α can be decomposed as Ω ℓ α = N ℓ + L ℓ + S ℓ − α ∧ Π ℓ , i. e. the center-of-mass component N ℓ , the orbital angular momentum L ℓ , and the spin S ℓ . In terms of the transverse normal modes of the field, evaluated in the Coulomb-ℓ gauge, these three terms are, respectively, expressed (cf. Eqs (74)-(76)), as where cc stands for the complex conjugate. Expressions for the bivector components of L ℓ and S ℓ are given in (77) and (80). Of special interest are the circular-polarization-basis formulas for the orbital angular momentum and the spin, respectively given in (87) and (85). For the standard electromagnetic field, the spatial components of the orbital angular momentum and spin in (28)-(29), computed for ℓ = 0, r = 2, k = 1, and n = 3, coincide with the well-known values [12,Eq. (16) in BI.2], respectively given in vector notation, rather than as a bivector, by By construction, the components of the angular momentum and spin bivectors that include the index ℓ are zero. The feasibility of the separation of angular momentum into orbital and spin parts in a gauge-invariant manner, as well as its possible operational meaning, have been subject to some discussion, particularly in a quantum context [13,14,15]. Since the consideration of quantum aspects is beyond the scope of this work, and it seems unlikely that statements about the generalized electromagnetic field can be supported by experimental observations to settle the issue, we do not dwell on this matter in this paper, apart from noting that we carry out our analysis in the Coulomb-ℓ gauge (or equivalently for the transverse normal modes of the field [12, Sec. BI]), the condition that has been found to be in best empirical agreement with observations for the standard electromagnetic field [15]. Angular-Momentum Conservation Law for the Free Generalized Electromagnetic Field In this section, we exploit the invariance of the action with Lagrangian density Lem to infinitesimal space-time rotations, e. g. Lorentz transformations, to derive a conservation law and an expression for the relativistic angular-momentum tensor by direct exterior-algebraic methods, avoiding the non-symmetric canonical tensor and the related currents in Noether's theorem. For the sake of notational compactness, we remove the subscript em in the tensor. 
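Before moving on to the derivation, it may help to recall, for orientation, the familiar r = 2, k = 1, n = 3 forms that the angular-momentum and normal-mode expressions above reduce to. The following is a schematic reconstruction, up to normalization constants and metric-sign conventions, and is not quoted verbatim from [4] or [12]:

```latex
% Angular-momentum tensor and field angular momentum (up to constant factors):
M^{\alpha\beta\gamma} \;=\; T^{\alpha\beta}\,(x-\alpha)^{\gamma} \;-\; T^{\alpha\gamma}\,(x-\alpha)^{\beta},
\qquad
\mathbf{L} \;\propto\; \int \mathrm{d}^3x \;\mathbf{x}\times\big(\mathbf{E}\times\mathbf{B}\big),

% Normal-mode (transverse) split into orbital and spin parts, schematically:
\mathbf{L}_{\mathrm{orb}} \;\propto\; -\mathrm{j}\int \mathrm{d}^3\xi \,
  \sum_{m}\hat{A}^{*}_{m}(\boldsymbol{\xi})\,
  \big(\boldsymbol{\xi}\times\partial_{\boldsymbol{\xi}}\big)\hat{A}_{m}(\boldsymbol{\xi}) \;+\; \mathrm{cc},
\qquad
\mathbf{S} \;\propto\; -\mathrm{j}\int \mathrm{d}^3\xi \;
  \hat{\mathbf{A}}^{*}(\boldsymbol{\xi})\times\hat{\mathbf{A}}(\boldsymbol{\xi}).
```

The exterior-algebraic expressions derived below reduce to forms of this type when the field grade and space-time dimensions collapse to the standard electromagnetic case.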
Conservation Law for Angular Momentum Let us shift the origin of coordinates by an infinitesimal perturbation ε ε ε. For a translation, each of the k + n components is an independent function of space-time ε ε εt. For a space-time rotation (Lorentz transformation) around a center point α, and given an infinitesimal bivector ε ε εr with k+n 2 components, it holds that Let {e ′ } denote the rotated (perturbed) basis elements, expressed in the original basis {e}. Along the i-th coordinate, the basis element ei is perturbed to first order by an infinitesimal amount where 1 = i∈I ∆iiwii is the identity matrix and the Jacobian partial-derivative matrix ∂ ⊗ ε ε ε is given by The j-th column of the Jacobian matrix contains the exterior derivative, i. e. gradient, of the j-th component of the perturbation in the coordinates, εj. As proved in [7, Sec. 3.3], a similar general expression holds for the transformation of multivector basis elements of grade s, where 1s = I∈Is ∆II wI,I is the grade-s identity matrix and the matrix G s ε ε ε is given by [7, Eq. (70)] G s ε ε ε = (−1) s−1 I∈Is i∈I j∈I\{I\i} ∆II σ(I \ i, i)σ(j, I \ i) ∂iεj w I,ε(j,I\i) . Writing the action functional over a closed region R in the new perturbed coordinates involves changing the integrand and the differentials according to (33) and (35). For the Lagrangian density Lem, given by a scalar product of two multivectors, the full details are given in [7,]. Let us assume that the fields vanish at infinity sufficiently fast, e. g. the integral of ε ε ε T = (ε ε εr (x − α)) T at infinity (the boundary of the volume in the action) vanishes. Then, the change of action δSL em is expressed in terms of the rank-2 manifestly symmetric tensor T, the stress-energy-momentum tensor (13) of the free generalized electromagnetic field, as having assumed that the integration region R is large enough to make the physical system closed, and that the fields decay fast enough over R so that the flux of the fields over the boundary of R is negligible. This formula for the change of action (38) holds for arbitrary grades of the generalized electromagnetic field F. The integrand in (38) can be rewritten using (32) and [1,Eq. (27)] as Assuming that infinitesimal space-time rotations are a symmetry of the system and that the fields decay sufficiently fast, the fact that the variation of the action δSL em must be zero for all infinitesimal perturbations ε ε εr implies that This expression characterizes the conservation law related to angular momentum, in the absence of external currents. Differently from the condition ∂ T = 0 that appears in the context of invariance to translations and gives a the conservation law for the energy-momentum, invariance to infinitesimal rotations requires the interior derivative (divergence) of the stressenergy-tensor to be radial, or equivalently parallel to the relative-position vector x − α. In the following section, we provide an expression for a rank-3 angular-momentum tensor, valid for any number of space-time dimensions and grade of the electromagnetic field. Relativistic Angular-Momentum Tensor In this section, we prove that (40) can be expressed as the matrix derivative (divergence) of a rank-3 tensor, which we will identify with the relativistic angular-momentum tensor of the generalized electromagnetic field. To start, we expand the bivector equation (40) in components as Consider now a bivector of a similar form, where (xi − αi) and T ε(j,ℓ) are swapped, i. e. (xi − αi)∂jT ε(j,ℓ) is replaced by T ε(j,ℓ) ∂j (xi − αi). 
Since ∂j(xi − αi) = δji, this bivector can be evaluated as the zero bivector, where we have used that σ(i, i) = 0 to keep only the terms with i = ℓ and then split the summation into the disjoint cases i < ℓ and ℓ < i and interchanged the roles of i and ℓ in the latter case. Since σ(i, ℓ) = −σ(ℓ, i), we verify that Eq. (45) is zero. Adding this zero bivector to (42) and applying the Leibniz rule for the derivative gives It remains to prove that (47) is the divergence of a suitably defined tensor field. Let Mα = (x − α) T be the angularmomentum tensor field, where the product is defined in (16). The tensor field Mα is antisymmetric in the second and third components, as its basis elements are given by wi,I = ei ⊗ eI . Expanding the product (x − α) T with the definition in (16), the tensor field Mα is given by where we have split the summation over lists I ∈ J2 into two, the first one for the lists I of the form (j, j) and the second one for the lists of the form (j, ℓ), with j < ℓ. Splitting further the second summation into two, and renaming j and ℓ as ℓ and j, respectively, we obtain where we have combined in (52) the separate summations over j < ℓ and j > ℓ into one single summation over j = ℓ, and then in (53) combined this result with the first summand, expressed as a double summation over j and ℓ such that j = ℓ, into a triple summation over indices i, j, and ℓ. Computing the matrix derivative [7, Eq. (34)] of Mα, denoted by ∂ × Mα, we recover (47), that is Substituting this expression in (39) and the result back in (38), we find that the change of action is given by The invariance of the action to rotations, δSL em = 0, implies (40) and equivalently that ∂ × Mα = 0. In the presence of sources, the divergence ∂ × Mα can be seen as an angular-momentum density, and the volume integral of ∂ × Mα across an (k + n)-dimensional hypervolume V k+n gives the transfer of relativistic angular momentum from the field to the sources in the volume. In the next section, we characterize this transfer of angular momentum in terms of the flux of Mα, and provide an expression for the flux in terms of the normal modes of the field. 3 Flux of the Angular-Momentum Tensor: Spin and Orbital Angular Momentum of the Generalized Electromagnetic Field Integral Form of the Conservation Law and Angular-Momentum Flux The angular-momentum conservation law admits an integral form, which we derive next. First, the volume integral of the divergence ∂ × Mα over an (k + n)-dimensional hypervolume V k+n gives the transfer of angular momentum from the field to the sources. This volume integral is the flux of the divergence over where the flux integral is carried out with respect to the inverse Hodge [2, Eq. (10) As an example, and for some fixed x ℓ and ℓ ∈ I, consider the (k + n)-dimensional half space-time region The boundary of this region is a surface of constant space-time coordinate ℓ of value x ℓ , given by Let Ω ℓ α denote the flux of the tensor field Mα = (x − α) T across the boundary ∂V k+n ℓ . In this case, the Hodge-dual infinitesimal vector element in the r. h. s. of (58) is given by [ where the factor σ(ℓ, ℓ c ) arises from the orientation such that the normal vector e ℓ points outside the integration region. 
Using (53) in (58) and using (61), carrying out the matrix product, and rearranging the expression, yields An alternative, slightly more explicit, expression for (63) is the following Normal Modes of the Field Substituting in (62) the stress-energy-momentum tensor T by its expression in (13), the flux Ω ℓ α of the angular-momentum tensor a surface of constant space-time coordinate ℓ of value x ℓ is given by the integral The r. h. s. of (65) is computed w. r .t. x ℓ c , being ℓ c the set of indices excluding ℓ. We let xl = x − x ℓ e ℓ and similarly ξl = ξ − ξ ℓ e ℓ for the frequency vector defined below. We also let κ ℓ = − 1 2 ∆ ℓℓ σ(ℓ, ℓ c ). In the absence of charges, the free field F satisfies the homogeneous wave equation and can be expressed as a linear superposition of complex exponentials e j2πξ·x such that ξ · ξ = 0. Note that here j = √ −1; the context will make it clear whether j refers to a coordinate label or to the imaginary number. Denoting the coefficient of each complex exponential bŷ F, the Fourier transform of F, and with the definition d k+n = dξ0 · · · dξ k+n−1 , we have We resolve the Dirac delta by rewriting the condition ξ · ξ = 0 in terms of ξl as ∆ ℓℓ ξ 2 ℓ + ξl · ξl = 0. This equation has real solutions for ξ ℓ only if ∆ ℓℓ ξl · ξl ≤ 0, namely the two possible values ξ ℓ = ±χ ℓ , where χ ℓ is given by Let Ξ ℓ be the set of values of ξl for which ∆ ℓℓ ξl · ξl ≤ 0. We define the pair of frequency vectors ξl ,σ as for σ ∈ S = {+1, −1}, respectively, shortened to + and −. Using [16, p. 184], we can write the inverse Fourier transform (66) w. r. t. the integration variables ξ ℓ c , now with the appropriate constraints on the integration range so that χ ℓ exists, in various equivalent forms as where we have factored out a common factor e j2πξl·xl and defined the functionF ℓ (ξl) aŝ We may rewrite the flux Ω ℓ α in terms ofF ℓ by substituting (70) in (65) as Spin and Angular Momentum of the Generalized Electromagnetic Field In Appendix B.1 we carry out the rather tedious evaluation of this integral in terms of the transverse normal modes in the Coulomb-ℓ gauge. Under the assumption that the various field components commute, we obtain the following formula for the angular momentum as a sum of four components, cf. Eq. (151), namely the center-of-mass velocity N ℓ , the orbital angular momentum L ℓ , and the spin S ℓ , respectively, given by where Π ℓ is the energy-momentum flux across the region in (17) and contributes to the angular momentum with a term dependent of the origin of coordinates α. The product ⊙ could be replaced by in (76) with an overall change of sign, since the off-diagonal transposed components of both products coincide [1,Eq. (22)], and the diagonal components vanish in the Coulomb-ℓ gauge defined in (6)- (7). Using the various product definitions, the I-th component, where I = (i, j) ∈ I2 and ℓ / ∈ I, of the orbital angular momentum and spin are, respectively, given by and By construction, the subspace of vector potential components in (79) is restricted to those lists disjoint from I, with components different from ℓ (from the Coulomb-ℓ gauge condition in (6)), and orthogonal to ξl ,+ (from (7)). This leaves a total of k + n − 4 space-time indices, to be distributed in lists of r − 2 different elements. The dimension of this subspace is thus k+n−4 r−2 . This dimension might be related to the classification of distinct pairs of spin-1 particles linked to the direction ξl ,+ , a possibility to be studied elsewehere. 
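As a quick bookkeeping check on the binomial counting quoted above, the snippet below tabulates the dimension C(k+n−4, r−2) for a few illustrative (r, k, n) combinations; the chosen cases are arbitrary examples, not configurations analyzed in the paper.

```python
from math import comb

# Dimension of the subspace of admissible vector-potential components
# entering each spin component in the Coulomb-l gauge: C(k + n - 4, r - 2).
cases = [
    (2, 1, 3),  # standard electromagnetism
    (3, 1, 5),  # bivector potential, five spatial dimensions (illustrative)
    (4, 1, 6),  # illustrative higher-grade case
    (3, 2, 5),  # two temporal dimensions (illustrative)
]
for r, k, n in cases:
    print(f"r={r}, k={k}, n={n}: C({k + n - 4}, {r - 2}) = {comb(k + n - 4, r - 2)}")
```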
The feasibility of the separation of angular momentum into orbital and spin parts in a gauge-invariant manner, as well as its operational meaning, have long been subject to some level of discussion, particularly in a quantum context [13,14,15,17,18,19]. As stated earlier in the paper, quantum aspects lie beyond the scope of this work and we do not dwell further on this matter, apart from noting that our analysis is done in the Coulomb-ℓ gauge (or equivalently for the transverse normal modes of the field [12, Sec. BI]), the condition that has been found to be in best empirical agreement with observations for the standard electromagnetic field [15]. As a complement, we include in Appendix C a "canonical" derivation of the spin components extended to the generalized multivectorial electromagnetic field. Ignoring the quantum aspects, we have used as a basis Sections 12 and 16 of Wentzel's treatise on quantum field theory [20], one of the first book treatments of the subject. Our analysis bypasses the canonical tensor that Wentzel makes use of, so the appropriate adaptations have been made. As expected, the final formulas obtained with this extended analysis coincide with (76) and (79). Spin and Orbital Angular Momentum in a Complex-valued Circular Polarization Basis From the definition of the ⊙ product in (11), the I-th component S ℓ I of the spin bivector S ℓ in (76) is given by where I = (i, j). The component S ℓ I adopts a particularly transparent form in the complex-valued circular-polarization basis. For any ϕ, let the right-and left-handed basis elements, respectively denoted by e I + and e I − , be given by Note that the symbol j is used to represent both the imaginary unit and one of the components of I, a possible source of confusion in expressions as (81) The basis elements for ϕ = π 4 appears in the analysis of helicity and circular polarization [4,Problem 7.27]; for ϕ = 0, and apart from a factor −j, we recover the standard basis, i. e. linear polarization. When we substitute these expressions for ei and ej in (80) we have to take into account that the complex-conjugate operation acting on the potential also affects the basis elements. For the standard space-time basis, this observation is irrelevant since the basis elements are real-valued. However, the polarization vectors are complex-valued and we need to use e * i rather than ei in (82a). With this observation, the component S ℓ I is given by where we have grouped common terms under the assumption that the fields * (ξl ,+ ) andÂ(ξl ,+ ) commute, as it corresponds to a classical theory. For the choice ϕ = π/4, the basis elements satisfy e I + * · e I + = e I − * · e I − = 1 2 (∆ii + ∆jj ) and e I + * · e I − = 1 2 (∆jj − ∆ii), and the components S ℓ I adopt a particularly simple form, and similarly for ∂ ξ i and ∂ ξ j . Again, the symbol j doubly represents a coordinate label in the left-hand side and the imaginary unit in the right-hand side of (86b). We therefore can express the orbital angular momentum component L ℓ I in (77) in terms of the coefficients in the circular-polarization basis in (86) as a formula reminiscent of that of the spin for the standard electromagnetic field [4, Problem 7.27]. As we have seen throughout the previous pages, a large number of standard results in the analysis of angular momentum for free electromagnetic fields naturally extend to arbitrary number of space-time dimensions and multivector field grade. 
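As a sanity check on the circular-polarization relations used above, the following sketch numerically verifies the quoted dot products for the ϕ = π/4 basis with one common convention, e_± = (e_i ± j e_j)/√2; this convention (and hence the sign of the cross term) is an assumption here, since it depends on the exact definition adopted in (81)-(82).

```python
import numpy as np

def circular_products(delta_ii, delta_jj):
    """Dot products of the phi = pi/4 circular-polarization basis vectors.

    Built from two orthogonal real basis vectors with metric signs delta_ii
    and delta_jj, using the (assumed) convention
    e_plus = (e_i + 1j*e_j)/sqrt(2), e_minus = (e_i - 1j*e_j)/sqrt(2)
    and the metric dot product a.b = sum_m Delta_mm a_m b_m.
    """
    e_i = np.array([1.0, 0.0])
    e_j = np.array([0.0, 1.0])
    metric = np.array([delta_ii, delta_jj], dtype=float)

    def dot(a, b):
        return np.sum(metric * a * b)

    e_plus = (e_i + 1j * e_j) / np.sqrt(2)
    e_minus = (e_i - 1j * e_j) / np.sqrt(2)
    return (dot(np.conj(e_plus), e_plus),
            dot(np.conj(e_minus), e_minus),
            dot(np.conj(e_plus), e_minus))

# Two spatial indices: diagonal products (Dii+Djj)/2 = 1, cross term (Dii-Djj)/2 = 0.
print(circular_products(+1, +1))
# Mixed temporal/spatial pair: diagonal products vanish, cross term is +/-1
# depending on the sign convention chosen for e_plus and e_minus.
print(circular_products(-1, +1))
```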
This brief discussion on the orbital angular momentum and the spin of the generalized electromagnetic field and their relationship to complex-valued circular polarizations, for generic values of r, k, and n, concludes the paper. The remainder is devoted to appendices with details or proofs of several results mentioned earlier in the paper. A Proof of the Stokes Theorem In this appendix, we prove the following statement: the flux of a tensor field M, antisymmetric in the second and third components and with basis elements given by wi,I = ei ⊗ eI, across the boundary ∂V m of an m-dimensional hypersurface V m is equal to the flux of the divergence of M across V m for any m ≤ k + n, and in particular for m = k + n. This Stokes theorem thus gives We prove (89) thanks to the generalized Stokes theorem for differential forms [21, pp. 80], where ω is a differential form and dω is its exterior derivative, corresponding to the operator The procedure we follow is almost identical to what was done in [2, Sec. 3.4-3.5], and it starts by identifying ω with the integrand on the right-hand side of (89). Using (53) with M = Mα, we have having applied em × w j,ε(i,ℓ) = ∆mj e ε(i,ℓ) . We then let the exterior derivative in (91) act on (93) to obtain dω = m∈I i,j,ℓ∈I Since j c ∈ I k+n−1 , we can identify m with j and write dx ε(j,j c ) = dxj dxjc σ(j, j c ) to obtain In parallel, we identify dω in (90) with the integrand of the left-hand side of (89), which can be expanded as namely the same expression as (95), therefore proving (89). B.1 Computation of the Angular-Momentum Flux Writing x − α = (x ℓ − α ℓ )e ℓ + xl − αl, we may split the flux in (72) as a weighted sum, namely where the bivector-valued integrals I ℓ , Il ,+ , and Il ,− are, respectively, given by We next evaluate these integrals, starting with I ℓ . Interchanging the integration order of frequency and space-time, we evaluate the integral of e j2π(ξl+ξ ′ ℓ )·xl over space-time R k+n−1 as the (k + n − 1)-multidimensional Dirac delta. After integration over ξ ′ ℓ c to remove this Dirac delta, we directly obtain It will prove convenient to define an integral Im for m ∈ I, replacing e ℓ inside the parentheses in (101) by em, The integral Il ,− in (100) can now be evaluated in a similar way to I ℓ to obtain where It is given by (102) setting m = t. As for the integral Il ,+ in (99), we first rewrite the formula for Il ,+ by interchanging the integration order of frequency and space-time, using linearity and making some minor rearrangements and algebraic manipulations, as Under the usual assumptions that the fields vanish sufficiently fast at infinity, the space-time integral in (104) can be evaluated by integration by parts in terms of a derivative of the Dirac delta as where the vector-derivative operator ∂ ξl is given by Now, an extension of the proof in [16, p. 26] to our multidimensional bivector-valued integrals in (104) shows that the derivative of the Dirac delta can be evaluated as Using the definition in (106), we can express (108) as where the bivector-valued integrals It,+ are given by Finally, we evaluate the derivative ofF ℓ (ξl)/(2χ ℓ ) as Using this expression, we may therefore rewrite (110) as It,+ = It,1 + ∆ ℓℓ ∆ttIt,0, where It,1 and It,0 are, respectively, given by Substituting (101), (109), (112), and (103) back into (97), we obtain where Im, for m ∈ I is given in (102), and It,1 and It,0 are, respectively, given by (113) and (114). 
The three bivector-valued integrands in (102), (113) and (114) are of the form e ℓ × em B , for some index m and some symmetric rank-2 tensor B. As for some indices I = (i1, i2) ∈ J2 it holds that e ℓ × em uI = 0, only some components of the tensor B contribute to the integral. To determine which components of B contribute to the integral, we compute the double product e ℓ × em B with the definition of the product in (16), where we have used that I and its permutation I π must be such that i π 1 = ℓ, i. e. that I must be of the form ε(ℓ, j) for some j ∈ I. This observation fixes also the permutation I π = (ℓ, j). Besides, we can remove j = m from the summation as σ(m, m) = 0. For each m, we need thus consider only the components B ε(ℓ,j) , where j = m. Computation of Im. In (101), the tensor B mentioned in the previous paragraph is given by In Section B.2 we evaluate its components B ε(ℓ,j) needed in (118). Substituting (181) in (118) gives where βr is given by and with some abuse of notation, σ denotes in this equation both the signature of a permutation and a sign. Substituting (120) back in (101) gives Assume that j = ℓ, so that ξl ,σ,j = ξj, regardless of the value of σ. Then, splitting the integral in two, and making a change of variables ζl = −ξl in the integral with ξl ,− yields since −ζl − χ ℓ e ℓ = −ζl ,+ , and |Â(ζl ,+ )| 2 =Â(ζl ,+ ) * (ζl ,+ ) = * (−ζl ,+ )Â(−ζl ,+ ) = |Â(−ζl ,+ )| 2 thanks to the hermiticity ofÂ(ζl ,+ ). The second summand in the integral in (122) coincides with the first. If j = ℓ, then ξl ,σ,j = σχ ℓ , and the integral in (122) is given by Splitting the integral in two, and making a change of variables ζl = −ξl in the integral with ξl ,− shows that the second integral in (125) coincides with the first one, as it happened in (124). Substituting (124) and (125) back in (122) gives the final expression for Im, namely where we have used that e ε(m,j) = σ(m, j)em ∧ ej, that em ∧ em = 0, and the decomposition ξl ,+ = ξl + χ ℓ e ℓ . With the definition κ ℓ = − 1 2 ∆ ℓℓ σ(ℓ, ℓ c ), the bivector-valued integral Im can be expressed in terms of the energymomentum flux Π ℓ in (17) across the (k + n)-dimensional half space-time V k+n ℓ of fixed x ℓ , for ℓ ∈ {0, . . . , k + n − 1}, in (59) as Computation of It,0. In (114), m = t while the tensor B is again given by (119). Substituting the components B ε(ℓ,j) in (181) into (118) with m = t, and then back in (114) yields an analogous equation to (122), namely It proves convenient to split the integral in two and separate the cases j = ℓ and j = ℓ. In the first case, i. e. j = ℓ, noting first that ξl ,−,j = ξl ,+,j = ξj , making a change of variables ζl = −ξl in the integral with ξl ,− gives which cancels out with the integral ξl ,+ and the integral in (129) vanishes for j = ℓ. If j = ℓ, ξl ,−,ℓ = −ξl ,+,ℓ = −χ ℓ , unaffected by the change of variables ζl = −ξl. The integral with ξl ,− gives thus and the total integral in (129) vanishes too. Therefore, Computation of It,1. In (113), the index m is again m = t, while the tensor B is now given by Substituting the double product (118) in (113) gives In Section B.3 we evaluate the components B ε(ℓ,j) needed in (118), namely ℓ = j and ℓ = j. Substituting the expression of B ℓℓ in (211) into the first integral in (135), and expanding the sum over σ gives We split the integrand in (136) into three terms, respectively, indexed by 1, 2, and 3, each with consecutive pairs of summands labelled by a and b. 
In the integrand with label 1b, the change of variables ζl = −ξl has opposite sign to the contribution from 1a, so the first integral is zero. Then, each of the integrands with labels 3a and 3b is an odd function of the integration variable ξl, as can be verified with the change of variables ζl = −ξl. Indeed,Â(ξl ,+ ) transforms intô A * (ξl ,− ) and (resp.Â(ξl ,− )) into * (ξl ,+ )) and therefore the third integral is zero too. It only remains the second integral, which can be expressed with the usual change of variables ζl = −ξl in 2b as where cc denotes the complex conjugate. Proceeding in a similar manner, substituting the expression for B ε(ℓ,j) in (212) into the second integral in (135), and splitting the integrand into three terms, respectively, indexed by 1, 2, and 3, each with consecutive pairs of summands labelled by a and b, gives As before, we consider separately the integrands. If the quantitiesÂ(ξl ,+ ) and * (ξl ,+ ) commute, the change of variables ζl = −ξl in the integrand 1b gives the integrand 1a with an opposite sign; this sign cancels the minus sign in front. Besides, for commuting quantities, the second summand of 1a (and of 1b) is the complex conjugate of the first summand. The same change of variables ζl = −ξl shows that the integrand 2b (resp. 3b) is the complex conjugate of 2a (resp. 3a). Similarly, the same change of variables and commutativity assumption applied in the second integrand of 3b shows that the second summand coincides with the first one. We therefore have Note that the rank-2 tensors with components  * (ξl ,σ 1 ) ⊙Â(ξl ,σ 2 ) tj − cc are actually antisymmetric in the indices t and j and can thus be seen as a bivector component with element basis etj. Combining (137) and (139) back into (135) gives Let us denote by I 1 t,1 the first summand in (140); the second and third terms in (140) can be grouped into a single summation, denoted by I 2 t,1 , over j ∈ I \ t. We denote by I 3 t,1 the remaining summand in (140). As e ε(t,j) = −σ(t, j)ej ∧ et and et ∧ et = 0, the contribution of I 1 t,1 to the flux is given by where we have extended the summations to t = ℓ and j = ℓ since these added terms are zero in the Coulomb-ℓ gauge in (142) and noted that every ordered list of non-repeated index pairs appears twice in the summation in (143). Interpretinĝ A * (ξl ,+ ) ⊙Â(ξl ,+ ) − cc as a bivector, we obtain Proceeding in a similar manner, we can rewrite I 2 t,1 as where we have also used that et ∂ ξt * (ξl ,+ ) ·Â(ξl ,+ ) = et∂ ξt ⊗ * (ξl ,+ ) ×Â(ξl ,+ ), as can be seen by direct computation. Getting back to (115), and summing over t ∈ I \ ℓ, this first term of (140) contributes to the flux as It is straightforward to verify that this expression is indeed a bivector, regardless of the value of r. Analogously, the contribution of I 3 t,1 is given by Under the change of variable ζl = −ξl and the commutativity assumption, this equation becomes Renaming now t as j ′ and j as t ′ , this equation coincides with (148), except that σ(t, j) picks a minus sign. The integrand is thus an odd function of ξl and the integral in (148) is zero, and therefore Combining Im in (127) with (133), (147), (142), and (148) into (115) gives where Π ℓ is the energy-momentum flux across the region in (17). Evaluating (200) for i = j = ℓ, and noting that t = ℓ, gives Similarly, evaluating (200) for i = ℓ and j = ℓ, and noting that t = ℓ and j = t, gives where we have used the definition of the ⊙ product in (11) and the identity ei A = (−1) r A ei [1, Eq. (21)]. 
C Spin Components: "Canonical" Analysis For the standard electromagnetic field, the intrinsic angular momentum is defined only for the spatial components of the angular momentum bivector Ω ℓ α . For the sake of simplicity, let α = 0. For generic ℓ, we thus study the components Ω ℓ I , with I ∈ I2, that do not include ℓ, i. e. ℓ ∉ I. From (64), and writing I = (i, j), with i, j ≠ ℓ, we have to evaluate the following integral: The following analysis is inspired by Sections 12 and 16 of Wentzel's book [20], which describe how to obtain the spin components from the canonical stress-energy-momentum tensor. Our analysis, however, bypasses the canonical tensor, and the appropriate adaptations have been made. Using the expression of the non-diagonal components of the stress-energy-momentum tensor in (15) in the integrand in (213) gives
$$-x_i \sum_{L\in\mathcal{I}_{r-1}:\,\ell,j\notin L} \Delta_{LL}\,\sigma(L,\ell)\,\sigma(j,L)\,F_{\varepsilon(\ell,L)}\,F_{\varepsilon(j,L)} \;+\; x_j \sum_{L\in\mathcal{I}_{r-1}:\,\ell,i\notin L} \Delta_{LL}\,\sigma(L,\ell)\,\sigma(i,L)\,F_{\varepsilon(\ell,L)}\,F_{\varepsilon(i,L)}.$$
Temperature Programmed Oxygen Desorption and Sorption Processes on Pr2-xLaxNiO4+δ Nickelates Alexandra Usenka, Vladimir Pankov, Vaibhav Vibhu, Aurelien Flura, Jean-Claude Grenier, Jean-Marc Bassat The present work is devoted to the investigation of temperature programmed oxygen desorption-sorption (TPD) processes performed on innovative SOFC cathode materials with Pr2-xLaxNiO4+δ composition. The experiments were performed using the coulometric titration technique involving a solid electrolyte cell, over the temperature cycle 20-900-20 °C. The considered parameters were the time and temperature dependences of both the titration current and the oxygen over-stoichiometry. TPD spectra of Pr2-xLaxNiO4+δ (x ≠ 0) exhibited two sharp peaks (maxima) of titration current with excellent resolution at p(O2) = 50 Pa, induced by removal of oxygen from different sites of the crystal lattice. The TPD spectrum of Pr2NiO4+δ recorded under the same conditions drastically differs from the spectra of Pr2-xLaxNiO4+δ, as four sharp maxima of titration current are detected. The TPD spectra of Pr2-xLaxNiO4+δ thus strongly depend on the nature of the rare earth (La or Pr), in relation with the oxygen over-stoichiometry amount. Introduction Due to their high oxygen ion diffusivity and electronic conductivity at high temperature, the mixed ionic-electronic conducting (MIEC) lanthanide nickelates Ln2NiO4+δ with K2NiF4-type structure can be used as solid oxide fuel cell (SOFC) cathodes, separation membranes or oxygen sensors (1-6). Oxygen exchange with the atmosphere, i.e. desorption-sorption processes, obviously depends on the crystallographic structure as well as on the oxygen non-stoichiometry of the oxides. Temperature programmed desorption (TPD) followed by solid electrolyte coulometry/potentiometry gives valuable data on the oxygen exchange processes: the temperatures at which oxygen release (desorption), stoichiometry changes and structural transitions occur (7). Interestingly, it also allows estimating the nature of the desorbed oxygen species with respect to their mobility (8). To investigate oxygen TPD and ionic transport in MIEC oxides induced by a chemical potential gradient, it is convenient to use multifunctional solid electrolyte devices (7-13). The technique acts both as an independent and a supplementary technique to thermo-gravimetry, gas chromatography, X-ray and neutron diffraction in a wide range of temperature and oxygen partial pressure (8). The application of such devices for the investigation of various compounds has been reported by Vashook et al. (8). An early application was described in the work of Teske (7), where properties of perovskite-like Y-Ba-Cu-O cuprates were investigated. For example, the time dependence of oxygen exchange gave useful information about oxygen diffusion in the materials. Spectra of temperature programmed desorption and sorption (TPD) detected the oxygen variation with temperature.
According to the authors, TPD spectra of Y Ba Cu O could be associated with series of structural and conductivity transitions. Refs (9, 10) described the operating modes of the OSEC device that are required to build p T diagrams and to investigate the thermodynamic properties of mixed nonstoichiometric oxides. For instance, a tentative analysis of oxygen non-stoichiometry and electrical conductivity of the binary strontium cobalt oxide SrCoOx has been performed in ref. (11). The authors demonstrated strong evidence of correlation for maxima in TPD spectra (within the temperature range 500 and the relationship of a -disorder transition of the cubic high-temperature phase. Regarding nickelates with K2NiF4 type structure, oxygen non-stoichiometry combined with electrical conductivity measurements were performed on La SrxNiOy and Pr SrxNiOy (3,12,13). TPD spectra allowed concluding about the existence of weakly bonded oxygen, capable to reversibly exchange with the gas phase at temperature as low The present work is devoted to the investigation of oxygen TPD in Pr2-xLaxNiO4+ powders by coulometric solid electrolyte technique, i.e. The room temperature oxygen over-stoichiometry of the synthesized oxides was determined either by iodometric titration (6) or by thermo-gravimetry analysis (TGA) measurements performed under Ar/5%H2 flow with a SETARAM setup (MTB 10-8 balance). Both values (table I) are in very good agreement. Oxygen desorption-sorption processes with atmosphere were studied on Pr2-xLaxNiO4+ 900 required value of oxygen partial pressure set by coulometry dosing was 50 Pa. Argon was used as a carrier gas and air was used as a comparison gas. The considered parameters were the time and temperature dependences of both the titration current (I2 in Fig. 1) and / or the oxygen over-stoichiometry. To simplify further discussions, the dependence will be called TPD spectra (7), the spectra data concerning not only desorption but also oxygen sorption. coulometric regime. The measurement system consists of two identical solid electrolyte cells, each one using a tube made of 8 mol. % Y2O3 stabilized ZrO2 (8YSZ). Each cell has a pair of oxygen pumping (Ii, pump zone) and potential measurement (Ui, gauge zone) electrodes. Porous Pt partly covering areas of inner and outer walls of YSZ tube ensures the electrical contacts and serves as ele trode components. The cells allow stabilizing a steady-state gas flow with a given oxygen concentration by presetting the electrode voltage (Ui) at a known temperature. The voltage is controlled by feedback adjustment of the coulometric titration current (Ii) between the oxygenpumping electrodes. The experimental reactor and the cells are connected by stainless steel capillaries. If the reactor with a sample is placed between two such cells, the oxygen partial pressure can vary in a wide range (8). The temperature of the reactor can be selected independently of the cells temperature. Any oxygen partial pressure p(O2) in the in-flowing gas (Ar) can be modified in cell 1 by coulometric dosing with O2 to reach the required value p(O2)'. Cell 2 keeps the required value of oxygen partial pressure p(O2)'' after the flowing gas has just left the reactor by pumping out or pumping in O2. If no oxygen exchange between the flowing gas and the sample takes place in the reactor, no change in oxygen partial pressure (p(O2)''= p(O2) ') is observed and titration current I2 keeps constant value. 
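The oxygen partial pressure held by each YSZ cell is tied to its gauge voltage through the Nernst relation for a four-electron oxygen reaction. The short sketch below estimates the voltage needed to keep the 50 Pa working pressure against an air reference; the reference pressure, temperature and function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst_voltage(p_o2_inner_pa, p_o2_ref_pa=0.21e5, temp_c=700.0):
    """Gauge voltage of a YSZ oxygen cell (O2 + 4e- -> 2 O2-):
    U = (R*T / 4F) * ln(p_ref / p_inner)."""
    t_k = temp_c + 273.15
    return (R * t_k) / (4.0 * F) * np.log(p_o2_ref_pa / p_o2_inner_pa)

# Voltage needed to hold p(O2) = 50 Pa in the carrier gas against an air reference
print(f"{nernst_voltage(50.0):.3f} V")   # roughly 0.13 V at an assumed 700 degC cell temperature
```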
If oxygen desorption occurs, oxygen partial pressure rises in the gas flow of the reactor (p(O2) ''> p(O2) ') and cell 2 pumps out the O2 excess to the value p(O2 2 2 -out procedure is detected through the decrease of I2 (titration current) if U2 is kept constant. If the sample absorbs O2, oxygen partial pressure decreases in the gas flow of the reactor (p(O2) ''< p(O2) ') and cell 2 pumps in O2 till reaching the value p(O2)' = p(O2) ''. The pumping-in procedure is detected through the increase of I2 if U2 is kept constant. All oxygen exchange in the reactor should be accompanied by a deviation of the coulometric titration current (I2,t) from the (I2, base) value, if U2 is kept constant. The oxygen exchanged mass m (O2)) may be calculated according to the law: (1) Knowing the mass and chemical composition of the investigated sample, the TPD spectra can be readily built. Prior to the coulometric measurements, powders of Pr2-xLaxNiO4+ were flowed at following temperature cycle was then applied in order to build the TPD spectra: i) heating ii) isothermal plateau iii) cooling fterwards, each heating-cooling cycle performed for coulometric measurements was carried out in Ar atmosphere with p(O2) = 50 Pa at flow rate 48 52 ml/min. The duration times will be detailed in the following when necessary. TGA measurements were used to compare oxygen over-stoichiometry defined with Prior to further detailed consideration of the spectra, one should pay attention to the titration curves of the samples recorded under cooling conditions. At the beginning of the cooling, and from maximum temperature down to 290 Results and discussion , an oxygen sorption is observed. The heated samples are oxygen deficient while they exhibit an oxygen overstoichiometry at room temperature before heating. Due to presence of oxygen in the carriergas, they start to absorb O2 when cooling the powder, which can be seen from values of the titration current which are higher than the base current. Oxygen sorption stops at 290 , resulting in a quick current dropping to the base current value. Such a thermal behavior is the same for all the samples and will not be more discussed below. TPD of La2NiO4.16 Two main desorption peaks are visible on the TPD spectrum of La2NiO4.16, located around 240 is chosen to refer on it); one can conclude that under the considered experimental conditions, there are two regions of very intensive and continuous . The second maximum of the titration current cannot be accurately determined, because it is stopped when heating at 900 (12) also observed two desorption maxima in the TPD spectrum of La2NiO4.15 powder in argon flow with lower oxygen partial pressure (10 Pa) in the temperature cycle 20 1050 with those of ref. (12) leads to conclude that the real location of the second maximum takes place at a temperature highe The oxygen desorption occurs with an apparent constant rate during heating La2NiO4.16 in the range 400 i.e. between the two previous peaks. The absorption of O2 during cooling represents a diffuse maximum, i.e. a broad peak in the TPD spectrum (Fig. 2) Table II, one can see a difference between the initial and final over-stoichiometry of the powders. Because the material is prepared under air, but here cycled under lower pO2, the amount of desorbed oxygen while heating is larger than the amount of absorbed oxygen while cooling, as expected: for instance the initial oxygen over-stoichiometry of La2NiO is 0.16, the final one being 0.124. 
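The exchanged oxygen mass in equation (1) can be obtained, under standard coulometric assumptions, from Faraday's law: integrating the deviation of the titration current I2 from its base value (four electrons per O2 molecule) and converting the result into a change of the over-stoichiometry δ. The sketch below uses made-up current data and sample properties; the variable names and the example molar mass are assumptions, not values from the paper. The sign convention follows the text: during desorption the pumping-out of O2 lowers I2, so the computed change of δ is negative.

```python
import numpy as np

F_CONST = 96485.0   # Faraday constant, C/mol

def exchanged_o2_moles(time_s, i2_a, i2_base_a):
    """Trapezoidal integration of (I2 - I2,base); 4 electrons per O2 molecule.
    A negative result means oxygen was released by the sample."""
    delta_i = np.asarray(i2_a, float) - i2_base_a
    dt = np.diff(np.asarray(time_s, float))
    charge = np.sum(0.5 * (delta_i[1:] + delta_i[:-1]) * dt)   # coulombs
    return charge / (4.0 * F_CONST)                             # mol O2

def delta_change(time_s, i2_a, i2_base_a, sample_mass_g, molar_mass_g):
    """Change of the over-stoichiometry delta in Ln2NiO4+delta
    (two oxygen atoms per exchanged O2 molecule)."""
    n_o2 = exchanged_o2_moles(time_s, i2_a, i2_base_a)
    n_sample = sample_mass_g / molar_mass_g
    return 2.0 * n_o2 / n_sample

# Hypothetical desorption event: I2 dips below its 2 mA base line for about two minutes
t = np.linspace(0.0, 600.0, 601)                                  # s
i2 = 2.0e-3 - 0.4e-3 * np.exp(-((t - 300.0) / 60.0) ** 2)          # A
print(delta_change(t, i2, 2.0e-3, 0.5, 400.8))                    # negative: oxygen lost by the sample
```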
The influence of the oxygen content in the gas flow can be confirmed by the data reported in ref. (12) for the same material: after cycling under p(O2) = 10 Pa, the final oxygen over-stoichiometry is 0.06. Fig. 2, two sharper maxima of oxygen desorption (compared to those of LNO) are visible in the TPD spectra of Pr2-xLaxNiO (x 0). More especially the second one becomes sharper and sharper when the Pr amount increases. In addition, at increasing Pr content, the start of the first oxygen desorption as well as the first maximum itself (which corresponds to the maximum speed of oxygen release) are shifted to lower temperatures (Table III) compared to La2NiO4. 16. The same conclusion on the temperature range of the second oxygen desorption maximum can be drawn. It should be also noticed that the released oxygen amount during heating increases with the increase of Pr content in Pr2-xLaxNiO . This is clearly visible from the value of the over-stoichiometry and after heating (see and in Table II) and the maxima values of oxygen desorption (Fig. 2). The amount of released oxygen depends on the initial overstoichiometry of the oxides, which in its turn depends on the Pr content, provided the preparation conditions of Pr2-xLaxNiO are the same. It is also well-known that the oxygen over--earth cation: the smaller the radius of the rare-earth cation, the higher the oxygen content is. For La, the value is reported to be 0.16 0 0.29 (14 19). In the Pr2-xLaxNiO increases, as expected. Very interestingly, the TPD spectrum of Pr2NiO4.26 drastically differs from the previous spectra (x < 2). Four sharp maxima (peaks) of oxygen desorption are observed at the (labeled 1, 2, 3, 4 in Fig. 3). Two additional prominent peaks (compared to the spectra of PLNO and LNO) are located at 370 (labeled 2 and 3, Fig. 3). Besides the four mentioned maxima in the TPD spectrum of Pr2NiO4.26, our data (labeled 5 and 6, the existence and resolution were confirmed in experimental series with l TPD Spectra Comparison Our TPD spectrum is in rather good agreement with that reported for Pr2NiO by Sadykov et al. (20). Due to different oxygen partial pressure and inert atmosphere used in our experiments, the observed maxima logically shift to lower temperatures compared to ref. 20 According to the available data in literature (6,20), the peaks observed in the temperature range 210 located in Pr2O2 layers (NaCl-type structure) but without any link with the possible lowering of the mean praseodymium charge (which would be between 3+ and 4+), as supposed by Sadykov et al. (20). Indeed, the charge is only 3+ in Pr2NiO whatever the oxygen over-stoichiometry in air at T = 25 by XANES studies (21). Combination of experimental and modelling data concerning interstitialcy / interstitial diffusion mechanisms from (20, 22 25) allows the following suggestions: the peaks 1 3, 5, 6, depicted in Fig. 3 could be likely attributed to mobile i) and reflect different dynamic positions of Oi in crystal lattice under heating at T= ~ 400 , corresponding to the sharp narrow peak (2 in Fig. 3), is especially noteworthy. In our experiments, the TPD spectra constantly preserved this peak as compared to 1, 3 6 ones. This peak can be assigned to the orthorhombic-tetragonal phase transition (22, 25) in Pr2NiO . According to single crystal neutron diffraction investigations performed by M. Ceretti et al. (23) on Pr2NiO at 2O2 rock salt layer from an embedding matrix of NiO2 layers takes place in this temperature range. 
Thus, this peak induced by crystal lattice transformation can be found whatever the initial oxygen stoichiometry. In refs. (16,22) it has been confirmed that the initial non-stoichiometry value, typically exceeding 4.20, does not significantly affect the presence of this phase transition. Th 4, Fig. 3) is supposed to result from oxygen removal from the direct environment of the nickel cations. Comparison of TPD Spectra Summarizing the above-mentioned features concerned with non-monotonous change of oxygen desorption when heating at constant rate, one can correlate the nature of released oxygen with titration current peaks (maxima) in TPD spectra: 1) phase transformation in a material, 2) un-equivalent crystallographic positions of oxygen atoms in the crystal lattice of the compounds. In the case of the phase transformations, an abrupt change in bonding energy of oxygen with the lattice takes place and, as a consequence, an anomaly in the oxygen desorption rate can occur. In the second case, with the increase of temperature, distinct consecutive removal and/or migration of oxygen from interstitial and regular positions can proceed in accordance with the increase in bonding energy of oxygen in the crystal lattice (12). The sharp narrow peak matching the phase transition is observed only for PNO and not detected for PLNO phases. A hypothesis is that the amount of oxygen removed during the phase transition for PLNO phases is significantly lower than for PNO (cf. Table 2 and Fig. 2, 4) due to their lower ionic mobility in the attendance of lanthanum cations. Figure 4. Oxygen non-stoichiometry value vs. temperature (converted TPD spectra) of Pr2-xLaxNiO4+ nickelates in argon flow (pO2 = 50 Pa) and heating rate 6 /min. The second reason for the absence of the narrow peak in TPD spectra for Pr2-xLaxNiO4+ (x 0) could be the vicinity of temperature ranges for weakly-bounded oxygen desorption (first peak) and oxygen removal linked to the phase transition (25). Indeed, from Fig. 2 it can be seen that the location of the first peak shifts towards higher temperature with decreasing x (for lowest Pr contents, which again supposes a lower mobility of such oxygen in PLNO), and can then overlap the narrow peak of oxygen desorption occurring during the phase transition. To confirm the existence of the TDP peaks and their resolution for Pr2NiO and La2NiO , experiments were conducted As shown in the TPD of PNO (Fig. 5) all oxygen desorption maxima were either present in the temperature range under consideration for intensity and shape slightly changed (Fig. 5). Prolonged thermal treatment (depending on the heating rate) results in both cases in the shift of the peaks location to lower temperatures, as well as in a change of the shape of the last maximum ( Fig.5(a)). Most likely, the two sharp maxima in TPD for Pr2-xLaxNiO (x 0) might be associated to removal of oxygen from different crystallographic positions in these nickelates. Over-stoichiometric oxygen in La2NiO structure is located in interstitial position with the coordinates ( , , close to layers formed by NiO6 octahedra. These interstitial oxygen ions, apparently, are the most weakly bonded in the nickelate lattice (26). 
Our suggestion is that during the heating of the nickelates interstitial oxygen ions leave the compound lattice at lower temperatures and cause the occurrence of the first oxygen desorption peak in the range 240 stoichiometry content, a further departure of oxygen apparently occurs accounting for the removing of oxygen from its normal (regular) lattice sites. Regular oxygen causes the occurrence as shown in Fig. 2. When the second oxygen desorption peak occurs in Pr2-xLaxNiO (x 0), the over-0.090 (Fig. 4, Fig. 6 (a) (c)). This over-stoichiometric content of oxygen is possibly so strongly linked to the nickelate lattice that it does not result in desorption of oxygen at heating the powders up to temperatures 650 1, Fig. 6). In this temperature range, oxygen is more easily removed from the regular lattice sites. (c) Figure 6. TPD spectra for Pr2-xLaxNiO4+ (x = 0.5, 1.0, 1.5) nickelates, including in the temperature cycle 20 900 in Ar flow (pO2 = 50 Pa) and heating-/min. Conclusions TPD spectra of Pr2-xLaxNiO (x 0) exhibit two sharp peaks (maxima) of titration current with excellent resolution at p(O2) = 50 Pa that are connected with removal of oxygen from different sites of the crystal lattice. The maxima observed during heating of Pr2-xLaxNiO powders at temperatures 200 predominantly result from weakly bonded over-stoichiometric oxygen, i.e. interstitial oxygen in the crystal lattice. The statement might be supported by the fact that, when increasing the Pr amount, whose cationic radius is smaller than that of La, the oxygen over-stoichiometry of the nickelates (synthesized in the same conditions) increases and the oxygen desorption shifts to lower temperature compared to La2NiO . TPD spectrum of Pr2NiO recorded under the same conditions drastically differs from the spectra of Pr2-xLaxNiO with lower Pr contents: three sharp and two local maxima of titration current are located in the range of temperature 210 prominent first maximum in a relatively low temperature range is supposed to have the same nature as for PLNO and might be associated with the removal of interstitial sites with the coordinates , close to layers formed by NiO6 octahedra. The second (narrow) maximum of the TPD at 370 is most probably attributed to the orthorhombic-tetragonal phase transition in Pr2NiO . The peaks observed in the temperature range 400 65 different (nonequilibrium) migration positions of mobile oxygen from crystal lattice under heating. The maxima of TPD spectra observed at temperature above 650 considered nickelate are apparently connected with the removal of oxygen from NiO6 octahedra building perovskite layers. It has been shown that the removal of oxygen from perovskite layers occurs when the lattice maintains some amount of oxygen over-0.09 for Pr2-xLaxNiO 0.14 for Pr2NiO ).
4,408.6
2019-07-10T00:00:00.000
[ "Materials Science" ]
Bioactive Compounds in the Ethanol Extract of Marine Sponge Stylissa carteri Demonstrates Potential Anti-Cancer Activity in Breast Cancer Cells Objective: Despite the advanced treatment options available, drug resistance develops in breast cancer (BC) patients, requiring novel effective drugs. Stylissa carteri, a marine sponge predominantly living in Indonesian territories, has not been extensively studied as an anti-cancer agent. Therefore, this study aimed to assess the anti-tumor activity of the ethanol extract of S. carteri in BC cells. Methods: S. carteri was collected from Pramuka Island, at Kepulauan Seribu National Park, Jakarta, Indonesia, and extracted using ethanol. Different BC cells, including MDA MB 231, MDA MB 468, SKBR3, HCC-1954 and MCF-7 cells, were treated with this extract for cytotoxic analysis using the MTT assay. A spheroid growth assay and an apoptosis assay were conducted in HCC-1954 cells. In addition, cell migration analysis and analysis of synergistic activity with doxorubicin or paclitaxel were conducted in MDA MB 231 cells. The extract was also subjected to GC-MS analysis. Results: The results show that the ethanol extract of S. carteri demonstrated cytotoxic activity in BC cells. The IC50 of this extract was lower than 15 μg/ml in MDA MB 231, MDA MB 468, SKBR3, and HCC-1954 cells. Moreover, this extract inhibited spheroid growth and induced apoptosis in HCC-1954 cells. It inhibited cell migration and demonstrated synergistic activity with doxorubicin or paclitaxel in triggering cell death in MDA MB 231 cells. Furthermore, GC-MS analysis indicated that this extract contained 1,2-Benzenediol, Dibutyl phthalate and 9,12-Octadecadienoic acid, ethyl ester. Conclusion: Our preliminary data indicate a potential anti-tumor activity of the ethanol extract of S. carteri in breast cancer cells. Introduction Breast cancer (BC) is the most frequently diagnosed cancer in women worldwide and a main cause of cancer-related deaths in women (Torre et al., 2015). The high prevalence and mortality of BC, along with the weaknesses of existing management and prevention, raise the urgency and need for the discovery of novel drugs (Aungsumart et al., 2007; Torre et al., 2015). Although advanced BC treatment modalities are available, patients with advanced-stage BC develop resistance to current therapeutic options. Chemotherapy resistance leads to cancer progression and metastasis, and thereby remains one of the greatest challenges in cancer management. A compound isolated from a marine sponge demonstrates a powerful anti-tumor activity compared to paclitaxel in prostate cancer cells (Guzmán et al., 2016). The above data indicate that sponges hold great potential for novel anti-cancer drug discovery. Indonesia, as an archipelago country, possesses a biodiversity potential that is incompletely developed for novel cancer treatment. Here we evaluate S. carteri for its anti-tumor activities. S. carteri is widespread in the Indo-Pacific region, including the Red Sea, Australian waters and many Indonesian territories (Erpenbeck et al., 2017). To date, studies subjecting this species to anti-tumor evaluation are very limited. Nevertheless, a recent study shows that an isolated compound from S. carteri induces cervical cancer HeLa cell death (Dewi, 2017). This study aimed to identify the anti-tumor activities of the ethanol extract of marine sponge S. carteri in different BC cells. Chemicals and reagents RPMI 1640 Extraction of Stylissa carteri Marine sponge S.
carteri was taken by SCUBA diving from different sites at 10 meters depth in Pramuka Island, which constitutes the Kepulauan Seribu Marine National Park located in the north of Jakarta, Indonesia ( Figure 1). Species were visually identified in the field, and confirmed at Department of Marine Science and Technology, Faculty of Fisheries and Marine Sciences, Bogor Agricultural University. Samples were cut into small size and then extracted using maceration technic in ethanol according to previous study (Hardani et al., 2018). Cell culture and conditions The triple negative ( SKBR3 cells were cultured using DMEM supplemented with non-essential amino acid, 10% heat-inactivated FBS, 1% penicillin/streptomycin and 2 mM L-glutamine in a regular cell culture incubator that contained 21% O 2 , 5% CO 2 , 37°C. All the other cell lines were cultured using RPMI 1640 medium supplemented with 10% heat-inactivated FBS, 1%. All experiments were conducted triplicate and from 3 different experiments at Laboratory of cell culture and cytogenetic, Faculty of Medicine, Universitas Padjadjaran, Indonesia. Cytotoxicity assay To evaluate cytotoxic activity of the Et(OH) extract of S. carteri in BC cells, we used MTT assay, as previously described . Cell lines were seeded on 98-well plate a day before untreated or treated with the Et(OH) extract as indicated concentrations then incubated for 72 hours. At the last day, MTT solution was added 4 hours before stopped by DMSO. Samples were read at 550 nm with a plate reader (Thermo Scientific® Multiscan EX, Singapore). Spheroid formation assay Single multicellular BC spheroids were generated according previous study . Briefly, a number of 9,000 HCC-1954 cells were seeded on agarose-coated (Sigma Aldrich, Steinheim, Germany) 96-well plates followed by 4 days incubation for initiation of spheroid formation. Spheroids were then treated or untreated with Et(OH) extract of Stylissa carteri. Spheroids were captured with the microscope using the camera connected with a computer and Toupview Software (version x64, 3.7.7892) using 40x magnifications. Images were analyzed using ImageJ software to have spheroid radius. Volumes of the spheroids were calculated (V = 4/3 πr 3 ). Apoptosis assay In order to identify apoptosis induced cell death triggered by the Et(OH) extract of S. carteri, we used Dead Cell Apoptosis Kit with Annexin V FITC and PI (Invitrogen, cat.no. V13242). After HCC-1954 cells were growth on 6-well plate, cells were treated and un-treated with the Et(OH) extract of S. carteri for 48 hours. Cells were harvested followed by stained with Annexin V/PI according to manufacture protocol. Cell suspension were then placed on an object glass followed by captured using Olympus fluorescence microscope BX51 using the camera connected with a computer and Toupview Software (version x64, 3.7.7892) using 100x magnifications. Images were stacked using ImageJ software. Combination index Cells were treated with different concentration of Et(OH) extract of S. carteri alone or in combination with doxorubicin, or paclitaxel followed by the cytotoxic assay. Combination index was analyzed with Compusyn software based on Chou Talalay method (Chou, 2010). Migration Assay To assess the anti-cell migration of Et(OH) extract of Stylissa carteri, we were conducted scratch/wound healing assay in MDA MB 231 cells according to previous study . 
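The Compusyn-based combination index described above follows the Chou–Talalay median-effect method; the sketch below reproduces its essential arithmetic under the usual assumption of mutually exclusive drug effects, with hypothetical dose-response values standing in for the measured ones. CI < 1 indicates synergy, CI = 1 additivity and CI > 1 antagonism.

```python
import numpy as np

def fit_median_effect(doses, fraction_affected):
    """Linearised median-effect equation: log(fa/fu) = m*log(D) - m*log(Dm).
    Returns the slope m and the median-effect dose Dm."""
    doses = np.asarray(doses, float)
    fa = np.clip(np.asarray(fraction_affected, float), 1e-6, 1 - 1e-6)
    y = np.log10(fa / (1 - fa))
    x = np.log10(doses)
    m, intercept = np.polyfit(x, y, 1)
    dm = 10 ** (-intercept / m)
    return m, dm

def dose_for_effect(m, dm, fa):
    """Single-agent dose needed to reach fraction affected fa."""
    return dm * (fa / (1 - fa)) ** (1 / m)

def combination_index(d1, d2, fa_combo, fit1, fit2):
    """Chou-Talalay CI for a combination (d1, d2) that produced effect fa_combo."""
    dx1 = dose_for_effect(*fit1, fa_combo)
    dx2 = dose_for_effect(*fit2, fa_combo)
    return d1 / dx1 + d2 / dx2

# Hypothetical single-agent data (extract in ug/ml, paclitaxel in nM; fractions of cells killed)
extract_fit = fit_median_effect([1, 2, 5, 10, 20], [0.10, 0.22, 0.45, 0.63, 0.80])
pactx_fit   = fit_median_effect([0.5, 1, 2, 4, 8], [0.08, 0.18, 0.35, 0.55, 0.75])

# A combination of 2 ug/ml extract + 1 nM paclitaxel that killed 50% of the cells
print(combination_index(2.0, 1.0, 0.50, extract_fit, pactx_fit))   # < 1 suggests synergy
```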
After gaps were created, cells were S.carteri Shows Anti-Breast Cancer Cells including the TNBC cells, MDA MD 231 and MDA MB 468; HER2+ cells, SKBR3, and HCC-1954 as well as ER+ BC cells, MCF-7. Our data revealed that the Et(OH) extract of S. carteri induced cell death in all BC cells lines in dose dependent manner (Figure 2A-E). The IC 50 of the Et(OH) extract of S. carteri were less than 90 μg/ml in all tasted BC cell lines ( Figure 2F). Interestingly, the IC 50 of the Et(OH) extract of S. carteri were lower in the aggressive BC subtype cells, TNBC and HER2+ than in ER+ BC cells ( Figure 2F). The Et(OH) extract of S. carteri inhibits spheroid growth and induces apoptosis in HCC-1954 cells Next, we evaluate effects of the Et(OH) extract of S. carteri in BC cells using 3-dimentional culture system. Spheroids of HCC-1954 were treated or untreated with the Et(OH) extract of S. carteri. Data showed that the Et(OH) extract of S. carteri induced declining of BC spheroids volumes along with incubation periods ( Figure 3A-C). Importantly, the Et(OH) extract of S. carteri induced apoptosis in HCC-1954 cells ( Figure 3D-G). The Et(OH) extract of S. carteri inhibits TNBC cell migration in dose dependent manner Migration of cancer cells is one of the key aspects of cancer metastasis. Therefore, we then wonder whether the Et(OH) extract of S. carteri able to inhibit BC cell migration. Utilizing a basic cell migration assay, wound healing assay, the aggressive TNBC cells, MDA MB 231 cells were treated with low concentration of Et(OH) extract of S. carteri. In control group, the gap in MDA MB 231 cells was closed while in the treated groups, the gaps were still opened. Both concentrations of Et(OH) extract of S. carteri significantly inhibited MDA MB 231 cells migration (p<0.05) (Figure 4). The Et(OH) extract of S. carteri induces synergistic cell death with conventional chemotherapy agents in TNBC cells Heretofore our data showed a promising anti-tumor activity of the Et(OH) extract of S. carteri by inducing cell death and inhibiting cell migration. Considering that cancer cells were activating multiple pathways for resisting from cell death, it is important to target cancer cell with multiple agents. Here we evaluated the combination effect of Et(OH) extract of S. carteri with paclitaxel or with doxorubicin as two of main BC chemotherapy regiment. treated or untreated with Et(OH) extract of S. carteri in complete medium then placed in incubator. The 0th and 24th hour of treatment were captured under the microscope which connected with a computer and Toupview Software (version x64, 3.7.7892) and saved as TIFF. The gap area was measured using MRI Wound Healing Tool macro for ImageJ software (NIH) (http://dev.mri.cnrs.fr/projects/ imagejmacros/wiki/Wound_Healing_Tool). GC/MS analysis The GC/MS analyses were carried out on Shimadzu single quadrupole GCMS-QP2010 Ultra gas chromatograph-mass spectrometer according to previous study (Vetvicka and Vetvickova, 2016). Briefly, the GC was equipped with a 30 m x 0.25 mm RP-5 non-polar column (Shimadzu) with 0.25 l m film thicknesses. The MS was run in the electron impact ionization mode with an ionizing energy of 70 eV, scanning from m/z 1 to 2,000 at 0.3scan/sec. The ion source temperature was 300°C, and the quadrupole temperature was 280°C while the electron multiplier voltage was maintained at 0.8 kV. The chromatographic conditions were identical to those used for gas chromatography analysis. 
Helium was used as the carrier gas, the flow through the column was 1 mL/min, and the split ratio was set to 400:1. The column was maintained at 40°C for 10 min, increased to 180°C at a rate of 2.5°C/min, and finally maintained for 20 min. The injection volume of the sample was 0.2 µL. For the identification of the compounds, retention times and retention indices were confirmed against the NIST11 Mass Spectral Library database. Statistical analysis A four-parameter logistic model in SigmaPlot for Windows ver. 12 (Systat Software Inc.) was used to generate dose-response curves and to analyze IC50 values. ANOVA with the post-hoc Holm-Sidak method was used to determine the statistical significance of differences observed in treated versus control cultures. Differences were considered significant at p < 0.05. The Et(OH) extract of S. carteri has cytotoxic activity in BC cell lines Cytotoxic effects of the Et(OH) extract of S. carteri were evaluated using the MTT assay in different BC cell lines. Importantly, these combinations trigger a synergistic effect on cell death of TNBC cells (Figure 5). GC-MS data of the Et(OH) extract of S. carteri The compounds detected in the Et(OH) extract of S. carteri were identified by GC-MS analysis (Figure 6). Based on the chromatogram, the prominent compounds in the Et(OH) extract of S. carteri were 1,2-Benzenediol, Dibutyl phthalate and 9,12-Octadecadienoic acid, ethyl ester (Table 1).
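The IC50 values above come from a four-parameter logistic fit in SigmaPlot; a minimal SciPy-based equivalent is sketched below, assuming illustrative viability data rather than the measured absorbances, and with all parameter names chosen for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (decreasing with dose)."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical viability data (% of untreated control) from an MTT assay
dose = np.array([1, 3, 10, 30, 100], float)          # ug/ml
viability = np.array([95, 80, 48, 20, 8], float)     # %

p0 = [viability.min(), viability.max(), 10.0, 1.0]   # initial guesses
params, _ = curve_fit(four_pl, dose, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"IC50 ~ {ic50:.1f} ug/ml, Hill slope {hill:.2f}")
```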
Sponges contain different secondary metabolites that are proposed to have anti-tumor effects. Stylisin has been isolated from genus Stylissa and demonstrated antioxidant properties as well as cytotoxic effects to cancer cells (Sima and Vetvicka, 2011). In addition, previous study shows that debromohymenialdisine (DBH), hymenialdisine (HD), and oroidin, which isolated from S. carteri inhibits Human Immunodeficiency Virus 1 (HIV-1) (O'Rourke et al., 2016). Based on photochemistry analysis which was conducted by Central Lab Universitas Padjadjaran with register no. S-420/LS-AK.143/2017, our Et(OH) extract of S. carteri contains flavonoid, triterpenoid and steroid. Moreover, using GC-MS analysis our extract showed some bioactive compounds. According to its chromatogram, this extract contained 1,2-Benzenediol, Dibutyl phthalate, 9,12-Octadecadienoic acid, ethyl ester and many other compounds (Table 1). Previous studies have been shown their effects on cancer cells. Dibutyl phthalate is considered as carcinogenic agent. This compound induced cell proliferation and invasiveness in breast cancer cells (Hsieh et al., 2012). This compound also induces apoptosis in neural cells (Wójtowicz et al., 2017). Moreover, 9,12-Octadecadienoic acid, fatty acid, was also identified from other marine sponge, Scopalina ruetzleri, which collected from South Brazilian coastline. Ethyl acetate fraction of S. ruetzleri induces cell death of human glioma cells and neuroblastoma cells (Biegelmeyer et al., 2015). Having a promising data of the Et(OH) extract of S. carteri in BC cells, further study is conducting to advance analyze the molecular mechanism of this extract inhibiting cell survival, cell migration, as well as cell proliferation. We also want to further isolate the active compounds from this extract. In conclusion, our data indicate a potential anti-cancer activity of ethanol extract of S. carteri in breast cancer cells. This research is also expected to become a consideration in escalating acquaintance of marine sponge from Indonesia. Funding Statement This OCEAN project is supported by Competence Research Grant from Universitas Padjadjaran for MHB (no.2476/UN6.C/LT/2018) and a research grant (3670/UN6.C/LT/2018) by the Ministry of Research, Technology, and Higher Education of the Republic of Indonesia. Statement conflict of Interest No potential conflict of interest was reported by the authors.
3,487.4
2019-04-01T00:00:00.000
[ "Medicine", "Biology" ]
Unsteady Helical Flows of a Size-Dependent Couple-Stress Fluid The helical flows of couple-stress fluids in a straight circular cylinder are studied in the framework of the newly developed, fully determinate linear couple-stress theory. The fluid flow is generated by the helical motion of the cylinder with time-dependent velocity. Also, the couple-stress vector is given on the cylindrical surface and the nonslip condition is considered. Using the integral transform method, analytical solutions to the axial velocity, azimuthal velocity, nonsymmetric force-stress tensor, and couple-stress vector are obtained. The obtained solutions incorporate the characteristic material length scale, which is essential to understand the fluid behavior at microscales. If the characteristic length of the couple-stress fluid is zero, the results for the classical fluid are recovered. The influence of the scale parameter on the fluid velocity, axial flow rate, force-stress tensor, and couple-stress vector is analyzed by numerical calculation and graphical illustration. It is found that small values of the scale parameter have a significant influence on the flow parameters. Introduction The existence of couple stresses in continuum mechanics is a consequence of the discrete character of matter at the finest scale. Also, the noncentral property of forces between elementary particles of matter leads to the appearance of couple stresses [1]. E. Cosserat and F. Cosserat [2] were the first to have incorporated couple stresses, by considering oriented material point triads and independent microrotation in the theory of continuum solids. Toupin [3], Mindlin and Tiersten [4], and Koiter [5] have developed theories with couple stresses, in which the rigid body motion of the infinitesimal element of matter at each point of the continuum is described by six degrees of freedom. Based on these theories with couple stresses, Stokes [6] has developed the theory of couple-stress fluids in order to study the size-dependent behavior of flows. Stokes' theory represents the simplest generalization of the classical theory of fluids which considers the presence of couple stresses. All the above models have various inconsistencies. In the context of Stokes' theory, the indeterminacy of the spherical part of the couple-stress tensor is a major shortcoming. Another problem appears in the theory of Mindlin and Tiersten; namely, in their theory, the constitutive relation for the force-stress tensor contains the body couples. The indeterminate spherical part of the couple-stress tensor is ignored in Stokes' theory, without justification. Also, the presence of body couples in the constitutive relations was neglected by Stokes in his theory. Hadjesfandiari et al. [1,7,8] have solved these issues by using arguments based upon the energy equation along with kinematical considerations. They developed a fully consistent couple-stress theory for solids and fluids, which establishes the couple-stress tensor and the mean curvature rate tensor as skew-symmetric energy-conjugate measures. Their results could be useful for examining fluid flow problems influenced by the mechanics at small scales. Significant application areas for size-dependent fluid mechanics involve the modeling of blood flows, lubrication problems, liquid crystals, and polymeric suspensions. This branch of fluid mechanics has attracted a growing interest from researchers in the field.
Advances in Mathematical Physics Bakhti and Azrar [9] studied the steady flow of a couplestress fluid through constricted tapered artery under influence of a transverse magnetic field, moving catheter, and slip velocity.Solutions to velocity and shear stress are expressed with Bessel's functions.Pralhad and Schultz [10] used the couple-stress fluid model for the study of the steady flow of blood through stenosed artery.Blood velocity, the resistance to flow, and shear stress distribution have been obtained.Verma et al. [11] have studied the blood flow in a stenosed tube considering the blood as a couple-stress fluid.Effects of slip velocity in stenosed tube were highlighted.Shenoy and Pai [12] have made the static analysis of a misaligned externally adjustable fluid-film bearing including turbulence and couple-stress effects in lubricants blended with polymer additives.Naduvinamani and Patil [13] obtained a numerical solution to finite modified Reynolds equation for couplestress squeeze film lubrication of porous journal bearings. The unsteady three-dimensional flow of couple-stress fluid over a stretched surface with mass transfer and chemical reaction was investigated by Hayat et al. [14].Devakar and Iyengar [15] investigated the generalized Stokes' problems for incompressible couple-stress fluids.The effects of kinetic helicity (velocity-vorticity correlation) on turbulent momentum transport were investigated by Yokoi et al. [16,17].Other interesting topics can be found in references [18][19][20][21]. We must mention that the models considered in the above articles are based on the theory developed by Mindlin, Tiersten, and Stokes. In the present paper we consider the consistent theory of couple-stress fluids elaborated by Hadjesfandiari and his coworkers [1,8] and, we study the helical flows of couplestress fluids within a straight circular cylinder under general boundary conditions.In the studied problem, the fluid motion is generated by the helical motion of the cylindrical surface with the time-dependent axial and azimuthal velocities and by the time-dependent couple-stress vector on the cylinder surface.The nonslip conditions are, also, considered.Using suitable nondimensional variables, we determine analytical solutions to the axial and azimuthal velocities by means of the integral transform method.The axial angular velocity and the flow rate are also obtained.Components of the nonsymmetric force-stress tensor and the couple-stress vector are determined from the constitutive relations and velocity field.Obviously, if the characteristic length of couple-stress fluids is zero, we recover results for the classical fluid.The obtained analytical solutions are used in order to perform numerical calculations using the Mathcad software for particular external loadings.The results are graphically presented.It is found that all flow parameters are influenced by the fluid scale parameter.The significant influence is obtained for small values of the scale parameter. 
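As a compact reference for the boundary-value problem formulated in the next section, the governing equations of an incompressible couple-stress fluid (body forces and body couples neglected) can be written in the form below, where μ is the dynamic viscosity, η the couple-stress viscosity and l the characteristic material length. This is a sketch of the generic equations of the theory, not a transcription of the paper's numbered equations.

```latex
% Sketch: incompressible couple-stress flow equations and the material length scale
\nabla \cdot \mathbf{v} = 0, \qquad
\rho \frac{D\mathbf{v}}{Dt} = -\nabla p + \mu \nabla^{2}\mathbf{v}
    - \eta \nabla^{2}\!\left( \nabla^{2}\mathbf{v} \right), \qquad
l = \sqrt{\eta / \mu}
```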
Statement of the Problem We consider a homogeneous, incompressible, viscous couplestress fluid flowing inside a straight circular cylinder of radius .The cylindrical coordinate system (, , ) has the -axis identical with the cylinder axis.The governing equations of the couple-stress fluid are as follows [1,8]: (i) Continuity equation: where is the fluid velocity.(ii) Equation of linear momentum (the body forces are neglected): where is the fluid density, is the dynamic viscosity, is the viscosity coefficient of couple-stress fluid, and is the thermodynamic pressure. (iii) The constitutive equations: The nonsymmetric force-stress tensor: where represent the strain rate tensor and the angular velocity tensor, respectively.The polar couple-stress vector: In this paper, we consider that the velocity field and pressure are functions of the form Equation ( 1) is identically satisfied and ( 2)-( 6) become We consider the following initial-boundary conditions: Functions 1 (), 2 (), 1 (), and 2 () are piecewise continuous functions on [0, ]; for every > 0, they have exponential order at infinity and We define the characteristic material length which is absent in classical fluid mechanics but is fundamental for couple-stress fluids. Introducing the nondimensional variables into ( 8)-( 10) and dropping the star notation, we obtain the following nondimensional problem: = 0, (, 0) = 0, (0, ) = 0, In the end of this section, we give two lemmas regarding some properties to the operators from ( 16) and (17).These properties will be used in order to find solutions of the above problem. The demonstration of relations ( 26)-( 29) is made easy using integration by parts and properties of Bessel functions [22,23].It is noted that V () is zero-order finite Hankel transform of function V(, ) and () is the finite Hankel transform of the first order of function (, ), respectively. Solution of the Problem In order to find the solution of problem ( 15)-( 25), we use the Laplace transform with respect to the variable time and finite Hankel transform with respect the radial coordinate [24,25]. Velocity Field. Applying the Laplace and finite Hankel transforms to ( 16) and ( 17), using the initial and boundary conditions ( 22)-( 25) and Lemmas 1 and 2, we obtain the following transformed equations: where , and 2 () are the Laplace transforms of functions (), V (), 1 (), 2 (), 1 (), and 2 (), respectively.Equations (30) can be written in the suitable forms Now, using the integrals and applying the inverse Hankel and Laplace transforms, we obtain closed forms to azimuthal velocity and axial velocities as In the above relations we have used the notation ḟ () = ()/. The axial angular velocity (the spin vector) is given by and the flow rate is (37) Force-Stress Tensor and Couple-Stress Vector.Replacing (35) in ( 18)-( 21) and performing calculations, we get the following expressions for the force-stresses and couple-stress vector: It is easy to see that, for = 0 (the ordinary Newtonian fluid), the force-stress tensor becomes a symmetric tensor and the couple-stress vector is zero. Particular Case (Constant Velocity and Couple Stress on the Boundary). 
Let us consider the following boundary conditions: where 1 ≥ 0, 2 ≥ 0, 1 ≥ 0, and 2 ≥ 0 are constants and () = (1/2)sign()(1 + sign()) is the Heaviside unit step function.In this case, the derivatives of functions given by (39) are () being the Dirac distribution.Functions () and () are written in simpler forms, as Numerical Results and Discussion Unsteady helical flows in the consistent theory of couplestress fluids were considered, under general boundary conditions.By using suitable nondimensional variables, the governing flow equations are obtained in the dimensionless form.It is important to note that these equations contain as parameter the dimensionless scale flow parameter , defined as the square of the rate between the characteristic material length and the radius of circular cylinder.As a result from the studied particular problems, the scale parameter has a significant influence on the fluid behavior.Obviously, if the scale parameter equals zero, results corresponding to the Newtonian fluid are obtained.Solutions for fluid velocity, nonsymmetric force-stress tensor, and couple-stress vector were obtained using integral transforms method (Laplace transform with respect to the time variable and finite Hankel transform with respect to the radial coordinate).The axial angular velocity and the flow rate are also determined.The obtained solutions contain in their expressions the positive roots of the Bessel functions 0 () and 1 (), denoted by and , roots which were generated by means of Mathcad subroutine "root((), , , )."In the numerical simulations, we have used ∈ [500, 1000], values for which the numerical approximation accuracy is very good. In our study, the azimuthal velocity, axial velocity, and the components of the couple-stress vector are given on the cylinder surface as arbitrary functions of the time ; therefore, the obtained solutions can generate solutions to various problems with practical applications. The numerical results for Figures 1-3 and for Table 1 were generated under conditions 1 () = 2 () = 1 () = 2 () = () = (1/2)sign()(1 + sign()).As is apparent from Figure 2 and values of Table 1, the fluid azimutal velocity is almost steady and the axial velocity has a small time-variation, for > 1.These properties are due to the exponential terms in (34) which tend fast to zero, because, (For > 1 and ≥ 0 the exponential terms in (34) are negligible).The influence of the scale parameter on the flow rate () in axial direction is presented in Figure 3. Obviously, the significant variation of the flow rate is for small values of the time .It is important to note that, for large values of the scale parameter or for large values of the time , the flow rate in the axial direction becomes constant. The influence of the scale parameter on the components , , , and of the force-stress tensor and on the components and of the couple-stress vector is analyzed in Figures 4 and 5. The significant influence occurs for small values of the scale parameter.For these values the shear stresses and and the couple-stress component have an extreme value and tend to approach zero for large values of the scale parameter. Figures 4 and 5 Figure 1 : Figure 1: Profiles of azimuthal velocity (, ) and axial velocity V(, ) for small time and different values of scale parameter . 
Table 1: The influence of time values on fluid velocity components. Figure 1 shows profiles of both the azimuthal and axial velocities versus the radial coordinate for different values of the scale parameter and for three small values of the time. The influence of the scale parameter on the rotational velocity is significant only for very small values of the time. It is observed from Figure 1(a) that the azimuthal velocity increases with the scale parameter. The rotational velocity of the couple-stress fluid is bigger than the velocity of the classical fluid, except in the case of very small values of the time. In this particular case, there are values of the scale parameter for which the couple-stress fluid flows more slowly than the Newtonian fluid in the central area of the flow domain. The influence of the scale parameter on the axial velocity is more significant than on the azimuthal velocity. It is seen from Figure 1(b) that the axial velocity increases with the scale parameter.
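The series solutions are summed over the positive roots of the Bessel functions J0 and J1, which the authors generate with Mathcad's root routine; an equivalent SciPy-based sketch is shown below. The number of roots matches the 500–1000 range quoted above, while the series term itself is a generic illustrative example, not the paper's closed-form solution.

```python
import numpy as np
from scipy.special import jn_zeros, j0, j1

def hankel_series(r, t, n_roots=1000, alpha=0.1):
    """Illustrative evaluation of a finite-Hankel-type series on 0 <= r <= 1.
    lam holds the positive zeros of J0; the term structure is a generic
    decaying example, not the exact solution of the paper."""
    lam = jn_zeros(0, n_roots)                       # first n_roots positive zeros of J0
    terms = (j0(np.outer(r, lam)) *
             np.exp(-(1.0 + alpha * lam**2) * lam**2 * t) /
             (lam * j1(lam)))
    return terms.sum(axis=1)

r = np.linspace(0.0, 1.0, 101)
profile = hankel_series(r, t=0.05)
print(profile[:5])
```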
2,963.8
2017-02-08T00:00:00.000
[ "Engineering", "Physics" ]
Mixed alkali effect on borosilicate glass structure with vanadyl ion as spin probe Mixed alkali borosilicate glasses doped with 2% vanadium oxide were synthesized by conventional melt-quench method. The Micro structure of the materials was analysed using XRD, SEM and EDX spectrum. Optical nature of glasses was studied by computing optical bandgaps with absorption spectral data. Covalent nature of glasses was estimated correlating optical bandgap energy with EPR spectral data. The metallization criterion of glasses was evaluated from the physical parameters. FTIR and Raman spectral techniques were used to identify the structural groups of the borates and silicates that are formed with addition of modifiers. Emission spectra were recorded at excitation wavelength of 350 nm and their thermal nature was also studied using TGA/DTA techniques. Introduction The number of NBO (non-bridging oxygen) and thus the glass materials properties are comfortably modified by the addition of alkali oxides or by doping transition metals. Complex structure in borosilicate glass influences the local environment around alkali ions [1]. So a structural study is necessary to elucidate the properties of binary alkali borosilicate glass. The structure of any material depends on the coordination number and bond nature. Transition metals occupy interstitial sites, so they are good probes to find the structure of the glass. Transition metals have multiple potential oxidation states which directly influence the electrochemical properties of materials. Vanadium ions (Atomic number -23) exist in four oxidation states divalent, trivalent, tetravalent and pentavalent. They act as magnificent probe ions due to their large radial distribution of d -orbital electron wave function, which affects the spin-orbit coupling parameters [2]. Hence, materials contain Vanadium oxide exhibits fascinating electrochemical, photochemical, catalytical and spectroscopic properties [3,4]. It also shows excellent broad band luminescent properties in near IR. So Vanadium has many applications in photonics such as smart glass etc [5,6]. Vanadium doped glasses are also capable to transform UV into visible light [7]. The dependence of conductive properties on temperature of vanadium materials is used to design excellent compact electronic data storage mechanism and also to construct nanobots. Electron hopping between vanadate ions increases the number of NBO's and so electrical conductivity of material is increased [8]. It is also used to recharge the storage batteries at grid levels due to it's various oxidation states. Vanadium ions on the structure of borosilicate glasses has got a lot of significance because of their distinct applications in luminescence, conductivity, high absorption, change in activation energy and nuclear waste storage and elastic properties [9][10][11][12]. From the literature studies it is obvious to say that there may be a number of spectroscopic studies available on vanadium dopant glasses, but very limited studies are available for binary alkali borosilicate glass [13][14][15]. Hence in this present work we have attempted to investigate in detail, how vanadium ions manipulate the spectroscopic properties of mixed alkali Borosilicate glass by systematic variation of the alkali oxides Na 2 O/K 2 O ratio. Experimental preparation of glass samples Analar grade chemicals (Loba) were precisely weighted proportional to their mol% to prepare 15 gm of glass sample. 
The details of the glass composition are shown in table 1.The chemicals were ground and the obtained fine powder was taken in a Silica crucible and melted at 1040°C-1060°C for an hour in a muffle furnace. After getting bubble free melt it was stirred and was poured on a flat brass plate at room temperature. Now these quenched glasses are immediately kept in a furnace at 450°C for 3 h to avoid cracks. Now these annealed glasses were optically polished for characterisation. Characterization of glass samples The disorder in structure of glass was reassured by XRD using an XpertPRO analytical x-ray diffractometer with Cu Kα radiation. The surface transparency was observed with scanning electron microscope pictures (FE -SEM; ZEISS of resolution 0.8 nm) .Elements in glass composition were identified by dispersive x-ray spectroscopy (EDX). With Perkin Elmer lambda 950 UV-vis -NIR spectrophotometer characteristic light absorption spectra of glasses were recorded in wavelength range 200-1300 nm. FTIR and Raman spectral techniques identifies the structural groups of borate and silicates formed with the addition of modifiers. For Detection of paramagnetic species, EPR spectra were recorded at room temperature in X-band frequencies (9 GHz) with a field modulation of 100 KHz using EPR spectrometer (JEOL: JES-FA200). The photoluminescence spectra of glass samples were recorded using JY Fluorolog-3-11 flourimeter. Results and discussions 3.1. Physical properties Physical parameters of the glass like molecular weight of composition (M), molar volume (V m ), vanadyl ion concentration (N i ), mean Vanadyl ion separation (r i ), Polaron radius (r p ), field strength are evaluated using conventional formulae [16,17]. The values obtained are presented in table 2. By the replacement of Na + ions with K + ions molar volume decreases from V 5 to V 25 . It was observed that with the reduction in molar volume the number of transition metal ions per unit volume (TM ion concentration, N i ) increases. Further, the reduction in molar volume reduces theoretically, the average separation between successive transition metal ions (r i ) and polaron radius. So, the Field strength between transition metal ions from V 5 to V 25 glass increases. Refractive index at band edge, molar refraction, polarizabilty and Metallization criterion were computed using the optical bandgap, which are included in table 2. These parameters are estimated with standard formulae [18,19]. It is found that Rm Vm value is a little bit larger for V 10 glass than other studied glasses. This shows V 10 glass has less covalent nature or more metallic nature when compared to the other glasses studied. Micro structure analysis (SEM, EDX & XRD) In this section SEM, EDX for V 20 glass and XRD for all glasses studied are enclosed. These are given in figures 1-3 respectively. The scanning electron microscope pictures of the undoped and vanadyl ion doped glasses are shown in figure 1. The image clearly shows the texture of glass changing due to the doped vanadyl ions. Similar observations are drawn for remaining glasses too. The elements present in the glass composition are identified by EDX spectrum and their atomic % and weight % data is shown graphically in figure 2. Similar like observations are made with remaining glasses too. The x-ray diffraction pattern of studied glasses are shown in figure 3.The x-ray diffraction of vanadyl doped glasses have no sharp peaks only a diffused band is observed. 
This is a feature of long range disorder which ensures the non-crystalline nature of the studied material. 3.3. Thermal stability of the glass Thermo Gravimetric Analysis (TGA) and Differential thermal analysis (DTA) were recorded for the V 10 glass given in figure 4. It shows the endothermic peaks of glass transition temperature (Tg), melting temperature (T m ) and exothermic peak of crystallization temperatures (T c1 , T c2 ). Mass degradation with temperature of a material is estimated from TGA curve. Beyond 563°C mass loss was more. The glass transition temperature (T g ), the first crystalline temperature (T c1 ) and the second crystalline temperature (T c2 ) for V 10 glass were 495°C, 605°C and 713°C respectively from the DTA curve. Elastic nature of glass depends on the super cooled temperature DT=T c1 − T g . For the glass V 10 , the super cooled temperature is 110°C. The high DT of the studied glasses indicates their good ductile nature, so they could be used as good sealants [20]. Optical absorption study Optical absorption spectra were used to get information about bandgaps and defect levels. UV absorption spectrum of semiconductor materials is associated with the transitions of energy greater than bandgap and Visible-IR spectra collects the data related to lesser energy transitions. So we recorded absorption spectra in between 200 nm-1200 nm transitions for all the glasses. In Optical absorption spectra of all investigating glasses, two bands at wavelengths around 600 nm and 1020 nm were observed and were assigned to the transitions 2 B 2g → 2 B 1g , 2 B 2g → 2 E g,, (figure 5) the same characteristics transitions of V 4+ were reported in earlier studies by a number of researchers [21][22][23][24][25][26]. The ligand field is not symmetrical about the central transition metal cation (V 4+ ) since for E g orbitals lobes are exactly in the approaching direction of ligands so they experiences more columbic force than the T 2g orbitals. So, the distance of ligands along z-axis is slightly different compared with the other ligands in this octahedron structure and was thus distorted tetragonally. The ground state energy level of the vanadyl complexes is the 3d 1 electron in d xy -orbital which is denoted as 2 B 2g . Energy level diagram of 3d 1 -electron is shown in figure 6 with the reported transitions in the studied glasses. Optical band gap The applications of materials in optoelectronics are decided by their energy gap (Eg). Optical absorption coefficient 'α' depends on joint density of valence, conduction and impure energy states. In direct bandgap transitions, charge carriers energy and momentum both are conserved. These allowed vertical transitions are represented by a linear region of Tauc plot obeying the below relation (1). The indirect bandgap oblique collisions are associated with phonons for the conservation of momentum. In our studied glasses E indirect <E direct indicates less energy is enough for phonon assisted oblique transition than vertical allowed transition. Here α depends on absorption coefficients of both photon and phonon followed by the relation (2). Table 3. It is observed that V 10 glass has less bandgap and more Urbach energy. Emission spectra The fluorescence spectra of all vanadyl doped glasses are shown in figure 7, at the excitation wavelength 350 nm. A broad Emission band was observed ranging from 420 to 650 nm with a maximum around 540 nm. Hence these glasses could be used as sensors to detect ultraviolet radiation yielding green light. 
EPR study
The EPR spectra, plotted as the first derivative of the absorption versus the magnetic field (mT), were recorded at X-band frequency (9.32 GHz) at room temperature and are shown in figure 8. In the EPR spectra, 8 perpendicular components and 6 parallel components are clearly seen; the other 2 parallel components are masked by the strong perpendicular components. The parameters calculated from the EPR spectra are shown in table 4. The anisotropic environment around the metal d electron affects the spin-Hamiltonian parameters. For the studied glasses the spin-Hamiltonian parameters vary as g_∥ < g_⊥ < g_e (g_e = 2.0023) and A_∥ > A_⊥. This trend in the spin-Hamiltonian parameters suggests that the interstitial site of the V⁴⁺ ion in the studied glasses is a tetragonally compressed octahedron with V⁴⁺ as the central metal ion. This tetragonal structure has C_4v symmetry and the ground state is the d_xy orbital [23-26, 29, 30]. Ligands aligned along the z-direction experience a stronger Coulombic force than the other ligands; this leads to a shrinking of the V=O bond length along the z-direction, i.e. a tetragonal compression. In the present glasses the tetragonality measure varies slightly with the alkali concentration ratio. A larger z-in distortion results in stronger localization of the orbitals on the vanadium atom. The tetragonality measure is largest for the V10 glass, implying that this glass is the least covalent in nature; the metallization criterion parameter of the V10 glass in table 3 indicates the same.

The Fermi contact term κ and the dipolar interaction parameter P are used to assess the extent of distortion. The dipolar interaction was calculated theoretically as P = g_e g_N β_e β_N ⟨r⁻³⟩, and it was also computed from the EPR and optical absorption experimental data; here Δ_∥ is the electron transition energy from the d_xy orbital to the d_(x²−y²) orbital, corresponding to the ²B_2g → ²B_1g transition. In VO(H₂O)₅²⁺ the bond between vanadium and the hydroxyl ions is purely ionic, and the value of P is 160 × 10⁻⁴ cm⁻¹. The lower values of P for the studied glasses than for the purely ionic vanadium complex indicate a decrease in the dipolar interaction between the d electron and the nucleus, which may be due to an increase in the separation between the d_xy electron and the nuclear magnetic dipoles. Both P and the in-plane π-bonding wave-function coefficient β² are lowest for the V10 glass. The smallest β² value implies that the d_xy orbital is less localised on the vanadium atom in the V10 glass than in the other glasses of this series, which supports the smallest value of P being found for the V10 glass.

The isotropic parameters g_iso and A_iso are calculated from the corresponding anisotropic values, and the Fermi contact parameter κ is given by expression (6). Not only the d electron but also paired s electrons contribute to the hyperfine interaction with the nucleus; the Fermi contact term κ measures the share of the hyperfine splitting arising from the s orbitals. The larger value of κ = 0.73 for the V10 glass than for the other glasses of the studied series indicates a stronger interaction of the vanadium s electrons with the nucleus and a smaller contribution of the unpaired 3d_xy electron to the hyperfine interaction; hence, as κ decreases, the value of P increases [25]. The hyperfine coupling parameters A_∥ and A_⊥ and the parameters g_∥ and g_⊥ are related to the molecular-orbital coefficient β² as shown in equations (7) and (8) [31]. Here the −Pκ term is due to the resultant contribution of all the paired s electrons of the vanadium ion.
A_∥ and A_⊥ correspond to the interaction of the magnetic moment of the 3d_xy electron with the vanadium nucleus, and g_e (= 2.0023) is the free-electron g value. Using P and κ in the above equations, the covalency ratio β² is calculated for the V=O bonds. β² is the fraction of the d_xy (ground state) orbital contribution in the corresponding non-bonding function; β² = 1 when the orbital is purely non-bonding, as in pure VO(H₂O)₅²⁺. For the studied glasses β² ≈ 1, which indicates that the in-plane π-bonding is nearly ionic and the orbital is localized mostly on the vanadium atom and only a little on the ligands; a decrease of β² from unity indicates admixture of ligand orbitals. The molecular-orbital bonding coefficients α² and γ² are evaluated by correlating the EPR spectral data with the optical band gaps using the mathematical relations (9) and (10) [31]. Values of (1 − α²) < 0.5 indicate that the in-plane σ-bonding orbital is more localized on the ligands than on the vanadium atom, while the (1 − γ²) values indicate that the out-of-plane π-bonding orbital is localized mostly on the vanadium atom and partly on the ligands. The z-in distortion in the present glasses is mainly due to the out-of-plane π-bonds between the 3d_xz, 3d_yz orbitals and the ligands, shown in figure 9(a). The ionic nature of the glasses estimated from the physical parameters in table 2 also supports this (figure 9(b)). The in-plane σ-bonds and the out-of-plane π-bonds between vanadium and its ligands are moderately covalent [31].
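The expressions referred to above as (6)-(10) are not reproduced in the text. For VO²⁺ centres in glasses they conventionally take the following standard forms (quoted here as an assumption about the intended equations, following the usual analysis of vanadyl spin-Hamiltonian data; λ is the spin-orbit coupling constant of V⁴⁺ and Δ_∥, Δ_⊥ are the ²B_2g → ²B_1g and ²B_2g → ²E_g transition energies):

\begin{align}
  g_{\mathrm{iso}} &= \tfrac{1}{3}\bigl(g_{\parallel} + 2g_{\perp}\bigr), \qquad
  A_{\mathrm{iso}} = \tfrac{1}{3}\bigl(A_{\parallel} + 2A_{\perp}\bigr),\\
  A_{\parallel} &= -P\Bigl[\kappa + \tfrac{4}{7}\beta^{2} + \Delta g_{\parallel} + \tfrac{3}{7}\Delta g_{\perp}\Bigr],
  \qquad \Delta g_{\parallel,\perp} = g_{e} - g_{\parallel,\perp},\\
  A_{\perp} &= -P\Bigl[\kappa - \tfrac{2}{7}\beta^{2} + \tfrac{11}{14}\Delta g_{\perp}\Bigr],\\
  g_{\parallel} &= g_{e}\Bigl[1 - \tfrac{4\lambda\,\alpha^{2}\beta^{2}}{\Delta_{\parallel}}\Bigr],
  \qquad
  g_{\perp} = g_{e}\Bigl[1 - \tfrac{\lambda\,\gamma^{2}\beta^{2}}{\Delta_{\perp}}\Bigr].
\end{align}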
3,525.2
2020-01-06T00:00:00.000
[ "Chemistry", "Materials Science" ]
The Free Material Design problem for the stationary heat equation on low dimensional structures
For a given balanced distribution of heat sources and sinks, $Q$, we find an optimal conductivity tensor field, $\hat C$, minimizing the thermal compliance. We present $\hat C$ in a rather explicit form in terms of the datum. Our solution is in a cone of non-negative tensor-valued finite Borel measures. We present a series of examples with explicit solutions. MSC: 49J20; secondary: 49K20, 80M50.

Background and an informal statement of results
The paper concerns the problem of optimum design of local, spatially varying, anisotropic thermal properties of structural elements. The aim is to maximize the overall heat conductivity for given thermal conditions. All the components of the conductivity tensor C are viewed as design variables, while the trace of C is assumed as the unit cost. The optimal distribution of conductivity components is induced by the given distribution of the heat sources within the design domain Ω and by the given heat flux applied on its boundary. Since the heat sources are prescribed as measures, possibly singular with respect to the Lebesgue measure, the optimal conductivity is expected to be a tensor-valued measure. The objective function is the so-called thermal compliance. By performing minimization of the objective function thus chosen, we arrive at an optimum design setting capable of shaping the best material structure by cutting off the redundant part of Ω as well as delivering the optimal distribution of the conductivity tensor field C in the remaining material part of the same domain. Our main result is Theorem 3.1, which states that for a given distribution of heat sources Q, which is a slightly more general object than a measure, there exists an optimal conductivity tensor Ĉ, explicitly given in terms of the data. At each point of the support of the measure Ĉ, the tensor A = dĈ/d|Ĉ| has rank one; here |Ĉ| denotes the variation of Ĉ. As a byproduct we obtain an explicit form of the minimizer of the energy E(Ĉ, ·) associated with Ĉ, see (1.4).
The same objective function as in the present paper has been chosen in the books [10] and [1], where a similar problem has been dealt with, yet concerning the optimal layout of two given (hence homogeneous) isotropic materials within the design domain, the cost being the volume of one of the materials. In this setting, the correct formulation requires relaxation by homogenization. The numerical algorithm to find the optimal conductivity can be constructed directly using the analytical formulae of the relaxation setting (as has already been done in [16]) or by utilizing some material interpolation schemes, see [12] and [15]. In the present paper the problem of the optimum distribution of a heat conducting material is considered, the cost being not the volume of a material (this volume is here unknown) but directly expressed by the conductivity tensor, as the integral of its trace. Our problem now is not finding an optimal layout of two distinct materials, but the optimal distribution of nonhomogeneous and anisotropic conductivity properties within the design domain, admitting cutting off part of it. The latter cutting-off property is linked with admitting positive semi-definiteness of the conductivity tensor field to be constructed. The problem thus formulated is the scalar counterpart of the free material design problem (FMD) of creating elastic structures of minimal compliance, subject to a single load.
The FMD method has been proposed by Bendsøe et al. [2]; its development has been described in [19], [11] and [4]. The mass optimization by Bouchitté and Buttazzo [5] has delivered the mathematical tools for the measure-theoretic setting of the FMD, cf. [4] and in particular [3]. Our advantage over the papers mentioned above, which mainly deal with vectorial problems, is relative simplicity. In this way we may gain deeper insight into the problem.

Statement of the problem and the results
Throughout the paper we assume that Ω ⊂ R^N is a bounded, open set with a Lipschitz continuous boundary. Let us note that we do not assume convexity of Ω, since due to the results of [8] (which will be discussed later) such a hypothesis is not needed. We first present the classical setting of our problem. Later we will relax it and we will show existence of solutions to the relaxed problem. In the classical setting, for a given conductivity tensor A ∈ L^∞(Ω, Sym⁺(R^N)), heat sources Q̄ and flux q at the boundary, we consider a stationary heat conduction problem (1.1) in Ω, where n̄ is the outer normal to Ω. Under natural assumptions on the data, which will be discussed later, the weak form of (1.1) is the Euler-Lagrange equation of the energy functional E_A(u), in which γu is the trace of u. Then, we define the thermal compliance J in terms of û, a solution to the minimization of E_A over a suitable function space containing D(R^N). We immediately notice that the Euler-Lagrange equation for E_A, i.e. the weak form of (1.1), yields an equivalent expression for J; the advantage of this expression is that it does not require the existence of û. We may ask about the dependence of the thermal compliance on the conductivity tensor A, writing J = J(A), and we can seek the optimal A among all nonnegative-definite-matrix-valued fields bounded by ∫_Ω tr A dx ≤ Λ_0 (note that tr A is equivalent to the usual operator norm of A when A is a nonnegative-definite matrix). So, we consider the corresponding optimal value, denoted Y. At this point observe that this definition of the cost functional Y should be treated very carefully. If we have a minimizing sequence {A_n}, then even if at each x ∈ Ω the matrix A_n(x) is positive definite, any limit may be only semi-definite. In addition, we have no mechanism preventing concentration phenomena. These problems, and also the desire to better reflect real-life phenomena, lead us to recast the problem using measure-theoretic tools. To this end, instead of a matrix-valued function A, we consider a matrix-valued, bounded Radon measure C supported in Ω. The polar decomposition of C reads C = A|C|, with |C| being the total variation of the tensor measure C, and A being a measurable, matrix-valued function such that |A(x)| = 1 for every x ∈ Ω. Moreover, we consider a measure Q, with the natural constraint ∫_Ω dQ = 0, which guarantees solvability of the Neumann problem. Then, the energy functional E(C, u), replacing E_A(u), takes the form given in (1.4). A comment on the solvability of (1.2) in this setting is in order. In the case where |C| belongs to the class of so-called multi-junction measures, Q is sufficiently regular and zero boundary data q are imposed, the existence of a solution to the minimization problem (1.2) for E(C, u) in place of E_A(u) was established in [20]. Extension of this result to non-zero q seems possible. Observe that for the problem we study, one does not need to know û in advance. However, as a byproduct of our reasoning we also obtain an existence result under much more general assumptions than those of [20], see Proposition 3.3 in Section 3.
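The displayed formulas for E_A and the thermal compliance are not reproduced in the text above. Under the standard conventions for this class of problems they plausibly read as follows; this is a reconstruction offered as an assumption, not the paper's verbatim formulas:

\begin{align*}
  E_A(u) &= \int_{\Omega} \tfrac{1}{2}\, A\nabla u\cdot\nabla u \,dx
            \;-\; \int_{\Omega} u \,d\bar Q \;-\; \int_{\partial\Omega} \gamma u \, q \, d\mathcal{H}^{N-1},\\
  J(A)   &= -\,2\min_{u} E_A(u)
          \;=\; \int_{\Omega} \hat u \,d\bar Q \;+\; \int_{\partial\Omega} \gamma\hat u\, q\, d\mathcal{H}^{N-1},
\end{align*}

where the second equality follows from testing the Euler-Lagrange equation with the minimizer itself.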
We may define the relaxed thermal compliance functional J(C) as in (1.5). The ultimate definition of the cost functional Y then becomes, for a fixed Λ_0 > 0, the value given in (1.6). Our main result, Theorem 3.1, consists of proving the existence of an optimal C as well as a precise description of the solution. We illustrate the construction process by a series of examples, including one based on Brothers' benchmark (see Example 4.2). Let us comment briefly on the proof of Theorem 3.1. We use the minimax procedure (see Proposition 2.1) to show that the order of inf and sup may be exchanged. In principle, one always has sup_u inf_C E(C, u) ≤ inf_C sup_u E(C, u) by an elementary argument; the point is that equality holds. The dual problem is much easier to work with. In our context, we consider Q to lie in a suitable subspace of (Lip_0(Ω)/R)*, the dual of Lipschitz functions modulo constants. Due to the results of [8] one can represent Q equivalently as −div p, where p is a vector-valued measure with a certain regularity property: p is decomposed as p = σ|p|, where σ(x) lies in a so-called tangent space to the measure |p| for |p|-a.e. x. The so-called Kantorovich norm of Q, denoted ‖Q‖_1, is defined in (3.1). Combining the arguments of [8] with the minimax procedure allows us to construct an optimal Lipschitz function û for which ‖Q‖_1 is achieved and an optimal p̂ for which ‖Q‖_1 = |p̂|(Ω). Then we proceed to show that an optimal Ĉ can be constructed precisely: its support is contained in the set {x : |∇û(x)| = ‖∇û‖_∞} and it takes values proportional to ∇û ⊗ ∇û. Interestingly, p̂ equals the heat flux corresponding to the optimal Ĉ. For the precise statements we refer to Section 3. Once we have established the existence of an optimal tensor field Ĉ, we address the question of solvability of (1.2) for this specific choice of the conductivity tensor. This is related to our previous work [20]; the result is presented in Proposition 3.3. The whole of Section 4 is devoted to presenting examples.

Notation
We use the standard notation Sym⁺(R^N) for the space of symmetric N × N nonnegative definite matrices. Moreover, the following function spaces will be of constant use in this note: Lip(Ω), the 1-Lipschitz functions Lip_1(Ω), and Lip_0(Ω) = Lip(Ω)/R;
- M(Ω; R^d) (resp. M⁺(Ω)) - the space of R^d-valued (resp. positive) Borel measures compactly supported in Ω; we abbreviate the notation to M(Ω) when d = 1;
- the space of symmetric-matrix-valued bounded Radon measures compactly supported in Ω;
- M_b(Ω; Sym⁺(R^N)) - the cone of nonnegative definite, symmetric-matrix-valued, bounded Radon measures supported in Ω;
- M_0(Ω) - the space of signed Radon measures ν supported in Ω such that ∫ dν = 0; it is endowed with the Kantorovich norm.
Throughout the paper we shall also use the following notation: if Q ∈ M(Ω) and u ∈ D(Ω) or u ∈ Lip_1(Ω), then ⟨Q, u⟩ denotes the standard action of Q on u. If Q ∈ M_{0,1}(Ω) and p ∈ M(Ω, R^N) is such that div p + Q = 0, then for u ∈ Lip_1(Ω) the notation ⟨p, ∇u⟩ shall be understood in the sense described by Proposition 2.3, i.e., formula (2.2). The dot · denotes the scalar product of vectors in R^N.

Auxiliary results
We draw upon two results which are crucial for this paper, hence we state them in full. The first one is a general version of a minimax theorem. The other one is the characterization of M_{0,1}(Ω).

Minimax theorem
Proposition 2.1 (cf. [21]). Let X be a compact convex subset of a topological vector space, let Y be a convex set, and let L be a real function on the product set X × Y satisfying suitable convexity, concavity and semicontinuity assumptions; then the minimax equality holds.

Here we briefly present the results of [8], which provide key tools for our reasoning. We start with the notion of a tangent space to a measure and of a tangent (vector) measure.
The former definition has been discussed in many contexts, see, e.g., [6, 7, 8]. Let K ⊂ R^N be a compact set. In this section we use the notation C^∞(K); since we assume the boundary of Ω to be Lipschitz, it should be understood as the space of smooth functions restricted to the set K (see [14] for a detailed discussion of all possible definitions). Let µ be a nonnegative Borel measure compactly supported in K. We introduce the set N of µ-negligible gradient directions (vector fields in L^∞_µ(K; R^N) obtained as weak* limits of gradients of smooth functions tending uniformly to zero); here, by writing v_n → ξ in σ(L^∞_µ, L^1_µ) we mean convergence of v_n in the weak* topology. The orthogonal complement of N in L^1_µ(K; R^N) is a closed subspace of L^1_µ(K; R^N). The tangent space T_µ to the measure µ is then defined by the local characterization provided below.

Proposition 2.2 (see [8], Prop. 3.2). The following hold:
1. There exists a µ-measurable multifunction T_µ from K to the subspaces of R^N providing this local characterization, where P_µ(x) denotes the orthogonal projection on T_µ(x).
2. The tangential gradient operator can be extended in a unique way to a linear, continuous operator from Lip(K) to L^∞_µ(K; R^N), where Lip(K) is equipped with the uniform convergence on bounded subsets of Lip(K), and L^∞_µ(K; R^N) with the weak* topology.

We can then introduce the space of tangent vector measures, see (2.1). An important ingredient of our reasoning is the following characterization of the space M_{0,1}(Ω). In this case, writing λ = σµ as in (2.1), we have for every u ∈ Lip(Ω) the integration-by-parts formula (2.2). Moreover, the stated equality holds between subsets of D′(Ω). Furthermore, for any f ∈ M_{0,1}(Ω), there exists λ ∈ M_T(R^N, R^N) such that f = −div λ.

Remark 2.1. We stress that there are many ways to represent a measure as div σ, due to the non-trivial kernel of the operator div.

Main results
Let Q ∈ M_{0,1}(Ω) be fixed. We denote its Kantorovich norm by ‖Q‖_1, see (3.1). The thermal compliance functional J is defined by (1.5). Its optimal value Y (for a fixed Λ_0 > 0) is defined by (1.6). The two propositions below, 3.1 and 3.2, give a characterization of Y; they provide the details which we use to construct the optimal tensor C, see Theorem 3.1.

Proof. Due to the relaxed definition of J(C), the value of Y takes the form of (1.6), i.e. an inf-sup. In order to apply the minimax argument presented in Proposition 2.1, let us introduce the Lagrangian L(C, u). For all α ∈ R and u_0 ∈ D(R^N), the set {C ∈ M_{Λ_0} : L(C, u_0) ≥ α} is closed and convex. For all C_0 ∈ M_{Λ_0}, the function L(C_0, ·) is convex on D(R^N). Moreover, D(R^N) is a convex set, while M_{Λ_0} is convex and compact in the topology of weak convergence. Application of Proposition 2.1 yields the exchange of inf and sup. Let us concentrate on calculating sup_{C ∈ M_{Λ_0}} ⟨C, ∇u ⊗ ∇u⟩ for a fixed u ∈ D(R^N). We shall use the notation introduced in (1.3). For a fixed u and any C ∈ M_{Λ_0}, the elementary estimate holds with equality if and only if A∇u = λ∇u for a non-negative number λ. Recalling that |A| = 1, we see that λ ∈ [0, 1]. Since Λ_0 ≥ tr C(Ω), maximization requires λ to be maximal, equal to 1, with the other eigenvalues of A equal to zero. This means that the matrix A has rank one, aligned with ∇u. As a result we have shown that C = Aµ for a certain measure µ which remains unspecified yet. Taking into account the constraint (3.2), we observe that µ is a finite measure. Observe also that we always have ⟨C, ∇u ⊗ ∇u⟩ ≤ Λ_0 ‖∇u‖²_{L^∞}, with equality attained only if |∇u(x)| = max_{x∈Ω} |∇u(x)| ≡ ‖∇u‖_{L^∞} for µ-a.e. x ∈ Ω. Therefore, for a fixed u ∈ D(R^N), the supremum sup_{C ∈ M_{Λ_0}} ⟨C, ∇u ⊗ ∇u⟩ is achieved exactly when C is supported in the set {x : |∇u(x)| = ‖∇u‖_∞} and has values proportional to ∇u ⊗ ∇u.
Thus, substituting the function tu with t ∈ R in place of u in (3.3), we maximize a second-degree polynomial with respect to t, which leads to the conclusion (3.4). As a consequence we see that it is sufficient to calculate the supremum in (3.4) over the space Lip_1(Ω). Moreover, the supremum in (3.3) is attained. Indeed, suppose that u_n ∈ D(R^N) is a maximizing sequence and x_0 ∈ Ω is fixed. Then the sequence u_n − u_n(x_0) is also maximizing and bounded in C(Ω). Hence, ‖∇u_n‖ ≤ 1 and the Arzelà-Ascoli theorem imply that there is a uniformly convergent subsequence u_{n_k} − u_{n_k}(x_0) ⇒ û. Moreover, û ∈ Lip_1(Ω). According to Proposition 2.3 the action of Q passes to the limit along this subsequence, so the claim follows. Also, Proposition 2.3 yields the existence of p̂ ∈ M_T(R^N, R^N) minimizing (3.6). A description of the optimal tensor C is given by the following result; in particular, σ = ∇_µ û µ-a.e.

Proof. Let p̂ be the optimal element provided by Proposition 3.2. By Propositions 3.1 and 3.2, for any 1-Lipschitz function u we have the chain of inequalities linking ⟨Q, u⟩ and |p̂|(Ω). Next, let û be a 1-Lipschitz function satisfying ⟨∇û, p̂⟩ = ‖Q‖_1. Optimality then implies that ∇_µ û = σ (or equivalently, ∇_µ û · σ = 1) µ-a.e., and part 1 follows. Now we proceed to prove part 2. First note that tr Ĉ = (Λ_0/‖Q‖_1)|p̂| and hence tr Ĉ(Ω) = Λ_0, as required. Moreover, the functional E(Ĉ, u) takes the stated form. Since for all real x we have 2x ≤ x² + 1, this already shows that sup_u E(Ĉ, u) ≤ Y. Now, taking t = ‖Q‖_1/Λ_0, we see that u := tû yields equality in the previous estimates, which means that Ĉ is optimal.

Remark 3.1. It is interesting to check whether the optimal tensor measure Ĉ and the measure µ satisfy the assumptions of the theory developed in [20]. A few comments are in order. We solved problem (1.2) here for an optimal Ĉ without knowing that µ is a multi-junction measure, which was the assumption underlying the analysis of [20]. At the same time we established a regularity result, i.e. ũ ∈ Lip_1(Ω) ⊂ H¹_µ, which has no analogue in [20].

Remark 3.2. It is also interesting to see the optimal heat flux for the optimal Ĉ. Namely, we may now calculate p̃ = Ĉ∇_µ ũ and we can see that p̃ = p̂.

Examples
Here we present a series of examples which illustrate the main Theorem 3.1. We follow a uniform style of exposition, starting from the definition of Q and setting Λ_0. We denote by e_i, i = 1, 2, the unit vectors of the coordinate axes. Then we
(a) check that Q is an element of M_{0,1}(Ω);
(b) find û yielding ⟨Q, û⟩ = ‖Q‖_1, see (3.1);
(c) find p̂ ∈ Σ_Q which minimizes |p|(Ω) among the elements of Σ_Q, see (3.5);
(d) write out Ĉ, defined in (3.7), which minimizes J(C).
We would like to emphasize that in the characterization of M_{0,1}(Ω) provided by Proposition 2.3, convexity of Ω is not required. Indeed, the authors of [8] remark that the geodesic distance may be used; we will see the consequences in the example below, in which one of the two source points is ½(e_2 + e_1).

Proof. Since ⟨Q, u⟩ = u(A) − u(B), we deduce that Q ∈ M_{0,1}(Ω). Now, Ω is not convex and the geodesic distance is defined as the infimum of the lengths of curves joining A and B. Suppose that γ is a Lipschitz path connecting A and B and u is any element of Lip_1(Ω), with B = γ(1), A = γ(0). Then, because Ω is star-like with respect to 0, we can see that dist(A, 0) = √2/2 = dist(B, 0). One can easily check that the function defined in part (1) above turns inequality (4.1) into an equality. Hence, ‖Q‖_1 = √2. We will use Theorem 3.1, part 1, to find the optimal p̂, which must satisfy the conditions stated there.

Proof. We first notice that Q ∈ M_{0,1}(Ω).
However, we proceed in a different way than in the proof of Theorem 3.1: we first construct the optimal p̂ and then look for û. We recall that p ∈ Σ_Q if and only if Q + div p = 0. If p = σµ, where µ is a positive Radon measure and σ ∈ L^∞_µ(Ω; R^N), then the distributional definition of div p is given by duality against test functions ϕ ∈ D(Ω); in case p ∈ M_T(Ω), this definition may be extended to ϕ ∈ Lip(Ω). This means that in our case σµ ∈ Σ_Q if and only if the corresponding identity holds; we have just stated that div(σµ) = −Q. We may read the above identity in a different way by using the theory of traces of measures, see [9] and, in the context of the present paper, [20]. Namely, we may write it in terms of p · ν, the trace of the normal component of the measure p on ∂Ω. If we pick a candidate for an optimal solution, we should check that it indeed belongs to M_T(Ω; R^N). It is well known, see [17], that the minimization problem (4.5) is equivalent to (4.6), where D denotes the distributional derivative of u. Here, h = ∂f/∂τ, and τ, a tangent vector, and ν, the outer normal, are such that (ν, τ) is positively oriented. In the present case f is given by (4.7). Since f in (4.6) is continuous, we deduce from [22] that there is a unique solution to (4.6). Moreover, if v is a solution to (4.6), then writing p̂ = ∇v^⊥, where ⊥ denotes the rotation by −π/2, we obtain a solution to (4.5), see [17, Theorem 2.1]. The solution to (4.6) (with f given by (4.7)) is well known: it is the Brothers' example, given by an explicit formula, see [18]. Thus, after computing ∇v^⊥, we want to write p̂ in the form p̂ = σµ, where |σ| = 1 for µ-a.e. x ∈ Ω. We find σ(x_1, x_2) according to (4.3) and µ = ρL²⌞Ω, where ρ is given by (4.4). Now we have to find a maximizer û of ⟨Q, u⟩, keeping in mind that û is such that |p̂|(Ω) = ⟨p̂, ∇û⟩. We notice that (4.3) yields the value of the scalar product of the vectors σ and ∇_µ û. We see that p̂ is an element of M_T(Ω) and, due to the absolute continuity of |p̂|, we obtain the corresponding identity; equality in it is possible if and only if the partial derivative of û agrees with σ on the relevant set. We determine the values of û in the rest of Ω in a similar way; the only restriction is that the function û is in Lip_1(Ω). In particular, we may set û as in (4.2). Then one can check that ⟨Q, û⟩ = |p̂|(Ω), which confirms that the duality gap between problems (3.1) and (3.6) vanishes. Finally, we have to define the optimal Ĉ; due to formula (3.7), we obtain it explicitly.

In the next example we have the source Q supported on a set of finite one-dimensional Hausdorff measure. Then û is given by an explicit formula.

Proof. If u ∈ Lip_1(Ω), then ⟨Q, u⟩ admits an elementary bound, and we notice that equality in this bound is achieved for a suitable û(x_1, x_2). We wish to determine p̂; we will use Theorem 3.1, part 1, for this purpose, i.e. our choice of p̂ should be such that |p̂|(Ω) = ⟨p̂, ∇û⟩. We also want Q to be represented as Q = −div p̂. On the set C the vector field σ must be equal to ∇û. We pick a simple choice µ = kL²⌞C. If we take into account the form of ν, the normal vector to ∂C, then by the Gauss formula we obtain the required identity. Since |p̂|(Ω) = ‖Q‖_1, we deduce that k = √2. Thus, p̂ = √2 sgn(x_1) e_2 L²⌞C, as desired. Finally, we find Ĉ = (Λ_0/2) e_2 ⊗ e_2 L²⌞C.

Remark 4.1. During the presentation of the above example, we noticed that we could choose C_1 instead of C. In this way we would obtain a different solution to the minimization problem. As a result, we see that there is no unique solution to (1.6).

Proof. In the present case, for u ∈ Lip_1(Ω), we have ⟨Q, u⟩ = (1/H¹(S)) ∫_S (u(r, θ) − u(0)) dH¹ ≤ (1/(R(θ_1 − θ_0))) ∫_S √(x_1² + x_2²) dH¹ = R. Since div p̂ = (1/r) ∂(rλ(r))/∂r, we deduce that λ = k/r, where k ∈ R.
Then, (4.9) takes the stated form. Thus, k = ‖Q‖_1/H¹(S) = (θ_1 − θ_0)⁻¹ and p̂ has the form we claimed. The choice of Ĉ follows from (3.7) and the elements we have already computed, where g = e_r · ν and ν is the outer normal to K at S. The set K is defined as above and a = ∫_S g dH¹.
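To illustrate definition (3.1) and its dual (3.6) numerically, the following sketch computes a discrete analogue of the Kantorovich norm ‖Q‖_1 for a balanced source/sink distribution on a small point cloud, by maximizing ⟨Q, u⟩ over 1-Lipschitz functions via linear programming. It is an independent illustration of the duality discussed above, not code from the paper; the point locations, the weights and the use of the Euclidean (rather than geodesic) distance are assumptions.

# Discrete illustration of ||Q||_1 = sup{ <Q,u> : u 1-Lipschitz } via linear programming.
import itertools
import numpy as np
from scipy.optimize import linprog

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
q = np.array([1.0, -0.5, -0.5, 0.0])          # balanced distribution: weights sum to zero

n = len(points)
# maximize q.u  <=>  minimize -q.u  subject to  |u_i - u_j| <= |x_i - x_j|
A_ub, b_ub = [], []
for i, j in itertools.combinations(range(n), 2):
    d = np.linalg.norm(points[i] - points[j])
    row = np.zeros(n); row[i], row[j] = 1.0, -1.0
    A_ub.append(row.copy()); b_ub.append(d)    # u_i - u_j <= d
    A_ub.append(-row);       b_ub.append(d)    # u_j - u_i <= d
# fix u at the first point to remove the additive-constant degeneracy
bounds = [(0.0, 0.0)] + [(None, None)] * (n - 1)
res = linprog(c=-q, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
print("discrete ||Q||_1 =", -res.fun)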
5,794.6
2021-09-29T00:00:00.000
[ "Mathematics" ]
Design and Simulation Analysis of a Multi-Agent Online Dissemination Model on the Basis of "The Spiral of Silence" Theory
This article studies the classical theory of communications, the Spiral of Silence. The key factors are extracted for multi-agent modeling and simulation with NetLogo. It turns out that the opinion climate is the external formation of the Spiral of Silence. Besides, the size of the opinion climate has no direct effect on the Spiral of Silence.
Keywords: the fear of isolation; climate of opinion; quasi-statistical sense

Introduction
The innovation of "The Spiral of Silence" theory has attracted many researchers under today's network communication environment. Some scholars believe that the Spiral of Silence in cyberspace is weakening and plays a new role in network media. GAO Xianchun et al. think that: "Mass media and specific individual communities make a difference in the new media environment constructed by the Internet, mobile phones and other channels, which together construct specific events. And they both trigger the double helix of silence. In other words, mass media and individual communities form spirals respectively; meanwhile, both games lead to an opinion cassette which is the factor affecting individual perception. As a result, the double helix of silence influences individual expressions of opinion." (GAO Xianchun et al., 2014). Yang Zhibiao supports the theory despite its assumptions and limitations; in his opinion, it is still applicable in the network environment, and "the Spiral of Silence" coexists with "Against the Spiral of Silence". However, other scholars believe that the Spiral of Silence is still spinning in the new era. To explore views on "Gay Bullying", Sherice Gearhart et al. surveyed 760 participants. It was found that the Spiral of Silence still exists in social networks and, besides, that personal ideology also has an impact on the willingness of expression (Sherice Gearhart et al., pp. 18-36, 2014). Li Chuansheng points out that the Spiral of Silence still works because of the fear of isolation and group pressure on the Internet. Moreover, although the Internet has the character of anonymity, net users' behavior is not unrestricted and complete freedom does not exist, due to group norms and social norms in the network community; that is the reason why the Spiral of Silence is still in the spin (Li Chuansheng, pp. 4-12, 2014). Na Yeon Lee et al. carried out a series of studies on controversial topics among journalists who work for nine national newspapers and two network broadcasting corporations through empirical research. It turns out that the Spiral of Silence still exists, especially on the social networking site Twitter (YL Na, pp. 443-458, 2015). Cheng Yao also points out that, because of the unchanged habits of netizens, squelching of speech on the Internet, and the attention which the country has begun paying to supervising public opinion online, the Spiral of Silence will coexist with public opinion on the network for a long time in a more subtle way. Moreover, according to a report released by the Pew Research Center and Rutgers University, social media such as Twitter and Facebook constrain the diversification of views, hinder public affairs debates and limit people from stating their views, especially when they find that their opinions are different from their friends'. The report also said that very few people who use social media regularly express different points of view offline.
In order to study the mechanisms through which the Spiral of Silence operates, scholars have adopted empirical research through questionnaires to explore the impact of "the climate of opinion", one factor of the Spiral of Silence. In Chang Ning's research, it is found that post-90s college students show the highest degree of willingness to express themselves in real life. Unexpectedly, on the anonymous network, post-90s college students tend to be more silent. The study also shows that the anonymity of the network is not the main factor promoting expression.

Simulation design
(1) Properties of agents
Most Internet users will try to avoid the isolation caused by holding separate attitudes and views. When agents find their own views are different from those of most Internet users or those of the mainstream media, they will not tend to express their opinions. Instead, they may choose to change their views to correspond to the mainstream views online.
(2) Settings of the initial state
The slider "initial-chance" is set to represent the distribution of the initial state: the proportion of positive and negative opinions varies when different ratios are set. Besides, "initial-people" stands for the number of people who know about the event and express their views, and "radius" represents the size of the climate of opinion within which an agent can be affected. According to the Spiral of Silence, the fear of isolation is an inherent trait of people, thus the initial values of "r-afraid-index" and "g-afraid-index" are both set to 50.
(3) Behavior of agents
a. Perceiving the climate of opinion: each agent counts the turtles of each color within its "radius" using NetLogo's in-radius reporter, yielding the counts red-turtles and green-turtles.
b. Changing attitudes: when the value of the fear of isolation reaches a certain critical value, Internet users may change their views. Moreover, agents whose views correspond to the majority will be willing to express their opinions and try to influence other agents; that is why there is a "hatch" step. The code is as follows:

to change-color
  if red-turtles > green-turtles [
    if color = green and random r-afraid-index < 35 [
      set color red fd random 10
      hatch 1 [ set color red fd random 10 set r-afraid-index 50 ]
    ]
  ]
end

c. The ultimate source, the "fear of isolation": the fear index changes accordingly when the individual is affected by the surroundings, and different agents have different degrees of change in the fear index. This part of the code is as follows:

to change-afraid-index
  if red-turtles > green-turtles [
    set r-afraid-index (r-afraid-index - random 10)
    set g-afraid-index (g-afraid-index + random 10)
  ]
  if red-turtles < green-turtles [
    set g-afraid-index (g-afraid-index - random 10)
    set r-afraid-index (r-afraid-index + random 10)
  ]
end

Analysis of Experimental Results
The initial value of "initial-people" is set to 880, which represents the number of people who know about the event at the beginning. The initial value of "radius" is set to 1. The percentage of the two sides is decided randomly, and different percentages have direct impacts on future climate trends. In this paper, the values of initial-chance are 0.1, 0.3, 0.5, 0.7 and 0.9 respectively. The parameters are shown in Table II (columns: No., initial-people, initial-chance, radius, r-afraid-index, g-afraid-index).
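For readers without NetLogo, the dynamics described above can be approximated in a few lines of Python. This is a simplified re-implementation sketch, not the authors' model: it uses a single global opinion climate instead of a spatial "radius", and the conversion rule and its scaling constant are assumptions that only loosely mirror the NetLogo snippets.

# Simplified spiral-of-silence sketch (global climate; fear index in 0..100; parameters assumed).
import random

def run(initial_people=880, initial_chance=0.5, steps=50):
    # True = "red" opinion, False = "green" opinion; each agent carries a fear-of-isolation index.
    agents = [{"red": random.random() < initial_chance, "fear": 50}
              for _ in range(initial_people)]
    for _ in range(steps):
        reds = sum(a["red"] for a in agents)
        majority_red = reds > len(agents) - reds
        for a in agents:
            in_minority = a["red"] != majority_red
            # fear of isolation grows in the minority, shrinks in the majority
            a["fear"] += random.randint(0, 10) if in_minority else -random.randint(0, 10)
            a["fear"] = max(0, min(100, a["fear"]))
            # a sufficiently fearful minority agent falls silent and adopts the majority view
            # (the /200 scaling is an arbitrary assumption, not taken from the paper)
            if in_minority and random.random() < a["fear"] / 200.0:
                a["red"] = majority_red
                a["fear"] = 50
    return sum(a["red"] for a in agents) / len(agents)

print("final share holding the majority-colored opinion:", run())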
1,545.6
2016-10-01T00:00:00.000
[ "Computer Science" ]
Protein moonlighting elucidates the essential human pathway catalyzing lipoic acid assembly on its cognate enzymes

Significance
Lipoic acid is an enzyme cofactor found throughout the biological world that is required for key steps in central metabolism. In humans, defective lipoic acid synthesis results in defective energy production, accumulation of toxic levels of certain amino acids, and early death. The different pathways for lipoic acid synthesis put forth have not been validated by direct analysis of the postulated enzyme reactions, excepting a protein called LIPT1. Unfortunately, the enzyme activity reported for LIPT1 is misleading and seems to be an evolutionary remnant. We report that LIPT1 has a second "moonlighting" enzyme activity that fully explains the physiology of individuals lacking LIPT1 activity. We also document the postulated activity of LIPT2, another essential enzyme of the pathway.

The lack of attachment of lipoic acid to its cognate enzyme proteins results in devastating human metabolic disorders. These mitochondrial disorders are evident soon after birth and generally result in early death. The mutations causing specific defects in lipoyl assembly map in three genes, LIAS, LIPT1, and LIPT2. Although physiological roles have been proposed for the encoded proteins, only the LIPT1 protein had been studied at the enzyme level. LIPT1 was reported to catalyze only the second partial reaction of the classical lipoate ligase mechanism. We report that the physiologically relevant LIPT1 enzyme activity is transfer of lipoyl moieties from the H protein of the glycine cleavage system to the E2 subunits of the 2-oxoacid dehydrogenases required for respiration (e.g., pyruvate dehydrogenase) and amino acid degradation. We also report that LIPT2 encodes an octanoyl transferase that initiates lipoyl group assembly. The human pathway is now biochemically defined.

lipoic acid | mitochondrial disorder | inborn errors | glycine cleavage system | 2-oxoacid dehydrogenases

Although lipoic acid was discovered over 60 y ago as a covalently bound enzyme cofactor required for aerobic metabolism (1-3), it is only in recent years that the mechanisms of its biosynthesis have become understood (4-6). The importance of protein lipoylation is illustrated by disorders of this mitochondrial pathway, which result in grave metabolic defects and early death.
Lipoic acid biosynthesis is best described as an assembly process because lipoyl moieties are constructed on the subunits of the cognate enzymes via a markedly atypical pathway (7) (Fig. 1). Lipoic acid is an eight-carbon fatty acid in which sulfur atoms replace the hydrogen atoms of carbons 6 and 8 of the acyl chain (oxidation of the resulting dithiol to the disulfide gives lipoic acid). Genetic and biochemical studies in Escherichia coli showed that an octanoate moiety diverted from fatty acid synthesis by the LipB octanoyl transferase becomes attached to the ε-amino group of a specific lysine residue of the cognate enzyme proteins (4). The octanoylated proteins then become substrates for sulfur insertion by the S-adenosyl-L-methionine radical enzyme, LipA (Fig. 1). The lipoyl-modified proteins are the GcvH protein of glycine cleavage (8) and the small, universally conserved protein domains located at the amino termini of the E2 subunits of the 2-oxoacid dehydrogenases required for aerobic metabolism and other reactions (4). The LipB-LipA pathway (Fig. 1A) is the simplest, but not the only, lipoyl assembly pathway (4). Another bacterium, Bacillus subtilis, requires four proteins for lipoyl assembly rather than the two that accomplish the task in E. coli (4, 9, 10) (Fig. 1B). In contrast to E. coli, where the lipoyl assembly pathway directly modifies each of the cognate proteins, B. subtilis assembles lipoyl moieties only on the H protein of the glycine cleavage system (4, 9, 10). The other lipoate-dependent enzymes obtain lipoyl moieties only upon transfer from the H protein. Essentially the same pathway has recently been documented in Staphylococcus aureus (11). Thus, the small H protein (127 residues) has two functions in central metabolism: the glycine cleavage pathway of single-carbon metabolism and lipoylation of the 2-oxoacid dehydrogenases required for aerobic metabolism and branched-chain fatty acid synthesis (12). Indeed, B. subtilis strains that lack the H protein are unable to grow without lipoate (or supplements that bypass the function of the key 2-oxoacid dehydrogenases) and cannot cleave glycine to serve as the sole nitrogen source (9, 12). The yeast Saccharomyces cerevisiae is thought to have a similar pathway (13), although little in vitro enzymology has been done due to the intractable nature of the yeast proteins. In mammals, all proteins involved in lipoyl assembly are located in the mitochondria. In humans, the first indicator of defective lipoyl assembly is generally the presence of abnormally elevated levels of lactate (derived by reduction of pyruvate accumulated due to pyruvate dehydrogenase deficiency) in urine and plasma. Subsequent measurements of glycine levels in body fluids allow these individuals to be divided into two groups (14, 15). Normal glycine levels indicate that the glycine cleavage system is functional, and thus the glycine cleavage H protein (GCSH) is lipoylated, whereas abnormally high glycine levels indicate a lack of GCSH lipoylation. Elevated brain glycine levels result in a host of neurological disorders, including neurodegeneration, encephalopathy, and neonatal-onset epilepsy (14, 15), whereas the lack of 2-oxoacid dehydrogenase lipoylation short-circuits function of the tricarboxylic acid cycle.
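As a compact recap of the assembly routes sketched in Fig. 1 and described in the text, the following data structure lists, for each route, the enzymes and the protein on which the lipoyl group is first built. It is a reading aid, not code or nomenclature from the paper.

# Reading aid: lipoyl-assembly routes as described in the text (Fig. 1 A-C).
LIPOYL_ASSEMBLY = {
    "E. coli (Fig. 1A)": {
        "octanoyl transfer": "LipB (octanoyl-ACP -> GcvH and E2 domains directly)",
        "sulfur insertion":  "LipA",
        "amidotransfer":     None,  # not needed; cognate proteins are modified directly
    },
    "B. subtilis / S. aureus (Fig. 1B)": {
        "octanoyl transfer": "LipM (octanoyl-ACP -> GcvH only)",
        "sulfur insertion":  "LipA",
        "amidotransfer":     "LipL (lipoyl-GcvH -> E2 domains)",
    },
    "Human mitochondria (Fig. 1C)": {
        "octanoyl transfer": "LIPT2 (octanoyl-ACP -> GCSH)",
        "sulfur insertion":  "LIAS",
        "amidotransfer":     "LIPT1 (lipoyl-GCSH -> E2 domains)",
    },
}

for route, steps in LIPOYL_ASSEMBLY.items():
    print(route, steps)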
Human individuals having severely decreased levels of all lipoylated proteins have mutations in either LIAS, which encodes a lipoyl synthase known to functionally replace E. coli LipA (16), or LIPT2, which is proposed to encode an octanoyl transferase (14, 15). The patients who selectively retain GCSH lipoylation have mutations in a third gene, LIPT1 (14, 15). For decades, the reported LIPT1 enzymatic function has muddled interpretation of the human disorders. The reported activity of the LIPT1 protein is transfer of a lipoyl moiety from lipoyl-adenylate to both GCSH and the 2-oxoacid dehydrogenase E2 subunits (17, 18) (Fig. 1D). Modification of both acceptor proteins directly conflicts with the LIPT1 biochemical phenotype, because individuals lacking LIPT1 activity should then lack GCSH lipoylation, whereas GCSH lipoylation (hence glycine cleavage) is normal in LIPT1 patients. A second argument against the physiological relevance of the reported LIPT1 "half-ligase" activity is that there seems to be no valid source of the lipoyl-adenylate required for the reaction. ACSM1, an extraordinarily promiscuous acyl-CoA synthetase (19), was reported to synthesize lipoyl-adenylate (20). However, LIPT1 utilizes both isomers of lipoate, whereas lipoylated proteins contain only the R isomer (20), which strongly argues against a role for ACSM1 in lipoate attachment. Despite these shortcomings, LIPT1 and ACSM1 have been ascribed roles in disorders of human lipoate metabolism, generally in uptake and attachment of dietary lipoic acid (21, 22). However, dietary lipoic acid supplementation has no effect on the survival of mammals or tissue culture cells defective in lipoyl assembly (see below). The close analogy of the human lipoate metabolism defects to those of B. subtilis mutant strains defective in lipoylation led to the hypothesis that the relevant LIPT1 activity is transfer of lipoyl moieties from lipoyl-GCSH to the 2-oxoacid dehydrogenase subunits (4). In this scenario, LIPT1 would have a lipoyl amidotransferase activity that parallels that demonstrated for B. subtilis LipL (9). It would follow that the lipoyl-adenylate activity is an evolutionary remnant, as often seen in moonlighting proteins (23-25). It should be noted that recent models of the human disorders include a step in which LIPT1 somehow transfers lipoyl groups from GCSH to the 2-oxoacid dehydrogenase E2 subunits without invoking a mechanism for this transfer (15, 26, 27). We report biochemical evidence, obtained by reconstruction of the human assembly pathway in E. coli plus direct biochemical assays with purified enzymes and acceptor proteins, demonstrating LIPT1 to be a lipoyl amidotransferase that catalyzes transfer of lipoyl moieties from GCSH to a 2-oxoacid dehydrogenase domain. We also demonstrate that purified LIPT2 is an octanoyl transferase and that LIPT2 functionally replaces the E. coli octanoyl transferase.
These data define the biochemical mechanism of the human lipoyl assembly pathway.

Results
LIPT1 Has Lipoyl Amidotransferase Activity. In our first experiments to test LIPT1 amidotransferase activity (Fig. 2), we constructed three compatible plasmids, each of which expresses one of the relevant human proteins: GCSH, LIPT1, or a hexahistidine-tagged lipoyl domain (LD) derived from the E2 subunit of pyruvate dehydrogenase. Expression of each codon-optimized protein was placed under a tightly controlled promoter. The plasmids were transformed into an E. coli ΔlipB strain to attempt construction of the human lipoyl transfer pathway in this bacterium (Fig. 2). We first induced GCSH expression from an isopropyl-β-D-thiogalactopyranoside (IPTG)-inducible promoter in the presence of lipoic acid to allow accumulation of lipoyl-GCSH (lipoate attachment was via the host LplA lipoate ligase). These cells were then washed free of lipoate and IPTG and suspended in fresh medium lacking lipoate and IPTG (supplementation with acetate and succinate allowed growth to proceed). Expression of LIPT1 and the PDH inner E2 domain was then induced with arabinose to provide the enzyme and its putative substrate for lipoyl transfer from GCSH to the E2 domain.

(Fig. 1 legend) (A) The simplest assembly pathway is that of E. coli, where only two enzymes are required (4). LipB transfers an octanoyl moiety from octanoyl-ACP to each of the cognate protein substrates. The LipA radical S-adenosyl-L-methionine enzyme then inserts two sulfur atoms to produce dihydrolipoyl moieties. (B) A more complex pathway is found in Firmicute bacteria such as Bacillus subtilis (10, 12, 68) and Staphylococcus aureus (11). In this pathway, lipoyl moieties are assembled on the GcvH protein of the glycine cleavage system and then transferred to the lipoyl domains (LDs) of the 2-oxoacid dehydrogenases. This pathway requires a lipoyl amidotransferase called LipL and a distinct octanoyl transferase called LipM. LipA catalyzes sulfur insertion as in A (4).

(Fig. 2 legend) Strain XC.127 transformed with the plasmid encoding the His6-LD acceptor protein was additionally transformed with the GCSH plasmid plus the LIPT1 plasmid, the GCSH plasmid alone, or the LIPT1 plasmid alone. The resulting cultures were induced with IPTG in the presence of lipoate to allow the host LplA ligase to synthesize lipoyl-GCSH (if present). The cells of each culture were then collected by centrifugation and washed to remove lipoate and IPTG. After resuspension in glycerol minimal medium containing acetate and succinate, arabinose was added to the three cultures to induce expression of LIPT1 and His6-LD. The cultures were incubated to allow further growth and accumulation of the His6-LD. The cells were then collected and lysed, and the His6-LD of each culture was purified by Ni²⁺-chelate chromatography. The purified samples were then submitted for mass-spectrometric analysis, and the proteins expressed in each sample are summarized in B, where + or − denotes expression. The electrospray mass-spectrometric scans for each culture are given in C-E. (C) Mass-spectrometric analysis of the His6-LD acceptor accumulated in the absence of the LIPT1 plasmid. The LD remained in the apo form (13,796.2 Da). Note that the apo-LD mass was 18 Da less than the calculated mass.

The E2 domain was then purified by Ni-chelate chromatography and analyzed by mass spectrometry. The apo-E2 domain (m/z of 13,814.3) was converted to the lipoyl-E2 domain (m/z of 14,001.5) (Fig. 2E).
The delta mass between the two forms was 187, whereas a lipoyl moiety is 188. The 14,001.5 species was not present when either the LIPT1- or GCSH-encoding plasmid was omitted (Fig. 2 C and D), indicating that the washing steps effectively removed lipoate and thereby rendered the host LplA lipoate ligase inactive.

LIPT1 Has Lipoyl Amidotransferase Activity but Lacks Octanoyl Amidotransferase Activity. Given these encouraging results, we expressed each of the genes in E. coli and purified the proteins (Fig. 3A). The substrates needed to assay lipoyl amidotransferase activity were also prepared in pure form. These consisted of an acceptor domain, the inner LD of the human pyruvate dehydrogenase E2 subunit, and a lipoyl (or octanoyl) donor protein, the modified human GCSH protein. Lipoyl-GCSH was synthesized from the apo protein using the B. subtilis LplJ lipoate ligase (Fig. 1D), ATP, and lipoic acid, whereas octanoyl-GCSH labeled with 14C in the octanoyl moiety was prepared with [1-14C]octanoic acid using the same enzymatic reaction. Lipoyl-GCSH purified free of ATP and LplJ was incubated with LIPT1 plus the LD acceptor protein. Three different assays were used (Fig. 3). Fig. 3C shows a mobility-shift assay using gel electrophoresis in the presence of urea. In this assay, modification of the LD results in an increased rate of migration of the protein due to loss of the positive charge of the modified lysine (both LDs and glycine cleavage H subunits are unusually acidic proteins). In reactions that contained all of the reaction components, the LD was largely converted to a faster-migrating species, whereas no such shift was seen when a reaction component was omitted. A second assay evaluated transfer of lipoyl moieties from purified lipoyl-GCSH to the human LD. These reaction products were separated by SDS/PAGE followed by Western blotting with an anti-lipoate antibody (Fig. 3B). A species having the same mobility as the lipoyl-LD standard was formed in the complete reaction mixture but not when a component was omitted. Finally, a portion of the complete reaction mixture was analyzed by electrospray mass spectrometry. The two peaks observed were the remaining unmodified apo-LD substrate and lipoyl-LD. The difference in the mass values of the two peaks was 188.5, whereas a lipoyl moiety has a mass of 188. Note that traces of LD modification were seen in the absence of lipoyl-GCSH (Fig. 3 B and C). Since prior workers demonstrated that LIPT1 purified from E. coli contains bound lipoyl-adenylate (28) and can transfer the lipoyl moiety of lipoyl-adenylate to LD domains (17), this low level of LD modification is attributed to bound lipoyl-adenylate that accompanied LIPT1 through purification of the protein. We also tested the ability of LIPT1 to transfer [1-14C]octanoyl moieties from octanoyl-GCSH to the human LD (Fig. 4) and found no detectable transfer. Note that the LD preparation was the same as that used in the lipoyl transfer experiments and was active with LplJ.

The Putative LIPT2 Octanoyl Transferase Catalyzes Transfer from Octanoyl-ACP to GCSH. Human genetics investigators have postulated that LIPT2 is an octanoyl transferase (5, 15, 27) based on sequence alignments with the octanoyl transferases of E. coli (29, 30) and Mycobacterium tuberculosis (31) (Fig. 5A). However, sequence alignments within the Pfam PF03099 protein family are not a trustworthy predictor of function and must be viewed with considerable caution (Fig. 5B). For example, the B.
subtilis genome has been annotated as encoding three lipoate ligases (www.microbesonline.org). However, only one protein had ligase activity; the other two proteins catalyzed octanoyl transfer and lipoyl amidotransfer (9, 10). These considerations indicated that validation of the postulated LIPT2 activity by direct assay was required. The lack of such evidence became a more serious shortcoming when the first LIPT2 mutations were detected in human patients (27).

(Fig. 3 legend, continued) The calculated difference in mass (delta mass) between the apo and lipoyl forms was 188, whereas the observed delta mass was 188.5. Note that, in A and B, trace levels of LD lipoylation were seen in the absence of GCSH, which is attributed to LIPT1-bound lipoyl-AMP that survives purification and crystallization (28). The traces of lipoylation that appear without LIPT1 in B seem likely to be due to the lipoate assembly pathway of the wild-type E. coli strain used for protein production.

We began by asking whether LIPT2 could functionally replace the E. coli LipB octanoyl transferase, an enzyme essential for lipoyl assembly (29, 32). A synthetic LIPT2 gene with codons optimized for E. coli expression was inserted into the medium-copy vector pBAD322A to be transcribed from the arabinose-inducible araBAD promoter and translated using the vector ribosome binding site. The mouse LIPT2 sequence was used because an unambiguous human sequence was not available when this work was initiated. The gene encoding the primary translation product was used because the E. coli and Arabidopsis LipBs are inactivated by small N-terminal truncations (29, 33). The LIPT2 plasmid was transformed into an E. coli strain carrying deletions of the genes encoding both LipB and the LplA lipoate ligase. The latter mutation was included to avoid bypass of LipB by LplA mutations (34). The ΔlipB ΔlplA strain was grown on glycerol minimal medium containing acetate and succinate (which bypass the lack of lipoyl proteins) and then streaked on glycerol minimal medium plates containing various supplements. Growth proceeded only when the medium was supplemented with arabinose, the inducer of the araBAD promoter (Fig. 6). Slow growth was observed, which we attribute to the requirement that LIPT2 function with three bacterial protein substrates: ACP and the two 2-oxoacid dehydrogenase E2 subunits. These complementation results argued strongly that LIPT2 was indeed an octanoyl transferase and that E. coli octanoyl-ACP and E. coli E2 LDs should function in vitro as the octanoyl donor and acceptor, respectively. We then expressed a hexahistidine-tagged version of LIPT2 in E. coli and purified the enzyme to homogeneity (Fig. 3). LIPT2 readily transferred the octanoyl moiety from E. coli [14C]octanoyl-ACP to human GCSH but was unable to transfer an octanoyl moiety to the human LD (Fig. 7A). Note that the human LD domain was an excellent substrate for the B. subtilis LplJ ligase and served as an acceptor in lipoyl amidotransfer assays, indicating that it had a native structure. As expected from the observed functional replacement of E. coli LipB, LIPT2 modified an E. coli LD and also the GcvH proteins of E. coli and two other bacteria (Fig. 7B). In the LipB reaction, an octanoyl moiety is transferred from ACP to the LipB active-site cysteine thiol (30). This acyl-enzyme intermediate is then attacked by the ε-amino group of the target lysine residue to give the octanoylated acceptor protein (30, 35, 36). Based on the alignments of Fig.
5, Cys185 of LIPT2 was expected to be the site of acyl-enzyme formation. To test this hypothesis, Cys185 was replaced with either alanine or serine. As expected from prior results with LipB (30), both mutant proteins lacked octanoyl transferase activity (Fig. 7C). Finally, the results obtained using the radioactive assay were validated by mass-spectroscopic analysis.

(Fig. 5 legend) (A) Alignment of LIPT2 with the enzymatically characterized LipB and LipM proteins. Unweighted sequence alignments were performed using T-Coffee (69) at the European Bioinformatics Institute website (https://www.ebi.ac.uk) using the default settings and displayed using Jalview. The sequence name indicates the enzyme type, the UniProt code indicates the organism of origin, and the numbers indicate the amino acid residues displayed. Positions having 50% or greater identity are highlighted in blue. The catalytic cysteine residues of LIPT2, LipB, and LipM are boxed and highlighted in black, as is the conserved lysine residue. The leucine-to-arginine mutations found in the human LIPT2 patients (27) are given in red. The edges of the alignment were trimmed using Jalview (70), so only the catalytic domain is shown. (B) The phylogeny of the LipB_LplA_LipM family (PF03099) was determined with sequences retrieved from the Pfam database (71). Multiple sequence alignment was done using ClustalW (72). The poorly conserved LplA N and C termini were removed. The phylogenetic tree was constructed using the maximum-likelihood method with the PhyML program (73, 74). The PHYLIP interleaved format was used for the alignment. Bootstrap analysis was set to 1,000 replicates.

As first seen with the mouse LIPT2 (Fig. 6), a synthetic gene expressing the full-length human LIPT2 (84% identical to the mouse LIPT2) restored growth of the E. coli ΔlipB ΔlplA strain in the presence of arabinose (Fig. 8). However, the putative protease-processed mature form of human LIPT2 recently reported (26) (a deletion of residues 2-31) failed to complement (Fig. 8). Hence, although the cell death observed (26) was attributed to a lack of mitochondrial targeting, the LIPT2 construct also lacked activity. Indeed, LIPT2 is not processed. Several mass-spectral analyses of the human proteome report detection of LIPT2 peptides corresponding to the N-terminal seven residues plus residues 14-24 and 29-40 of the primary translation product (37-39) (the peptide data are collected at www.proteomicsdb.org), which would be lacking in the putative mature form (26). Note that, consistent with human LIPT2, loss of the 22 N-terminal E. coli LipB residues inactivated the protein (29). This might be explained by the similarities (underlined) seen in the N-terminal sequences of LIPT2 (residues 5-19, AVRLVRLGRVPYAEL) and LipB (residues 5-14, LVRQLG-LPYEPI).

Discussion
Our evidence that LIPT1 has lipoyl amidotransferase activity renders inoperative the models of the human disorders that include the partial ligase activity. This, plus the demonstration that LIPT2 is an octanoyl transferase, defines a straightforward pathway that fully explains each of the phenotypes of the human disorders. Individuals having mutations that result in loss of function of LIAS or LIPT2 are unable to assemble lipoyl moieties, and hence all cognate proteins remain in their unmodified and inactive apo forms. These individuals suffer high levels of body-fluid glycine, lysine, and branched-chain amino acids plus defective energy metabolism (14, 15).
In contrast, although patients carrying LIPT1 mutations have normal lipoyl-GCSH and glycine cleavage levels, they suffer from defective energy metabolism plus high levels of lysine and branched chain amino acids (14,15). Note that decreased 2-oxoacid dehydrogenase activities would additionally result in severely decreased levels of acetyl-CoA and succinyl-CoA, and thus modification of histones and other proteins could be compromised (40). Our data plus prior work on mitochondrial fatty acid synthesis put the pathway (Fig. 1C) on a solid basis. Following demonstration that mammalian mitochondria contain soluble ACP in addition to the ACP molecules that become subunits of respiratory complex I (41), a complete type II fatty acid synthesis pathway in mammalian mitochondria was demonstrated by Smith and coworkers (42)(43)(44), who went on to show that this pathway provides the octanoyl-ACP required for lipoyl moiety synthesis (42)(43)(44). Moreover, these workers showed that blocking mitochondrial fatty acid synthesis in transgenic mice blocked protein lipoylation and resulted in a variety of serious physiological abnormalities (including early death) due to disruption of energy metabolism (45). A similar but less severe decrease in lipoyl protein assembly due to deficient mitochondrial fatty acid synthesis was recently reported in human patients (46). GCSH is the only gene of the pathway to which no human disorder maps, although mutations in the other genes specific to glycine cleavage (AMT and GLDC) are fairly abundant (47). This disparity seems due to the small size of the GCSH coding sequence (173 codons), a comparatively small mutational target relative to the coding sequences of AMT (403 codons) and GLDC (1020 codons). Mutations inactivating GCSH should have the same phenotype as mutations in LIAS or LIPT2 and thus would have more profound effects than in those individuals having AMT or GLDC mutations, who retain normal energy metabolism and protein modification ability (47). Our finding that LIPT1 has two mechanistically discrete enzyme activities fits the concept of enzyme evolution called "moonlighting" that has received strong support in recent years (23)(24)(25)(48)(49)(50)(51)(52), including its importance in diagnosis of inborn errors of metabolism (53,54). It has been shown that a protein can acquire a second (moonlighting) function without concomitantly losing all or part of its original function. That is, mutations can enhance the moonlighting function without necessarily eliminating the ancestral function (23)(24)(25). The original ancestor of LIPT1 seems likely to have encoded a fully functional lipoate ligase analogous to E. coli LplA and B. subtilis LplJ (Fig. 1) rather than the present defective (half) ligase. In this scenario, evolution of the LIPT1 gene has resulted in a protein that has lost the ability to activate lipoate while acquiring amidotransferase activity and (temporarily?) retaining the lipoyl transfer activity of the defective ligase. Indeed, LIPT1 is a member of Pfam PF03099, a group of enzymes that are constructed on the same structural scaffold (4), albeit from diverse sequences and extra domains in the ligases. Fig. 6 legend: Complementation of the lipB deletion mutation (ΔlipB) of E. coli strain QC145 by expression of mouse LIPT2.
A synthetic gene encoding the full-length mouse LIPT2 coding sequence was inserted into plasmid vector pBAD322A, which provided the transcription and translation sequences required for expression of the protein, to give plasmid pCY754. Transcription was from the vector araBAD promoter, which is induced by arabinose and repressed by glucose. The plasmid was introduced into strain QC145 (ΔlipB::cml ΔlplA::kan) (68), which carries a deletion of the lplA lipoate ligase gene in addition to the ΔlipB deletion. The lplA mutation was included because mutant LplA proteins can bypass the ΔlipB defect by scavenging cellular octanoic acid (34). The medium was M9 minimal salts with glycerol as carbon source (glycerol allows basal expression of the araBAD promoter) and 100 μg/mL sodium ampicillin to select for plasmid maintenance. Derivatives of strain QC145 carrying either the LIPT2 plasmid pCY754 or the empty vector were grown on glucose minimal salts containing 5 mM each of sodium acetate and sodium succinate, which bypass the lipoylation requirement. These cultures were then streaked onto glycerol minimal salts plates supplemented with arabinose, glucose, or acetate plus succinate as given in the figure. The plates used are divided into sectors by plastic walls to prevent cross-feeding. The left sector of each plate contained the strain with the LIPT2 plasmid pCY754, whereas the right sector contained the empty pBAD322A vector. No growth was seen on plates that contained only glycerol (basal expression). The plates were incubated for 2 d at 37°C. Surprisingly, despite their conserved structural scaffold, these enzymes perform chemically distinct reactions: They can be lipoate (or biotin) ligases, octanoyl transferases, or lipoyl amidotransferases (4). Even PF03099 enzymes that catalyze the same reaction via the same chemical mechanism can be divergent. The LipB and LipM octanoyl transferases share almost no sequence conservation, and their active-site cysteine residues are found on different loops of the common scaffold (35) (Fig. 5B). It would be interesting to produce a mutant LIPT1 that lacks the partial ligase activity while retaining its amidotransferase activity (assuming that the same lipoate binding site is used in both reactions). A straightforward approach would be to mutate the LIPT1 residues that are hydrogen bonded to the lipoyl adenylate adenosine moiety. However, this is problematic because those bonds are primarily formed with backbone atoms (28). It should be noted that, although both LIPT1 reactions involve transfer of a lipoyl moiety, the energetics of transfer are strikingly different. Lipoyl-adenylate contains a "high-energy" mixed anhydride linkage, and thus lipoyl transfer is extremely facile. Indeed, adenylates are known to readily modify protein amino groups without enzymatic assistance (55,56). In contrast, the amide linking the lipoyl moiety to GCSH is among the lowest of "low-energy" linkages found in biology, and thus lipoyl transfer from this linkage is kinetically and chemically challenging. Although modification of lipoyl proteins by incorporation of exogenously supplied lipoic acid has been invoked in models of human lipoate disorders (21,22), there is a large body of evidence indicating that mammals are unable to use exogenous lipoic acid to bypass loss of the lipoyl assembly pathway. Dietary lipoate readily enters the bloodstream and tissues. Radioactive lipoic acid has been administered to mammals and its fate followed (57,58).
The labeled cofactor was quickly reduced and degraded by the β-oxidation pathway, and no evidence for attachment of exogenously fed lipoate to proteins was reported. Moreover, studies of homozygous lipoyl synthase (LIAS) knockout mice (59), of LIAS, LIPT1, and LIPT2 patients (plus fibroblasts derived from patients) (14-16, 27, 60), and of mammalian tissue culture cells blocked in synthesis of the lipoate backbone (42) invariably report that lipoic acid supplementation is without benefit. Moreover, lipoic acid supplementation did not significantly increase the levels of lipoyl-modified 2-oxoacid dehydrogenases or GCSH (14)(15)(16)(60). Our finding that LIPT2 is unable to catalyze octanoyl transfer from octanoyl-GCSH to the human pyruvate dehydrogenase LD (Fig. 7A) is expected from the phenotype of the LIPT1 disorder. If octanoylation of the 2-oxoacid dehydrogenase E2 subunits did occur, this could provide a substrate for LIAS-catalyzed sulfur insertion as suggested (26). However, if this were the case, loss of LIPT1 activity would be bypassed and no LIPT1 metabolic disorder would exist. Fig. 8 legend: Complementation of the ΔlipB deletion of E. coli strain QC146 by expression of human LIPT2. Synthetic genes encoding either the full-length human LIPT2 or a derivative that lacked the first 31 residues (Δ31, a methionine codon replaced residue 31 to permit translation) were inserted into vector pBAD322A as in Fig. 6 to give plasmids pCY1110 and pCY1108, respectively. The plasmids were introduced into strain QC146 (ΔlipB ΔlplA) (75), and transformants were streaked onto plates containing 0.02% arabinose or another carbon source as given in Fig. 6. Note that, unlike the mouse LIPT2, 0.2% arabinose gave rapid growth, but growth soon halted, suggesting that high-level expression of human LIPT2 is toxic. Experimental Procedures Chemicals and Growth Media. The antibiotics and most chemicals used in this study were purchased from Millipore, Sigma, and Fisher, unless noted otherwise. American Radiolabeled Chemicals provided [1-14C]octanoic acid. DNA manipulation enzymes were from New England Biolabs. DNA sequencing was performed by AGCT. Invitrogen provided the Ni2+-agarose column. Growth media were as in prior publications (Table 1). Human genes synthesized with optimized E. coli codons encoding GCSH, the inner LD of the E2 component of the human pyruvate dehydrogenase complex (E2p), LIPT1, and human or mouse LIPT2 were from Epoch Life Science or Integrated DNA Technologies. All constructs were verified by sequencing. The human LD and LIPT1 genes were inserted into vector pET28b (Table 1) to generate plasmids pXC.064 and pXC.065 using restriction sites NdeI plus BamHI and NdeI plus HindIII, respectively. Mouse LIPT2 was amplified with primers oXC288/oXC289 and inserted into the NdeI plus SalI restriction sites of vector pQE-2 to give pXC.066. The human GCSH gene was amplified with primers oXC159/oXC160 (Table 2), which added BspHI and HindIII sites to allow ligation into NcoI plus HindIII-cut vector pEH1 downstream of an IPTG-inducible promoter to give pXC.067. Plasmid pXC.068 was generated by excising LIPT1 directly from pXC.065 with restriction enzymes XbaI plus HindIII and ligation into pBAD33 using the same restriction sites. Plasmid pXC.068 carries the pET28b ribosome binding site. Plasmid pXC.069 was generated by excision of the human His6-LD gene from pXC.064 via the NcoI and HindIII sites followed by ligation into the same sites of pBAD1031G (Table 1).
Construction of the lipoate auxotroph strain XC.127 (MG1655 ΔlipB) was performed as previously described using pKD3 (Table 1) as the template and P1-P2 as the primers (Table 2) (61,62), and the chloramphenicol marker was excised by the Flp recombinase encoded by pCP20 (Table 1) to yield XC.127. The ΔlipB construct was verified by sequencing a PCR product obtained using primers oXC134 and oXC135 (Table 2). Plasmids and Bacterial Strains (Table 1). Protein Expression and Purification. Hexahistidine-tagged versions of Homo sapiens GCSH and LIPT1 were expressed in E. coli BL21, whereas Mus musculus LIPT2 was expressed in DH5α. These strains were grown in 1 L of LB medium containing the antibiotics required for plasmid maintenance. Expression was induced by the addition of 25 μM IPTG at the start of the culture. Cells were harvested by centrifugation after incubation at 30°C for 22 h. The proteins were purified by nickel affinity and anion exchange chromatographic steps as previously described (63). Protein concentrations were determined both by the Bradford assay (64) and at 280 nm using extinction coefficients calculated with the ProtParam program of the ExPASy tool website. Protein purity was monitored by SDS/PAGE. To purify the hexahistidine-tagged human LD (E2p) in the purely apo form, plasmids pXC.064 and pTARA were cotransformed into the lipoic acid auxotrophic strain QC146 to yield strain XC.184. The strain was grown at 30°C in M9 minimal medium with 0.8% glycerol, 5 mM acetate, and 5 mM succinate (pH 7.0), and 0.2% arabinose was added at culture initiation. IPTG was added to 100 μM when the culture reached an absorbance of 0.6 at 600 nm. The culture was incubated for another 6 h before the cells were frozen at −80°C. The protein was purified by Ni2+ affinity chromatography followed by anion exchange chromatography as previously described (65). Bacillus subtilis LipM and GcvH, E. coli holo-acyl carrier protein (ACP) and LD, the Vibrio harveyi AasS, and the Streptomyces coelicolor GcvH and LD proteins were purified as described previously (12,35,65-67). Electrospray mass spectrometry was carried out as described previously (65). After incubation at 37°C for 1 h, each reaction was loaded on a 15% native polyacrylamide gel containing 2 M urea and separated by electrophoresis. The gel was stained with Coomassie R-250, soaked in Amplify (GE Healthcare), dried on filter paper, and exposed to preflashed Biomax XAR film (Kodak) at −80°C for 24 h. Western Blot Analysis of LIPT1 Amidotransferase Activity. Anti-lipoyl protein primary antibody was utilized to probe protein lipoylation, as described previously (65). Briefly, LIPT1-catalyzed amidotransfer reactions (20 μL) were loaded onto an SDS/PAGE gel and transferred by electrophoresis to Immobilon-P membranes (Millipore) for 30 min at 60 V. The membranes were preblocked with TBS buffer (100 mM Tris base and 0.9% NaCl, pH 7.5) containing 0.1% Tween 20 and 5% nonfat milk powder. The membranes were probed for 1 h with an anti-lipoyl protein primary antibody (Calbiochem) diluted 1:10,000 in the above buffer. Following incubation with an anti-rabbit secondary antibody (diluted 1:5,000; GE Healthcare Life Sciences), the labeled proteins (human LD) were detected using Quantity One software. ACKNOWLEDGMENTS. This work was supported by National Institutes of Health Grant AI15650 from the National Institute of Allergy and Infectious Diseases.
8,080.8
2018-07-09T00:00:00.000
[ "Biology" ]
Neuropsychological Evaluation of Cognitive Failure and Excessive Smart Phone Use: A Path Model Analysis Smartphones and other mobile-related technologies are commonly viewed as indispensable tools for enhancing human cognition; however, prolonged use of these devices may have a detrimental and long-term effect on users’ abilities to think, recall, and pay attention. Excessive smartphone use may have a detrimental effect on an individual’s mental health. It has the ability to affect an individual’s memory, capacity for effective thought, and cognitive and learning capacities. The purpose of this study is to determine the effect of smartphone use on people’s cognitive abilities. Excessive smartphone use and cognitive failures were measured using the Smartphone Addiction Scale (Kwon et al., 2013) and the Cognitive Failures Questionnaire (Broadbent et al., 1982; revised by Wallace et al., 2002), which were used to collect data from 200 young adults using a purposive sampling strategy. Pearson’s product-moment correlation was used to measure the strength of the relationship between the variables, and regression analysis was used to measure the function relating the variables. The results of the study conclude that excessive smartphone use is related to forgetfulness, distractibility, and false triggering. Hence, it can be concluded that individuals who use smartphones excessively may be prone to cognitive failures such as forgetfulness, distractibility, and false triggering. Excessive smartphone use has been linked to a higher risk of cognitive impairment. The phone has grown from a device used solely for communication (calls) to a computer-replacement device capable of online surfing, gaming, instant communication via social media platforms, and work-related productivity applications. Within the last decade, the Western world has witnessed a remarkable expansion in the use of mobile technologies. While smartphones have become a more integrated part of everyday life, they have also become more capable of enhancing, if not entirely replacing, critical cognitive processes. With the ability to act as a telephone directory, appointment scheduler, tip calculator, navigator, and gaming device, smartphones appear capable of doing an unlimited number of cognitive activities and satisfying a significant number of our affective demands on our behalf. 1 While smartphones clearly keep us connected, many people have developed an unhealthy obsession with them. Smartphones have a tendency to immediately capture the attention of those engaged in an activity that is unrelated to the smartphone. [2][3][4] A three-second distraction such as reaching for a cell phone is sufficient to divert attention away from a cognitive task 5, and it has been demonstrated that the mere possession of a smartphone has a detrimental effect on working memory capacity, fluid intelligence, and attentional processes. [4][5][6] Additionally, compulsive use of these devices can cause disruptions in the workplace, school, and personal relationships when individuals spend more time on social media or gaming than on actual human interaction. Also, a recent study reveals that psychological dependence on a mobile phone diminishes the effect of smartphone presence on cognitive function. 4 Cognitive failures are considered a normal occurrence in everyday life and typically manifest as lapses in attention.
Exogenous variables, such as distracting stimuli, or endogenous processes, such as ruminations or daydreaming, may contribute to these attentional deficiencies. 7 This distraction is detrimental to subsequent cognitive tasks, resulting in increased errors as the distraction interval lengthens, which is especially noticeable in the classroom, where students who use their phones in class take fewer notes 8 and perform poorly academically. 9-10 Students at college are likely to suffer unfavorable repercussions as a result of these exogenous disruptions. [11][12][13][14] Indeed, recent research indicates that smartphone use during class and study time is distracting and reduces information retention. Similarly, the sound of a notification from the smartphone or even the mere existence of a smartphone can impair a college student's ability to concentrate on a lecture, and both the ringing of a cell phone during class and the mere presence of a smartphone negatively affect college students' performance. [10][11][12]15 While smartphones and other mobile technologies have the potential to alter a wide variety of cognitive areas, empirical research on the cognitive effects of smartphone technology is currently fairly limited. This is understandable, given that technology is still continually evolving. As a result, it is critical to understand how smartphones impact us in order to take the required precautions to avoid harmful outcomes. 1 The aim of the present study is to investigate the correlation between smartphone use and different dimensions of cognitive failure in daily life among young adults. Materials and Methods Hypotheses: Based on the review of the literature, the following alternative research hypotheses were formed: H 1 There will be a significant relationship between smartphone use and forgetfulness. H 2 There will be a significant relationship between smartphone use and distractibility. H 3 There will be a significant relationship between smartphone use and false triggering. Sample Description. An ex-post facto survey research design was used, where the researchers examined the operation of variables without manipulating them to assess the relationship between smartphone use and cognitive failure variables in young adults. A total of 200 young adult students from Chennai, India, were included in the study, of whom 99 were male and 101 were female. The average age was 22.54 ± 2.75 years. A purposive sampling technique was used in the study, where participants between the ages of 18 and 35 were included and participants above age 35 were excluded. Similarly, participants diagnosed with severe cognitive impairment and other psychiatric disorders were excluded. Tools Used. The Smartphone Addiction Scale by Kwon et al., 2013 is a 33-item self-report measure of problematic smartphone use habits. 17 The measure has a six-point Likert scale response format, with a maximum total score of 198. Responses range from "1" (strongly disagree) to "6" (strongly agree). In the initial validation research, the measure revealed a high level of internal consistency (Cronbach's alpha = 0.967). A high score implies increased smartphone use and is associated with a greater risk of smartphone addiction. The Cognitive Failures Questionnaire by Broadbent et al., 1982, 17 focuses on cognitive failures in daily life and was developed to examine cognitive failures in three critical areas: perception, memory, and motor performance.
Later, researchers recognized that the CFQ has multiple additional elements, with Wallace and his colleagues presenting a four-factor solution. 18 They discovered four factors: Memory, Distractibility, Blunders, and Names. Later, Rast and his colleagues found that the CFQ has three factors: Forgetfulness, Distractibility, and False Triggering. 19 Subscale scores representing these aspects are obtained by adding the scores from all relevant items. The simplest way to score the scale is to add up the ratings of the 25 separate items, providing a score between 0 and 100. Procedure Online and offline versions of the Smartphone Addiction Scale and Cognitive Failure Questionnaire were accessible to the participants, and the participants chose whether to fill out the questionnaire online based on their comfort and preferences. Before the questionnaire was distributed, participants signed an informed consent form, and only after consent was obtained were the Smartphone Addiction Scale and Cognitive Failure Questionnaire distributed for data collection. The participants were informed that "There is no correct or incorrect response. Kindly circle/click the values between 1 and 7 that most accurately reflect your memory assessment. Consider your options carefully and strive to be truthful. Your responses will be kept private. Kindly respond to all questions." Additionally, participants were informed of their choice to withdraw and guaranteed that the information gathered would be kept confidential and utilized for research purposes only. 140 individuals completed the Smartphone Addiction Scale and Cognitive Failure Questionnaire offline, and 60 completed them via the online form. Data Analysis. Following data collection, the data were analyzed using SPSS 20.0 (Statistical Package for the Social Sciences). Descriptive statistics and Pearson's product-moment correlation were utilized to determine the strength of the association between the variables. Also, regression analysis was used to model the functional relationship between the variables. Results and Discussions. Smartphone usage levels were observed among young adults, as an increased amount of smartphone use is associated with a greater risk of smartphone addiction. From the results, we can see that 55% of young adults' smartphone usage is at a normal level, and about 45% of young adults' smartphone usage is at a high level. Several dimensions of cognitive failure, such as forgetfulness, distractibility, and false triggering, are observed in young adults. The mean of forgetfulness was 14.60; the means and standard deviations for all dimensions are given in Table 3. According to Table 4, the most frequent cognitive failure at a high level of smartphone usage was distractibility (18.13 ± 3.10), followed by false triggering (15.07 ± 3.32) and forgetfulness (14.82 ± 3.71). Similarly, it was discovered that the most frequent cognitive failure at a normal level of smartphone usage was distractibility (17.68 ± 3.10), followed by false triggering (15.01 ± 3.23) and forgetfulness (14.35 ± 3.22). Figure 1 represents the participants' dimension-specific responses and the overall mean score of cognitive failure. Error bars convey a broad sense of how precise a measurement is or how far the reported value may be from the true or error-free value. Preliminary results revealed that there is a significant moderate relationship between smartphone usage and dimensions of cognitive failure (forgetfulness, distractibility, and false triggering).
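For readers who want to reproduce this type of scoring and correlational analysis outside of SPSS, the sketch below (in Python) illustrates the computations described above: summing the 33 SAS items, summing CFQ items into the three dimensions, and computing Pearson correlations and simple regressions. The column names (sas_1..sas_33, cfq_1..cfq_25) and the item-to-subscale assignments are hypothetical placeholders rather than the authors' actual scoring key, and the input file is assumed to contain one row per participant.

```python
# Minimal sketch of the scoring and correlation analysis described above.
# Column names and the item-to-subscale mapping are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # one row per participant

# Smartphone Addiction Scale: 33 items, 1-6 Likert, total score 33-198.
df["sas_total"] = df[[f"sas_{i}" for i in range(1, 34)]].sum(axis=1)

# CFQ: 25 items, 0-4 each, total 0-100; subscales are sums of their items.
subscales = {
    "forgetfulness":    [1, 2, 5, 7, 17, 20, 22, 23],    # illustrative only
    "distractibility":  [4, 9, 10, 11, 14, 19, 21, 25],  # illustrative only
    "false_triggering": [3, 6, 12, 15, 16, 18, 24],      # illustrative only
}
for name, items in subscales.items():
    df[name] = df[[f"cfq_{i}" for i in items]].sum(axis=1)

# Pearson correlations between smartphone use and each CFQ dimension.
for name in subscales:
    r, p = stats.pearsonr(df["sas_total"], df[name])
    print(f"SAS vs {name}: r = {r:.2f}, p = {p:.4f}")

# Simple linear regression of each dimension on smartphone use.
for name in subscales:
    slope, intercept, r, p, se = stats.linregress(df["sas_total"], df[name])
    print(f"{name} = {intercept:.2f} + {slope:.3f} * SAS (p = {p:.4f})")
```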
These correlations indicate that forgetfulness, distractibility, and false triggering increase as smartphone usage increases. Similarly, there is a significant relationship between the dimensions of cognitive failure as well. There is a significant weak relationship between false triggering and forgetfulness. In other words, when false triggering occurs, the individual tends to forget what they were remembering or were trying to recall. Similarly, there is a significantly moderate negative relationship between false triggering and distractibility. This indicates that when false triggering occurs, distractibility is reduced among individuals (Table 5). Hence, we can conclude that H 1 (There will be a significant relationship between smartphone usage and forgetfulness), H 2 (There will be a significant relationship between smartphone usage and distractibility), and H 3 (There will be a significant relationship between smartphone usage and false triggering) are accepted. Figure 2 depicts the path analysis model and the significant correlation between excessive smartphone use and cognitive failures in domains such as forgetfulness, distractibility, and false triggering. While previous studies show that cell phone addiction is related to negative emotional effects, very little research has looked into the relationships between mobile phone use and cognitive outcomes related to daily cognitive functioning. The prevalence of smartphones nowadays is a topic of discussion for healthcare practitioners, mental health professionals, educators, parents, and anyone who regularly uses a smartphone. A recent study reveals that smartphone use has an effect on the brain, while the long-term ramifications are unknown. 20 Many people check their phones when they wake up, use them on the way to work, and keep an eye on them at all times while at work. Many people's last sight before falling asleep is a phone screen. These habits have become so ingrained in people's lives that they rarely take a step back to assess the effects on their bodies and brains. 21 Interaction with technology typically requires mental shifts from one context to another. Phones make individuals always available for contact and information from various aspects of life. Work-related emails, personal instant chats, social media posts, news, and entertainment are all mingled and intertwined in a never-ending stream of beeps, pings, and flashing notification symbols. 22 People have evolved in various ways during the last decade to deal with this onslaught of information. The most noticeable feature is media multitasking, which involves continually juggling media and non-media tasks while often utilizing numerous digital devices linked to various internet sites. 23 Diffusion MRI was employed in a recent study to examine white matter structural connectivity, and it revealed a link between activity in the right amygdala and excessive smartphone use in adolescents. 24 Several reviews published in recent years have evaluated whether excessive smartphone use may be considered a kind of behavioral addiction. 25 Studies have also investigated whether there are any distinctions between excessive smartphone use and Internet use disorder, and additional research is underway. 26 A number of recent studies have found that excessive smartphone use is related to mental health concerns and a reduction in psychological well-being.
There is persistent evidence of a link between excessive smartphone use and other mental disorders, such as depression, anxiety, OCD, and ADHD, in a manner similar to the link between Internet addiction and other psychiatric disorders. 27 Our findings are supported by previous studies indicating that excessive smartphone use carries a high risk of causing cognitive failures. In addition to the significant overall correlation between smartphone use and the cognitive failure domains, the linear regression also predicted that smartphone use might cause cognitive failures. This study contributes a distinctive perspective to research on cognitive psychology, digital well-being, and addictive behaviors towards technology. The study's findings can serve as a guide and source of evidence for developing and implementing pertinent intervention strategies aimed at reducing cognitive failure issues and excessive smartphone use, which have a detrimental effect on mental health and cognitive performance, thereby alleviating the burden on family caregivers, the healthcare system, and society as a whole. The number of participants analyzed in this study was adequate for the problem we investigated, considering this as pilot work. However, future studies should recruit larger samples spanning various geographical locations, ages, genders, and working styles that might influence the predictors and relationships between the variables. Conclusion This study establishes a link between excessive smartphone use and forgetfulness, distractibility, and false triggering. According to this study, need-based smartphone use may be causing cognitive failure in young individuals, including forgetfulness, distractibility, and false triggering. Excessive smartphone use has been linked to a higher risk of cognitive impairment. Acknowledgments The first author wishes to thank and acknowledge Prof. Dr. O. T. Sabari Sridhar, Head, Department of Psychiatry, Sri Ramachandra Institute of Higher Education and Research, for their insightful comments and assistance throughout the work. She also wishes to express her gratitude to the study's participants for their unwavering support and cooperation throughout the course of the study. This work was carried out as a part of the Ph.D. research by the first author; hence she is highly thankful to the Chettinad Academy of Research and Education for the Junior Research Fellowship. Conflict of Interest There is no conflict of interest. Funding Sources The research was funded by Chettinad
3,402
2022-12-20T00:00:00.000
[ "Psychology", "Computer Science" ]
A novel method for causal structure discovery from EHR data and its application to type-2 diabetes mellitus Modern AI-based clinical decision support models owe their success in part to the very large number of predictors they use. Safe and robust decision support, especially for intervention planning, requires causal, not associative, relationships. Traditional methods of causal discovery, clinical trials and extracting biochemical pathways, are resource intensive and may not scale up to the number and complexity of relationships sufficient for precision treatment planning. Computational causal structure discovery (CSD) from electronic health records (EHR) data can represent a solution; however, current CSD methods fall short on EHR data. This paper presents a CSD method tailored to EHR data. The application of the proposed methodology was demonstrated on type-2 diabetes mellitus. A large EHR dataset from Mayo Clinic was used as the development cohort, and another large dataset from an independent health system, M Health Fairview, as the external validation cohort. The proposed method achieved very high recall (.95) and substantially higher precision than the general-purpose methods (.84 versus .29, and .55). The causal relationships extracted from the development and external validation cohorts had a high (81%) overlap. Due to the adaptations to EHR data, the proposed method is more suitable for use in clinical decision support than the general-purpose methods. Even when a true causal direction exists and is not masked by data artifacts, CSD algorithms can have difficulty distinguishing the cause from the effect due to statistical equivalence 12 . Leveraging the longitudinal nature of EHR data and incorporating time information as part of the causal discovery process can enhance the identification of edge orientation. In this paper, (1) we propose a data transformation procedure that distinguishes new incidences from preexisting conditions, which allows us to determine the temporal order of the disease-related events despite the inaccurate (or rather noisy) timestamps in the EHR data. (2) We then present modifications to an existing CSD method, (Fast) Greedy Equivalence Search (GES) 13,14, to infer the direction of causal relationships more robustly using longitudinal information and to take the above study design considerations into account. We demonstrate this methodology through the clinical example of type-2 diabetes mellitus (T2D), its risk factors and complications. T2D is an exceptionally well-studied disease with numerous clinical trials having produced a vast knowledge base, making the evaluation of the methodology possible. The goal of this work is not to uncover new causal relationships in diabetes, but to present a novel methodology for discovering causal relationships from EHR data that are sufficiently robust to support model development for clinical decision support tools. While we use T2D as our use case, we expect our methods to generalize to other diseases, typically chronic diseases, that exhibit similar characteristics and suffer from the same EHR shortcomings. Methods Study source and population. This retrospective cohort study utilized EHR data sets from two independent health systems, Mayo Clinic (MC) and M Health Fairview (FV). Two time windows for MC and two for FV were defined. Dates for the time windows differed between MC and FV due to data availability.
We extracted diagnoses, prescriptions, laboratory results, and vital signs from the two EHR data sets with the same inclusion and exclusion criteria: patients must have at least two blood pressure measurements, one before the first time window and one after the second time window; be aged 18+ at the end of the first time window; and have known sex and age. Figure 1A shows an overview of the study design of the MC EHR (the study design for FV is similar). We used the MC EHR as the development cohort. Variables. Diagnosis codes are aggregated into the disease categories of obesity, hyperlipidemia, pre-diabetes, type 2 diabetes mellitus, coronary artery disease, myocardial infarction, heart failure, chronic renal failure, cerebrovascular disease, and stroke based on ICD-9 and ICD-10 codes following our previous work 15. Medications indicated for the above conditions were rolled up into NDF-RT therapeutic subclasses. Relevant laboratory results and vital signs were categorized based on cutoffs from the American Diabetes Association guidelines 16. Causal structure discovery. A relationship between two events is causal if manipulating the earlier event causes the other (later) event to change. For example, prescribing a medication reduces the probability of downstream events (complications). Causation differs from association. For example, blood sugar is associated with risk of stroke: diabetic patients with higher blood sugar have a higher risk of stroke; however, this relationship is likely not causal in diabetic patients since attempts to reduce the risk of stroke by reducing blood sugar consistently failed in clinical trials 17,18. If two events share a common cause (a confounder) and are not otherwise causally related, then manipulating one event will not affect the other variable as long as the common cause remains unchanged. The confounder can be observed or latent. The term causal structure refers to the set of all existing causal relationships among all events and can be visualized as a graph. The causal graph consists of nodes, which correspond to events, and the nodes are connected by edges that denote causal relationships. General-purpose CSD methods are designed to work with observational data to derive a causal structure that is consistent with the joint probability of the data. Several general-purpose CSD algorithms have been proposed, and the interested reader is referred to Supplement II, where we present an overview of the major methods. In this work, we focus on (Fast) Greedy Equivalence Search (FGES) as the comparison method, because we previously found it to outperform other CSD methods 19. Briefly, FGES finds the optimal causal graph by a greedy search guided by a goodness-of-fit score (e.g. BIC or BDeu) over all possible graphs. Specifically, it starts with an empty graph, and iteratively adds individual edges that maximize the score given the current graph, until adding edges no longer improves the score. Then, FGES iteratively removes individual edges that maximize the score, until edge removal ceases to improve the score. The output of FGES is a pattern, which can contain undirected edges, where the causal effect direction could not be determined due to statistical equivalency. FGES has good mathematical properties and has been shown to be consistent under a set of assumptions 14,20.
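A minimal sketch of the greedy forward/backward search just described is given below in Python. It is illustrative only: real FGES operates on equivalence classes of DAGs (patterns), uses a decomposable score with extensive caching, and enforces additional constraints, none of which are modeled here. The function and variable names are placeholders, and the caller supplies the goodness-of-fit score (e.g., BIC).

```python
# Minimal, illustrative sketch of the greedy forward/backward search described
# above. Real FGES searches over equivalence classes (patterns) with a
# decomposable, cached score; here we simply edit a set of directed edges
# using a caller-supplied goodness-of-fit score (higher is better), e.g. BIC.
from itertools import permutations

def greedy_search_sketch(variables, score):
    """variables: list of variable names; score: callable(edge_set) -> float."""
    edges = set()
    candidates = set(permutations(variables, 2))  # all ordered pairs (a, b)

    # Forward phase: add, one at a time, the edge that improves the score most.
    improved = True
    while improved:
        improved, best_gain, best_edge = False, 0.0, None
        current = score(edges)
        for e in candidates - edges:
            gain = score(edges | {e}) - current
            if gain > best_gain:
                best_gain, best_edge = gain, e
        if best_edge is not None:
            edges.add(best_edge)
            improved = True

    # Backward phase: remove, one at a time, the edge whose removal helps most.
    improved = True
    while improved:
        improved, best_gain, best_edge = False, 0.0, None
        current = score(edges)
        for e in set(edges):
            gain = score(edges - {e}) - current
            if gain > best_gain:
                best_gain, best_edge = gain, e
        if best_edge is not None:
            edges.remove(best_edge)
            improved = True

    return edges
```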
Proposed methods. The workflow of the proposed methods is described in Fig. 1B, method 3 (colored in orange). We propose two methods, a data transformation and a causal search method. The former transforms the longitudinal EHR data into disease-related events, so that we can determine the temporal ordering of events (diseases) despite inaccuracies in the EHR data, and extracts all pairs of diseases where a clear precedence ordering exists. The search method constructs the causal graph using the transformed data and the set of precedence pairs. Data transformation method. A disease-related event is defined as a diagnosis, a prescription, an abnormal lab result, or an abnormal vital sign. An event is incident if it occurs in the second time window but is not present in the first time window although the patient is observed in the first time window. Conversely, a disease event is pre-existing if the patient presented with it in or before the first time window. An event A precedes another event B if, among patients who have both A and B in the second time window, B is significantly more likely to be incident than A. Note that precedence implies neither causation nor association; however, if a causal effect exists, it must follow the precedence direction. Formal mathematical definitions of these concepts can be found in Supplement I. The output from this step is (i) an event-based data set consisting of the incident and pre-existing conditions for each patient in each of the two time windows, and (ii) a set C of precedence relationships containing all pairs (v_i, v_j) of events for which event v_i clearly precedes v_j. The proposed CSD search algorithm. Given C, we construct the causal graph G by iteratively adding the edge (v_i, v_j) from C that maximizes the goodness of fit of G. The orientation of this edge must be consistent with the precedence relationship, namely from v_i to v_j. The goodness of fit is defined by the BIC criterion. Let X^(1) and X^(2) denote the data sets collected in the two distinct time windows, where X^(2) follows X^(1) in time; x_s^(t) is the observation vector for subject s at cross-section t; v_s^(t) is the observation of variable (event) v for subject s at cross-section t; and pa(v, G)_s^(1) is the observation vector for the parents of v in the causal structure G at cross-section 1 for subject s. The algorithm estimates P(v_s^(2) | pa(v, G)_s^(1)) using logistic regression on the subjects that do not have v at the first cross-section and are under observation for both cross-sections. For subjects who have v at the first cross-section, the probability of having v at the second cross-section is 1. Since G represents the transition graph, the term P(x_s^(1) | G) is a constant. Finally, the BIC score is the log-likelihood of X^(2) given X^(1) and G, penalized by a term proportional to |G| log n, where n is the number of observations that are common to the two cross-sections, and |G| is the number of edges in the causal structure G. Algorithm 1 describes the proposed algorithm for constructing the causal graph G. G is a directed acyclic graph (DAG), with nodes representing variables and edges representing causal effects between a pre-existing and an incident variable.
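The sketch below illustrates, in Python, one way the data transformation and the precedence-constrained greedy search described above could be implemented. It is a simplification under stated assumptions: the data layout (one boolean column per event and time window, e.g., t2d_w1/t2d_w2), the binomial test used for "significantly more likely to be incident", the scikit-learn logistic regression for the conditional probabilities, and the exact BIC penalty constant are illustrative choices, not details taken from the paper's Algorithm 1.

```python
# Minimal sketch of the transformation and precedence-constrained search
# described above. Assumes a DataFrame with boolean columns "<event>_w1" and
# "<event>_w2"; the observation-window filter described in the paper is
# omitted for brevity, and the test and penalty constant are assumptions.
import numpy as np
import networkx as nx
from scipy.stats import binomtest
from sklearn.linear_model import LogisticRegression

def precedence_pairs(df, events, alpha=0.05):
    """Return pairs (a, b) where, among patients with both events in window 2,
    b is significantly more often incident (absent in window 1) than a."""
    pairs = []
    for a in events:
        for b in events:
            if a == b:
                continue
            both = df[df[f"{a}_w2"] & df[f"{b}_w2"]]
            a_inc = int((~both[f"{a}_w1"]).sum())
            b_inc = int((~both[f"{b}_w1"]).sum())
            n = a_inc + b_inc
            if n and binomtest(b_inc, n, 0.5, alternative="greater").pvalue < alpha:
                pairs.append((a, b))
    return pairs

def node_log_likelihood(df, v, parents):
    """Log-likelihood of v in window 2 given parents in window 1, fitted by
    logistic regression on patients who do not already have v in window 1."""
    fit = df[~df[f"{v}_w1"]]
    y = fit[f"{v}_w2"].astype(int)
    if not parents or y.nunique() < 2:
        p = np.clip(y.mean() if len(y) else 0.5, 1e-6, 1 - 1e-6)
        return float((y * np.log(p) + (1 - y) * np.log(1 - p)).sum())
    X = fit[[f"{p}_w1" for p in parents]].astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    probs = np.clip(model.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
    return float((y * np.log(probs) + (1 - y) * np.log(1 - probs)).sum())

def bic(df, graph, events):
    ll = sum(node_log_likelihood(df, v, list(graph.predecessors(v))) for v in events)
    return ll - 0.5 * graph.number_of_edges() * np.log(len(df))  # assumed form

def proposed_search(df, events):
    g = nx.DiGraph()
    g.add_nodes_from(events)
    candidates = set(precedence_pairs(df, events))
    improved = True
    while improved:
        improved, base = False, bic(df, g, events)
        best_gain, best_edge = 0.0, None
        for a, b in candidates - set(g.edges()):
            g.add_edge(a, b)
            if nx.is_directed_acyclic_graph(g):
                gain = bic(df, g, events) - base
                if gain > best_gain:
                    best_gain, best_edge = gain, (a, b)
            g.remove_edge(a, b)
        if best_edge is not None:
            g.add_edge(*best_edge)
            improved = True
    return g
```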
Statement of human rights and informed consent. The study was approved by both the Mayo Clinic and the University of Minnesota Institutional Review Boards (IRB). Informed consent was obtained from all patients. All relevant guidelines and regulations were followed. Evaluation Clinical evidence. The standard way to evaluate CSD methods is to compare the resulting graph to a gold standard graph. However, such a gold standard graph does not exist and possibly many relationships are unknown. However, there exists (1) associative evidence: a large body of observational studies documenting risk factors and outcomes for diabetes. Results from these studies have already been distilled into summaries 21. (2) Clinical trials can support both the existence (positive) and the lack (negative) of hypothesized causal relationships. We compiled a list of causal relationships from clinical trials, considering 175 clinical trials with a primary endpoint of any of the conditions we studied, including composite end points. We excluded trials with inclusion criteria that are too strict (trial results would not generalize to our population) and trials with interventions that are out of the scope of our study. Fourteen trials remained, yielding 19 positive and 18 negative causal relationships. These trials and the evidence they produced are listed in Supplement III, Table S1. These relationships are used as causal evidence to compute recall. Internal evaluation. We evaluated the method and the resulting graphs from the following four perspectives. Stability. We ran 1000 bootstrap replicas on the development cohort. An edge has ambiguous orientation if it is present in at least half of the 1000 graphs (the edge is not noise) and both orientations appear in at least 30% of the graphs that contain this edge (it does not have a dominant direction). We report the percentage of ambiguous edges. Precision. Based on the causal graph derived from the training cohort, an edge is incorrect if there is no associative evidence of a relationship between the two events, or if causal evidence specifically indicates the lack of a causal relationship. We define precision as one minus the proportion of incorrect edges among the discovered edges. Causal recall. Causal recall is computed on a single graph discovered from the training cohort, quantifying the percentage of the known causal relationships discovered. A known causal relationship from A to B is discovered if there is a node in the graph that maps to A, another node that maps to B, and (a) a direct causal relationship A → B exists in the graph or (b) a causal path A → X → B exists and no causal evidence states that in patients with X, A does not cause B. For example, if the evidence states that blood pressure (without specifying whether it is systolic or diastolic) increases the risk of stroke, then the path sbp → cevd → stroke would satisfy this relationship. Associative recall. Associative recall is also computed on a single graph discovered from the training cohort, and it quantifies the percentage of known associative relationships that can be explained by the discovered causal graph. An associative relationship between A and B is explained by the graph if there is a node in the graph that maps to A, another node that maps to B, and a path between A and B exists in the graph. External validation. We performed 1000 bootstrap replications on both data sets independently using the proposed method. On each data set, all edges from the 1000 graphs were pooled, resulting in two sets of pooled edges. We compared these two sets and pointed out the edges that were discordant between the MC and FV data, as shown in Fig. 1C.
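The recall metrics defined above reduce to reachability queries on the discovered graph. A minimal Python sketch follows, assuming the graph is a networkx DiGraph and that each evidence item has already been mapped to node names; the exception clause for indirect paths ("in patients with X, A does not cause B") is simplified here to a set of forbidden intermediate nodes per evidence item.

```python
# Minimal sketch of the recall metrics defined above. Assumes the discovered
# graph is a networkx DiGraph whose node names match the evidence items.
import networkx as nx

def causal_recall(graph, causal_evidence):
    """causal_evidence: list of (a, b, forbidden_intermediates)."""
    hits = 0
    for a, b, forbidden in causal_evidence:
        if a not in graph or b not in graph:
            continue
        if graph.has_edge(a, b):
            hits += 1
        elif nx.has_path(graph, a, b):
            # Accept an indirect path unless every path from a to b runs
            # through a node for which the evidence negates the effect.
            paths = nx.all_simple_paths(graph, a, b)
            if any(not (set(p[1:-1]) & set(forbidden)) for p in paths):
                hits += 1
    return hits / len(causal_evidence)

def associative_recall(graph, associative_evidence):
    """associative_evidence: list of (a, b); any connecting path counts."""
    und = graph.to_undirected()
    hits = sum(
        a in und and b in und and nx.has_path(und, a, b)
        for a, b in associative_evidence
    )
    return hits / len(associative_evidence)
```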
Method comparison. Figure 1B depicts an overview of the method comparison. Three methods are compared: (1) FGES + raw, in which FGES is applied directly to the raw data; (2) FGES + transf, in which the data is transformed using the proposed transformation method and FGES is applied to the transformed data; and (3) Proposed, in which the proposed search algorithm is applied to the transformed data. Comparing FGES + raw and FGES + transf isolates the effect of the proposed transformation method, and comparing FGES + transf and Proposed highlights the effect of the proposed search algorithm. Results Baseline characteristics. Table 1 presents descriptive statistics for the MC and FV data sets at the end of the first time window and incidence rates for the diseases in the second window. Differences between datasets are tested through the t-test (for age) and the chi-square test (all other variables). Directional stability. The proposed data transformation reduced the percentage of ambiguously oriented edges from 45% to 24%, and finally, the proposed search method eliminated ambiguously oriented edges (Table 2). Table 3 shows the precision, associative recall and causal recall of the graphs discovered by the three methods. All three methods achieved almost perfect recall; FGES + raw achieved the lowest precision of 0.294: less than a third of the events reported as causally related are even associated. By using the proposed transformation, the precision increased to 0.55, but almost half of the reported causal relationships are still incorrect. Finally, the proposed method achieved a precision of 0.838. We present the causal graph discovered by the proposed methods in Fig. 2. Incorrect edges are colored in red. External validation. We compared the graphs discovered from the MC and FV data sets. There are 74 distinct edges that were discovered from at least one of the data sets. Sixty (81%) edges coincided across the two datasets, while 14 (19%) differed. Table 4 shows the discordant edges, the percentage of bootstrap iterations in which the edge was present, and the main reason for the discordance. There are three broad reasons for differences in edges. The main reason, affecting half of the edges, was policy differences. These include preferred lab results (A1C vs FPG) and decisions regarding therapeutic interventions. The second reason, affecting four edges, is a lack of clear precedence in the relationships among the events. For example, the abnormal Trigl → HL treatment edge was not discovered at FV because the first abnormal Trigl precedes or follows the HL treatment in statistically equal proportions. The final reason, affecting the remaining three edges, is a differential degree of confounding between the two sites. For example, SBP is a confounder of CHF and MI. When the algorithm fails to detect the SBP → MI edge, the effect of SBP on MI was shown through CHF (which depends on SBP more than MI). For the HL diagnosis → Trigl edge, the common cause is BMI, and for the HL treatment → CAD edge, it is LDL. The reason for differential confounding was likely a combination of population and institutional differences as well as data artifacts. Discussion We proposed a new data transformation method and a new search algorithm specifically designed for EHR data. We showed how the resulting graph achieved close to 90% precision (90% of the edges were correct), almost 100% recall (the graph could explain all known associations and almost all known causal relationships), and the graph was remarkably stable in the face of data perturbation (no edge disappeared or changed direction).
Owing to these EHR-specific adaptations, our method outperformed general-purpose methods by a large margin. While the two graphs from the two independent health systems are reassuringly similar, small differences exist. None of these differences implies an incorrect physiological or pathophysiological effect. Among the 14 edges that differed, seven captured differences between the populations and the institutions, such as institution-specific triggers for prescriptions and the use of different laboratory tests for the same purpose (fasting plasma glucose versus A1c). Depending on the goal of the modeling, it may be desirable to include such differences. We believe that the discovered causal graphs offer adequate information about causal (including confounding) factors to support the development of clinical decision support models and can also support clinical research efforts. The proposed algorithm achieved such high performance because it could compensate for errors in the EHR data and it incorporated study design considerations. Problems caused by incorrect time stamps and diseases appearing in the reverse order are alleviated by reducing the overall reliance on time stamps. The study design with its two-year windows allows for (even large) errors in the time stamp, and once a disease is recognized as pre-existing by the data transformation method, its subsequent time stamps are irrelevant. Time stamps that appear in the reverse order tend to have a small gap (time to schedule and complete a diagnostic procedure), so they likely fall into the same two-year window. Study design considerations, namely that billing codes do not distinguish between incident and pre-existing conditions as well as whether a patient is under observation or not, are addressed through the data transformation method. The ability of the search algorithm to produce a DAG is achieved by using precedence relationships to orient edges that have equal probability in both orientations. Precedence relationships in turn rely on the pre-existing/incident status of the disease as determined by the data transformation method. Generalizability beyond diabetes. The proposed method was demonstrated on type 2 diabetes, but it can generalize to other applications as long as the target application benefits from some of the improvements: reducing the impact of inaccuracies in the EHR data, accounting for the temporal ordering of events, and distinguishing pre-existing and incident conditions. The method assumes that pre-existing diseases persist during the second time window. Future work. The algorithm requires longitudinal data with at least two time windows. Different diseases and their symptoms might manifest at different rates; incorporating this knowledge into the discovery process may enhance the performance of the algorithms. Secondly, the proposed methods may be able to capture the effect of medication changes when a study design of multiple (more than two) time windows is applied. The current implementation assumes a single incidence of a disease, or that the disease persists during the study period. Another possible extension could relax this assumption, allowing for transient conditions that can have multiple incidences in the study period.
Thirdly, variable semantics (such as SBP and DBP being measures related to hypertension) is an essential component of the proposed algorithm, but it is not always available in a computable form. Further, both datasets in this study are from the Midwest, with a predominantly white patient population. The generalizability of the discovered causal relations can be further tested by examining a broader patient population. Conclusions We have demonstrated that the graph produced by the proposed transformation and search algorithm is more stable across bootstrap iterations and as complete as those of other methods, yet it contained substantially fewer errors (had higher precision) than graphs produced by general-purpose methods. The resulting graph was successfully validated using longitudinal EHR data from an independent health system. We conclude that the proposed method is more suitable for use in clinical studies using EHR data. Data availability The data that support the findings of this study are not publicly available since they contain patient health information. Authorization to access patient data can be requested from the Mayo Clinic and University of Minnesota Institutional Review Boards.
4,802
2021-10-25T00:00:00.000
[ "Medicine", "Computer Science" ]
Linguacultural Features of Teaching English for Specific Purposes in Cross-Cultural Interactions Teaching English for specific purposes to non-linguistic students presents some challenges due to several factors. They are determined by the growing demands of the global economy on the level of professional competence, the diversification of the employers' demands of the employees, and the need to account for the cultural peculiarities of regional economies. The authors present a study of organizing the process of English for specific purposes teaching considering linguacultural features. The authors assess the role of English in the modern world and the global economy. English continues to change the language behaviour of people around the globe and is currently the primary tool for large-scale bilingualism. The article analyses the main linguacultural features of different regional business communities’ representatives. The article outlines the primary course design requirements: the communicative orientation and integrating the linguacultural features of professional communication. The authors substantiate the need to develop a methodological, theoretical and practical basis for the implementation of the linguacultural component of the course for university students. Introduction The importance of international communication in modern society is directly related to the global issues of developing science, technologies, and economies and maintaining reasonable relations and understanding across different cultures. Our society is subject to all kinds of misunderstandings on many levels, from personal relationships to business and politics. The goal of linguistic training of university students is to satisfy those social needs that are associated with the active integration of our specialists into world science and professional activities. Success in professional activity on the global market is treated as an achievement of professional goals [4,6]. For modern professionals, the ultimate goal of studying a foreign language is "the formation of a linguistic, socioeconomic competence that assumes an adequate use of foreign language code in the implementation of all types of verbal and non-verbal professional intercultural communication in the economic sphere" [4]. The objectives of the university course of a foreign language include the development of language, speech, and socio-cultural competence, i.e., mastering the system of a foreign language for scientific and professional communication, proceeding in cross-cultural interactions with representatives of a different cultural realm [9]. English, as one of the main universally recognized languages of international professional communication, is assumed to ensure the equal access of graduates of Russian universities to the international professional community. Teaching English for special purposes (ESP) to students of non-linguistic profiles presents some challenges due to several factors. They are determined by the growing demands of the global economy on the level of professional competence of graduates, by the diversification of employers' demands for workers, as well as by the need to consider the cultural characteristics of regional economies.
Apparently, for successful interaction with other people, choosing the appropriate trajectory of relations requires extensive social knowledge. At the least, it includes assessing the situation, assessing the interlocutor, potential relationships with them, and their behaviour, strategic planning of the communication style, assimilating the experience of social interaction, and social intuition [2,6]. Such experience of social and cross-cultural interaction can and should be integrated into the process of university English teaching. Also, the main emphasis in such training should be placed on the ability to take into account the linguacultural differences of representatives of different professional and business communities depending on the region. Theoretical frameworks In recent decades, many studies have been carried out that contain ideas about the need for a transition to "regional" varieties of the English language. In the methodology of teaching foreign languages, new tendencies have emerged, in which communicative models of language behavior of communicants are described in terms of their national affiliation (Prokhorov Y.E., Sternin I.A., Turunen N.). These ideas point to the necessity of revising the contents and the techniques of the ESP teaching course, considering linguacultural features and preparing students for productive cross-cultural interaction. Materials and Methods The research has involved over 800 students of Orenburg State University from humanitarian (2%), economic (29%), juridical (4%), technical (14%), automotive (33%), and aerospace (18%) specialties of the full-time department. The representatives of Russia, the CIS, the EU, Northern and Central Africa, and China studying here have participated as testees, enabling a real-life cross-cultural interaction survey. The experimental diagnostics of the incoming data included general supervision, different questionnaires, a self-analysis essay, and CV and professional profile scrutiny. The forming experiment also involved continuous control and feedback through regular tests (at least once a week). The latter comprised both students' educational and learning achievements, as well as some complementary external factors, such as psychological stability, social and cultural activity, motivation, interest, tolerance index towards different cultures, and readiness for cross-cultural interaction. The resulting data were obtained by similar tests, questionnaires, and interviews. Also, students' final projects (mainly Microsoft PowerPoint presentations) helped to gain vital information on the substantial outcomes of the experiment. The educational process has been organized in the traditional forms: lectures, seminars, and practical classes enhanced by the project method and interactive techniques provided on a parity basis and following the dialogue principle. The online interactions have amplified the students' learning, social, and cultural activity, together with the accessibility of feedback, providing valid optional and additional data for a personal multi-sided analysis. The online sessions of teaching English for specific purposes implied the use of Moodle and the AIST testing system on the www.osu.ru portal, along with selected Massive Open Online Courses at www.coursera.org. Some popular social networks (Facebook, Instagram, VK, YouTube, LinkedIn, etc.),
with each student's personal permission to include any facts found there into their diachronic profile have complemented to the generalized portrait of every individual involved. Besides all the empirical methods, only common mathematical ones for processing any data obtained have been used.No previously unknown or author-specific statistics methods have been engaged. Gender and age variables have demonstrated no significance in cross-cultural interactions, neither in the ESP teaching process and have not been taken into account. No political, force-major factors or any other impact in a short-time period have been considered or registered in their simultaneous connection or influence on the educational process, and despite any nationality or any side interested and involved. Results Since we accept communication as a central competence in the ESP teaching process, it must be carefully facilitated.More importantly, the process of acquiring communication skills and cultural consciousness should go hand in hand with the basic stages of developing communication skills. While the current theoretical and practical research Discussion The main components of the ESP course, such as analysing the needs of students and pragmatic focus of the course content, should be adjusted taking into account the cultural characteristics of potential professionals and language users. According to some authors, correct identification of the students' language needs is a crucial issue in the design of the course and continues to be a professional challenge for ESP teachers around the world.They suggest a flexible approach to the development of the course and the implementation of teaching methodology based on a socio-cultural approach.These conditions can make ESP teaching attractive and practical [3,7,9,10]. The authors dwell that students should gain a deeper understanding of cultural values, corporate culture norms and even professional jargon, which are inherent in representatives of different cultures.They should also consider and master some verbal and non-verbal norms of behaviour, which helps to avoid misunderstandings or even conflict situations associated with critical discrepancies in professional etiquette. Let's consider the basic requirements, which, in our opinion, are necessary to implement when designing both primary (compulsory) ESP course and additional (optional) language courses for university students.In particular, such courses can be introduced into the optional training program "Interpreter in the professional communication sphere" realized at the Orenburg State University for the students of non-linguistic profiles. The first requirement for the content of the course is suggested to be the development and intensification of the communicative focus of both content and teaching technologies.Such a requirement results from the need to teach not just language as such, but language as an instrument of communicative activity in conditions of intercultural professional communication.Communication acts both as a training goal and as a means to achieve this goal.Communicative approach implies maximum convergence of the teaching process by its nature with the process of communication.The communicative approach involves the mastery of various speech functions, that is, the ability to express communication intents (request, consent, invitation, refusal, advice, reproach, etc.).This understanding of the communicative approach allowed the researchers (M.Canale, G. Yule, B. Stephen, E. I. 
Passov, et al.) to describe its main features:
- communication focus of the teaching and learning process, realized in various types of speech activity;
- focus not only on the content of communication but also on its forms;
- functionality in the selection and organization of the material: linguistic and speech material should be selected following the functions and communicative intentions of the speaker;
- situations as critical forms in the selection of material and organization of training: language and speech material should be selected with reference to certain situations of communication and practiced in typical situations as well;
- use of authentic materials, which include language forms typical for the expression of a particular communicative intention, authentic texts and communication situations, as well as various verbal and non-verbal means characteristic of native speakers;
- use of genuinely communicative tasks that contribute to the formation of communication skills, and a practice format adequate to the conditions of real communication (pair and group work);
- individualization of the teaching process: the use of a person-centred approach, that is, considering the students' needs in the planning and organization of classes, reliance on individual cognitive styles and learning strategies, and the use of students' personal experience.

Thus, it is evident that the teaching process should be focused mainly on communicative activity, and not on the grammatical component. This does not mean ignoring grammatical accuracy. However, recent studies indicate a tendency to simplify the grammatical structures used in ESP and a transition to more basic linguistic structures [3,7,8,11]. The next requirement is considering linguacultural features in teaching English for special purposes. First, it is necessary to consider more deeply the special status of English in the modern world and its unique influence on global intercultural communication processes. In terms of the share of the world's population it reaches, English has surpassed the spread of Latin during the medieval period, of Sanskrit in its ancestral territory of South Asia, as well as of Spanish, Arabic, and French. In other words, English continues to change the linguistic behaviour of people around the globe and is currently the main tool for large-scale bilingualism, since being bilingual now means speaking English as a second language, a language of broader communication, along with one or more languages of one's own region. The influence of English penetrates through literary works, international media and cinema, and now also through electronic media and the worldwide network. The hegemony of the English language in different countries is manifested in the spheres of business, science, education, management, literature, and international communication. More importantly, however, it manifests itself in the attitude of users towards the English language. This is the only natural language in history that is spoken as a second (or foreign) language by a significantly larger number of people than by native speakers [13]. It is this high number of so-called "non-natives" worldwide who now determine the processes of spreading and teaching English and its functioning as a universal communication tool. Communicating in English outside the Western countries is mainly communicating with non-native speakers [14]. Thus, the scale of the spread of the English language and its influence on other cultures and languages is a unique phenomenon in history.
Thus, speaking about requirements to the ESP course, we can assume that the integration of knowledge about the language behaviour of non-native language users, in line with the dynamics of the development of language norms inherent in native speakers and their integration into the content of training, guarantees the authenticity of academic requirements for students. When organizing the educational intercultural communication, it is necessary to consider both the individual cognitive space of the participants and their inner cultural level.It is known that even in case of a common cognitive base and belonging to one nation, misunderstanding can occur because of the divergence of personal cultures. Intercultural communication can be effectively realized on the levels of the language, culture, and personality of the subjects of communication.The organization of optimal intercultural professional communication in the teaching process implies several preconditions.These include: sufficient knowledge of the national characteristics of representatives of another linguacultural community, the appropriate motivation for communication (the creation of a zone of intersection, the readiness to assimilate the cultural stereotypes of another ethnos or society, expand their cognitive base and perceive the communicative (verbal and nonverbal) behaviour of others).Also, finally, successful intercultural interaction is achieved by the expedient and effective behaviour of communicants who establish and maintain professional communication [2,11,12]. If we consider the essential linguacultural features inherent in business communications among representatives of international business communities, the most distinctive features can be observed in two main categories: cultures based on interpersonal relations (relationship-based cultures) and cultures based on social norms (rule-based cultures).Each type of culture corresponds to its style of communication, in particular, communication in the professional sphere. Researchers define these styles by these two categories: 1. Relationship-based cultures correspond to highcontext communication style. 2. Rule-based cultures correspond to low-context communication style. Communicative behaviour in the professional sphere in relationship-based cultures is regulated by the relations "subordinate -superior."Persons of higher rank require explicit (expressive, verbal and nonverbal) linguistic expression of respect.Thus, a natural communicative environment for the expression of interpersonal relations in the professional sphere is formed.It implies implicit norms of communicative behaviour in every professional situation, which tends to a high context of communication [2, 5,6]. Communicative behaviour in the professional sphere in rule-based cultures is based on the social norms and the rules that are adopted and implemented by the entire business community.These rules are presented in the explicit form and stand above interpersonal relations, which do not matter for professional contacts.This phenomenon has an impact on the behaviour of communicants, which is more regulated, i.e., tends to low context communication. 
According to the materials of the Council of Europe program, the teaching of a foreign language and culture includes training foreign language teachers in mastering the necessary repertoire of professional and communicative roles [1]. The latter include such roles as an active, competent and experienced language user; an organizer and active "participant" of educational and communicative events in the classroom; a master instructor; a cooperating partner; an interactive interlocutor; a sympathetic listener; an experienced colleague; a consultant; a source of information and ideas; a researcher; and an experienced connoisseur capable of stimulating language learning and the acquisition of new cultural and behavioural skills and their effective use. In fact, the ideas of accounting for linguistic and cultural features can be built into the ESP course program: careful selection of relevant articles from the press, as well as from various business reviews, will provide the necessary basis for the practice of communication and the analysis of situations. Both linguistic and cultural awareness should be built into the communication process in a classroom organized by a competent teacher. Conclusions The ESP course should help students learn both the linguistic and the cultural features that are reflected in the communicative behaviour of business partners. There already exist both methodological and theoretical developments and sources that provide the necessary information for each stage of mastering competencies within each specific professional situation. We understand that creating a coherent system of nationally oriented ESP teaching programs remains a longer-term goal. Nevertheless, we hope that the existing theoretical foundations will serve as a basis for continuing scientific research in this field, as well as for practical use in the compilation of textbooks, teaching manuals and recommendations. The level of competence of ESP students may be significantly increased under the condition of creating a nationally oriented language learning paradigm aimed at forming a bicultural identity.

Table 1. Stages of the ESP teaching process. For the arrangement of the teaching process, considering the determined linguacultural features, we have designed the main stages of the ESP teaching process: the essential stage, the modelling stage, and the productive stage. Each stage includes three main components: cognitive, functional and motivational. The stages of the teaching process are presented in Table 1.
3,781.4
2018-01-01T00:00:00.000
[ "Linguistics", "Education" ]
Studying regional low-carbon development: A case study of Sichuan Province in China The unavoidable option for socially sustainable development is a low-carbon economy. One of the essential steps for China to attain high-quality development is reducing carbon emissions. It is necessary to realize low-carbon development in Sichuan, as it is not only an important economic zone but also an ecological protected area. The concurrent relationship among energy consumption, carbon emissions, and economic growth was examined in this study using the Tapio decoupling indicator, and the factors affecting energy consumption and carbon emissions in Sichuan were broken down using the logarithmic mean Divisia indicator (LMDI). The findings demonstrate a fundamental relative decoupling relationship between Sichuan’s energy use and carbon emissions. Analysis of energy consumption and carbon emissions in Sichuan Province from 2005 to 2020 shows distinct patterns. From 2005 to 2012, in 2014, and from 2016 to 2020, the relationship between energy use and carbon emissions was relatively decoupled, with decoupling values ranging between 0 and 1. Absolute decoupling occurred in specific years: 2010, from 2013 to 2018, and in 2020. These periods are characterized by economic growth alongside reductions in carbon emissions. Factors affecting energy consumption and carbon emissions were consistently analyzed, showing similar impacts throughout the study periods. We find that population and economic growth are the main driving forces of these effects. The effects of energy intensity and industrial structure mainly play restraining roles, and the latter has a slightly weaker effect than the former. Introduction The greenhouse effect, significantly driven by human activities, has emerged as the principal contributor to global warming over the past century, with carbon dioxide emissions at the forefront [1].This relentless warming trend poses a formidable challenge to the sustainable development of societies worldwide [2,3].As main carriers of human activities, the cities are deviating from sustainable development goals under the effect of urban heat islands (UHIs) [4].It makes sense to reduce temperatures, enhance biodiversity and sequester carbon by optimizing city planning and development strategies considering urban climate, especially expanding urban green spaces [5,6], but the fundamental measure is to reduce carbon emissions at the source.In response, nations globally have initiated national programs aimed at curbing carbon emissions.Amidst this backdrop, China, witnessing a surge in carbon emissions due to escalating energy consumption and an inefficient energy structure, has emerged as the world's leading carbon emitter, accounting for 30.9% of global emissions in 2021 [7,8].In a significant move, China announced its ambitions for achieving a "carbon peak" by 2030 and "carbon neutrality" by 2060, with a commitment to a gradual decline in carbon dioxide emissions post-2030.The strategic adjustment of its energy consumption structure and the reduction in energy consumption intensity underscore China's dedication to fostering a lowcarbon economy. 
Sichuan Province is not only an ecological protected area but also an important economic zone located in southwest China (97°21′E-108°12′E, 26°03′N-34°19′N), with a total area of about 486,000 km². Sichuan Province lies on the Sichuan-Yunnan ecological divide between the Tibetan Plateau and the Lesser Plateau, with greatly different landscapes and vastly complex terrain. It serves as a major center for the conservation of biodiversity worldwide, in addition to being a key water supply area for the upper reaches of the Yangtze River and a key recharge area for the upper reaches of the Yellow River. It is a crucial element of the nation's plan for environmental safety. Meanwhile, the economy of Sichuan Province has grown rapidly in recent decades, according to the National Bureau of Statistics. Sichuan's gross domestic product (GDP) in 2021 was 5.39 trillion yuan, ranking sixth in China. However, energy consumption has been growing rapidly during the economic development process of Sichuan Province. Its average growth rate over the last five years has been approximately 2.96%. The total energy consumption in Sichuan Province reached 230 million tonnes of standard coal in 2021. Of the total energy consumption, 25.9% was coal combustion, 17% was oil fuel consumption, 16.7% was natural gas consumption, and 40.4% was primary electricity and other energy consumption (as shown in Fig 1). The province's non-fossil energy consumption accounts for 39.5% of total energy consumption, which is more than 20 percentage points higher than the national average, ranking first in China. These data show that Sichuan Province has been experiencing a high level of energy consumption and an irrational energy mix during the economic growth period. In terms of carbon emissions, Sichuan Province showed a fluctuating upward trend from 2005 to 2012, followed by a slowly declining period. However, emissions were still 78.9086 million tonnes by 2020 (as shown in Fig 2). Therefore, to support the growth of a low-carbon economy in Sichuan Province, it is of utmost importance to research the interactions between the use of energy, the release of carbon dioxide, and economic development. The findings of our study will provide useful inspiration for the development of low-carbon economies in ecologically protected areas around the world.
Decoupling analysis is frequently used to examine the relationship between carbon emissions and economic growth and decompose the factors affecting carbon emissions [9], even though it was initially used to describe the relationship between energy consumption and economic growth [10][11][12].In terms of the subject, some scholars have highlighted this topic with a certain country [13][14][15][16], or a certain region [17][18][19].From the perspective of methodology, Tapio decoupled elasticity coefficient theory quantitatively analyses the situation where the pollution emissions growth rate changes as the economic growth rate changes, utilizing the decoupling factors proposed by the Organization for Economic Co-operation and Development (OECD) [20,21].Meanwhile, index decomposition analysis (IDA), structural decomposition analysis (SDA), and production-theory decomposition analysis (PDA) are the main decomposition techniques for carbon emission components.LMDI is widely used in factor analysis and is regarded as a preferred IDA decomposition method [22].The existing studies show that although there are many academic studies on the relationship between energy Research directions within the realm of environmental economics and sustainable development are diverse, unveiling the intricate interplay between economic activities and environmental outcomes.A pivotal focus among current research trajectories is the investigation of the relationship between economic growth and environmental degradation, often contextualized within the framework of the Environmental Kuznets Curve (EKC) hypothesis [23][24][25].This hypothesis posits an inverted U-shaped relationship between environmental pollutants and per capita income, suggesting that pollution levels increase with economic growth up to a certain threshold, beyond which they begin to decline.Through the application of econometric analyses, incorporating time-series, cross-sectional, and panel data, scholars assess the dynamics between economic development and various forms of environmental impacts.Sustainable energy transition emerges as another central theme, emphasizing the shift from fossil fuelbased energy systems to those relying on renewable sources and exhibiting lower carbon intensity.Research in this domain typically leverages scenario analysis and modelling techniques, such as the Long-range Energy Alternatives Planning (LEAP) system [26,27] or Integrated Assessment Models (IAMs) [28][29][30], to explore future energy pathways and their implications for climate change mitigation and energy security.Concurrently, resource efficiency and the circular economy are increasingly garnering attention.Studies in this area apply methodologies like Life Cycle Assessment (LCA) [31,32], Material Flow Analysis (MFA) [33], and Input-Output Analysis (IOA) [34,35] to comprehend the environmental impacts of products, services, and economic sectors throughout their entire life cycles.These methods aid in identifying opportunities for enhancing efficiency and implementing circular economy principles.Another significant research direction involves exploring the concept of decoupling, which refers to the capacity for an economy to grow without a corresponding increase in environmental pressure.Studies on decoupling utilize indicators such as the Tapio decoupling indicator to analyze the extent of economic growth relative to environmental degradation.This methodology is instrumental in evaluating the effectiveness of sustainable development policies. 
However, these research endeavors are not without limitations.Firstly, many studies predominantly focus on the macro-level, with insufficient analysis at regional or provincial scales, overlooking inter-regional disparities.Existing research often emphasizes a single indicator or methodology, lacking attempts to comprehensively assess and analyze using a variety of tools and methods.Regarding strategies and measures proposed for effectively achieving a balanced and sustainable development between energy consumption, carbon emissions, and economic growth, further in-depth exploration and validation are necessary.Moreover, there is little research on the decoupling of energy use and carbon emissions from economic growth or, more specifically, on the decoupling of energy use and carbon emissions in Sichuan Province.Thus, our goal is to close this gap in the literature.The study contributes to the current work on the development of the regional low-carbon economy in three major aspects.To the best of our knowledge, this study sets precedence by analyzing the factors affecting energy consumption and carbon emissions and decoupling the connection between energy consumption, carbon emissions, and economic growth.The research field of regional low-carbon economic development will be expanded in this study.Second, this paper takes Sichuan Province as the research object.The special feature of Sichuan Province is that it is not only an important economic zone but also an ecological protected area.Many things are different from other regions in terms of balancing economic development and environmental protection.Therefore, our study enriches the case studies of regional sustainable development.Finally, as far as we know, this is the first study to find that the population and economic growth are the main driving forces for low-carbon economic development in Sichuan Province, and the effects of energy intensity and industrial structure mainly play restraining roles.The findings will not only offer a direction for Sichuan Province in terms of its pursuit of a low-carbon economy, but they will also provide a piece of advice for other environmentally sensitive emerging nations and regions.Besides, this study aims to delve into the concurrent relationships between energy consumption, carbon emissions, and economic growth, particularly focusing on the specific case of Sichuan Province in China.Based on the aforementioned analysis, targeted policy recommendations are proposed to promote harmonious development between the economy and the environment in Sichuan Province and beyond while simultaneously reducing energy consumption and carbon emissions.It is hoped that this research will provide valuable insights and strategies for achieving sustainable development at both regional and global levels. The remaining sections of this study are arranged as follows.Section 2 provides the methodology.The results of the decoupling analysis on the consumption of energy, emissions of carbon dioxide, economic growth, and the decomposition of factors are described in Section 3. We critically discuss the results and put forward policy recommendations and scope for future research in Section 4. Section 5 concluded our study. 
Methods In order to analyze the development of the regional low-carbon economy in Sichuan Province, the Tapio decoupling index is adopted to test the decoupling relationship between energy use, carbon emissions, and economic growth, and the LMDI is used to decompose the variables affecting energy consumption and carbon emissions. The former can visualize the degree of decoupling between energy use, carbon emissions, and economic growth, and the latter can analyze the main factors affecting energy consumption and carbon emissions in Sichuan Province, as well as the degree of influence of each factor. The methods are introduced as follows. Tapio decoupling index The Tapio elasticity coefficient method was first created as a decoupling indicator, using the concept of decoupling elasticity, in a study of the decoupling of carbon dioxide emissions from the transportation sector and economic growth in the European Union and Finland [20]. We use the decoupling model to analyze the decoupling relationship between energy consumption, carbon emissions, and economic growth. The calculation formula of the decoupling index is as follows:

$$D_{E,Y} = \frac{\Delta E / E}{\Delta Y / Y}, \qquad D_{C,Y} = \frac{\Delta C / C}{\Delta Y / Y}$$

D_{E,Y} and D_{C,Y} are the elasticities of change in energy consumption and carbon emissions with respect to GDP growth, that is, the change in energy consumption and carbon emissions per percentage point change in GDP. ΔE_0, ΔE, and E are the rate of change, the amount of change and the initial value of energy consumption, respectively. ΔC_0, ΔC, and C are the rate of change, the amount of change and the initial value of carbon emissions, respectively. ΔY_0, ΔY, and Y are the rate of change, the amount of growth and the initial value of GDP, respectively. The decoupling states of Tapio's decoupling model can be categorized into eight categories: strong decoupling, weak decoupling, recessive decoupling, strong negative decoupling, expansive negative decoupling, weak negative decoupling, recessive coupling, and expansive coupling, as shown in Table 1. Among them, strong decoupling is the most desirable state, and strong negative decoupling is the least desirable state. LMDI model construction Decomposition analysis is an excellent way to visualize economic growth, and the LMDI is highly adaptable compared to other methods [36]. LMDI allows us to precisely quantify the roles of the level of technology, industrial structure, inflation, and population growth in economic growth over several years. The results can be used to summarize or confirm economic development patterns and provide guidance for future investment or reform policies. There are two main approaches to decomposition models: IDA (index decomposition analysis) and SDA (structural decomposition analysis). The former is a progressive and improved version of the latter. IDA can be further divided into two families of methods: the Laspeyres index decomposition method and the Divisia index decomposition method [37]. As one of the most frequently used carbon emission factor decomposition techniques, LMDI can produce a complete factor decomposition, and the findings do not include unexplained residual terms. The number of decomposition factors mainly includes four-factor and five-factor decompositions. For example, Wang et al. (2005) used LMDI to decompose China's carbon emissions into four factors: population, GDP per capita, energy intensity, and energy consumption structure [38]. Ma et al.
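A minimal sketch of how the Tapio elasticity and the eight-state classification could be computed is given below. It is not the authors' code: the 0.8 and 1.2 thresholds are the conventional Tapio values and are assumed to match the paper's Table 1, and all input numbers are illustrative.

```python
# Minimal sketch (not the authors' code): Tapio decoupling elasticity between
# carbon emissions (or energy use) and GDP, with the conventional eight-state
# classification based on the 0.8 / 1.2 thresholds.

def tapio_elasticity(x0, x1, y0, y1):
    """Elasticity D = (relative change of x) / (relative change of GDP y)."""
    return ((x1 - x0) / x0) / ((y1 - y0) / y0)

def tapio_state(dx, dy, d):
    """Classify a (change in emissions, change in GDP, elasticity) triple."""
    if dx >= 0 and dy >= 0:            # both emissions and GDP grow
        if d < 0.8:
            return "weak decoupling"
        return "expansive coupling" if d <= 1.2 else "expansive negative decoupling"
    if dx < 0 and dy >= 0:             # emissions fall while GDP grows
        return "strong decoupling"
    if dx >= 0 and dy < 0:             # emissions grow while GDP shrinks
        return "strong negative decoupling"
    # both emissions and GDP shrink
    if d < 0.8:
        return "weak negative decoupling"
    return "recessive coupling" if d <= 1.2 else "recessive decoupling"

# Illustrative (made-up) year-on-year values: emissions in Mt CO2, GDP in billion yuan.
c0, c1, y0, y1 = 250.0, 245.0, 4800.0, 5100.0
d = tapio_elasticity(c0, c1, y0, y1)
print(round(d, 4), tapio_state(c1 - c0, y1 - y0, d))   # negative elasticity -> strong decoupling
```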
(2008) decomposed the change in China's carbon dioxide emissions into the effects of population, GDP per capita, carbon-free energy incorporation, biomass substitution, and fossil fuels [39]. The calculation formula for LMDI is given below:

$$C = \sum_{i=1}^{n} C_i = \sum_{i=1}^{n} \frac{C_i}{E_i} \cdot \frac{E_i}{GDP_i} \cdot \frac{GDP_i}{GDP} \cdot \frac{GDP}{POP} \cdot POP = \sum_{i=1}^{n} \frac{C_i}{E_i} \cdot I_i \cdot S_i \cdot P \cdot POP$$

In the above equation, C denotes regional carbon emissions, n denotes the number of industrial sectors, C_i denotes carbon emissions from sector or industry i, E_i denotes energy use in sector i, GDP_i denotes the gross domestic product of sector i, and POP denotes population. $I_i = E_i / GDP_i$ denotes energy use efficiency, that is, energy consumed per unit of GDP. $S_i = GDP_i / GDP$ denotes the industrial structure, that is, the proportion of output of sector i in total output. $P = GDP / POP$ denotes GDP per capita. According to the full decomposition model, the change in carbon emissions, ΔC, can be additively decomposed into the energy efficiency (intensity) impact (ΔI), the industrial structure impact (ΔS), the population size impact (ΔP) and the economic growth impact (ΔGDP) from the base period "0" to the period "t". Its calculation formula is as follows:

$$\Delta C = C^t - C^0 = \Delta I + \Delta S + \Delta P + \Delta GDP$$

The decomposition of each influence factor X on the right-hand side of the above equation can be expressed with the logarithmic mean weight as follows:

$$\Delta X = \sum_{i=1}^{n} \frac{C_i^t - C_i^0}{\ln C_i^t - \ln C_i^0}\, \ln\frac{X_i^t}{X_i^0}$$

Data sources The information on population, GDP, total primary energy output, and total energy consumption used in this study is obtained from the Sichuan Statistical Yearbook. The primary industries are those related to farming, forestry, animal husbandry, and fishing; the secondary industries are those related to industry and construction; and the tertiary industries are those related to transportation, storage, postal services, wholesale, retail, lodging, and catering. The material balance technique is used in the computation of carbon dioxide emissions, and the data include information on the consumption of coal, oil, and natural gas. Because the energy consumption reported in the statistical yearbook has been converted into standard coal, it needs to be converted back into physical quantities. We use the standard coal conversion coefficients published by the National Development and Reform Commission to calculate the physical quantities of the various fuels used in primary energy consumption. The carbon emission factors are calculated using information from the US Department of Energy (DOE), its statistical agency the Energy Information Administration (EIA), the Japan Energy Research Institute (JERI), and the National Science Council Climate Project. The average values from these well-known institutions are shown in Table 2. This study did not use any kind of human participants or human data that would require approval. Decomposition of factors affecting changes in energy consumption According to the LMDI factor decomposition of energy consumption (Fig 4), the average contributions of population size and economic growth are positive, indicating that population and economic growth are driving factors for the sustained growth of energy consumption demand. The effect of energy intensity is negative, suggesting that the decline in energy consumption intensity has slowed down the growth of total energy consumption. The impact of industrial structure has been negative since 2011. From 2005 to 2010, the industrial structure had a positive driving effect on energy consumption; it shifted to a negative inhibitory effect after 2011.
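The additive LMDI decomposition described above can be sketched as follows. This is not the authors' code: the sector names and factor values are illustrative placeholders, the sectoral emission coefficients are held fixed (so that the four effects sum exactly to the total change), and only the structure of the calculation follows the formulas given in the text.

```python
# Minimal sketch (not the authors' code) of an additive LMDI decomposition of the change
# in carbon emissions into energy-intensity, industrial-structure, economic-growth and
# population effects, based on the identity C = sum_i F_i * I_i * S_i * P * POP with the
# sectoral emission coefficients F_i held fixed. All numbers are illustrative, not the
# paper's data for Sichuan Province.
from math import log

def lmean(a, b):
    """Logarithmic mean L(a, b) = (a - b)/(ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (log(a) - log(b))

# Per-sector factors for the base year 0 and the end year t: (I_i, S_i); F_i is constant.
sectors = {
    "secondary": {"F": 2.0, "I": (1.2, 0.8), "S": (0.45, 0.40)},
    "tertiary":  {"F": 1.5, "I": (0.5, 0.4), "S": (0.55, 0.60)},
}
P, POP = (3.0, 5.0), (8.0, 8.3)   # per-capita GDP and population, both years (arbitrary units)

effects = {"intensity": 0.0, "structure": 0.0, "economic growth": 0.0, "population": 0.0}
delta_c = 0.0
for s in sectors.values():
    c0 = s["F"] * s["I"][0] * s["S"][0] * P[0] * POP[0]   # sectoral emissions, base year
    c1 = s["F"] * s["I"][1] * s["S"][1] * P[1] * POP[1]   # sectoral emissions, end year
    delta_c += c1 - c0
    w = lmean(c1, c0)                                     # sectoral log-mean weight
    effects["intensity"]       += w * log(s["I"][1] / s["I"][0])
    effects["structure"]       += w * log(s["S"][1] / s["S"][0])
    effects["economic growth"] += w * log(P[1] / P[0])
    effects["population"]      += w * log(POP[1] / POP[0])

print({k: round(v, 3) for k, v in effects.items()})
print("sum of effects:", round(sum(effects.values()), 3), "= total change:", round(delta_c, 3))
```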
Discussions To gain more experience in promoting low-carbon economic development in the ecologically protected areas of developing countries, we will review and discuss the empirical results in this section. Decoupling of energy consumption and influencing factors of energy consumption change Overall, energy consumption and carbon emissions in Sichuan Province exhibit a mix of weak and strong decoupling from economic growth, showcasing a favorable trend.The study based on LMDI results reveals a consistent increase in energy demand driven by population and economic growth in Sichuan Province from 2005 to 2020.The effects of energy intensity and industrial structure have predominantly been negative until 2011, impeding the overall growth of energy consumption.However, post-2011, the industrial structure has played a significant role in boosting energy demand.These findings align with earlier research [40], indicating that the energy landscape in Sichuan Province is evolving under governmental and other influences.These empirical findings are closely tied to Sichuan's enhanced policy framework for energy consumption and the stringent implementation of energy conservation plans during the Eleventh and Twelfth Five-Year Plans.In terms of primary energy production, the share of coal is decreasing while direct electricity and natural gas percentages in clean energy are rising.Crude oil shares remain stable at 0.1% to 0.2%.This shift is attributed to the aggressive development of Sichuan's hydropower sector during the Twelfth Five-Year Plan, culminating in peak installed capacity exceeding 12 million kilowatts in 2013.Sichuan Province leads in China with an installed hydropower capacity of 89.04 million KW by the end of 2021.The growth rates of natural gas consumption, primary electricity, and other clean energy sources surpass those of coal, reflecting a growing market for renewable energy and changing consumer preferences toward clean and low-carbon energy sources.Despite advancements in energy consumption and production, the challenge of rising energy demand persists. In Sichuan Province, population dynamics primarily exert a positive driving force on carbon emissions, albeit at a reduced level in recent years.Demographic factors interact with economic, resource, and environmental factors, with urban expansion and infrastructure development contributing to increased energy consumption [41].While the implementation of new fertility policies has commenced, strengthening the positive impact of population size on energy consumption remains a challenge.The slowdown in population growth post- The average effect of industrial structure on carbon emissions resulting from energy consumption in Sichuan Province is -13.25%.Initially a positive driver of energy consumption from 2005 to 2010, the impact of industrial structures transitioned to an inhibitory effect after 2011.This trend mirrors Sichuan's industrial evolution, where the proportion of secondary industry initially rises before declining, leading to increased energy consumption per unit of GDP [42].Similar conclusions have been drawn in previous studies [43], highlighting the evolving dynamics of industrial structure and its impact on energy consumption in the region. 
Energy flows by sector and type are combined for four time spans, 2005, 2010, 2015, and 2020, as shown in Figs 6-9. Based on these four graphs, it can be found that for the secondary sector in Sichuan Province, coal and electricity account for the largest share of energy consumption. The tertiary sector, including transport, storage, postal services, wholesale, retail, accommodation, and other tertiary industries, has a relatively fragmented primary energy consumption. The transport, storage, and postal sectors consume more oil due to the long distances involved in transport. In contrast, the household sector consumes mainly electricity. Among the three main sectors, the primary sector, mainly agriculture, forestry, and fishery, consumes the least energy. The secondary sector, especially manufacturing and construction, generally suffers from overcapacity and highly concentrated energy consumption, with significantly higher energy consumption than the other sectors. Reducing overall energy consumption will therefore require further restructuring of the secondary sector and a significant reduction in energy demand from traditionally energy-intensive industries. Overall, there were no significant changes in energy flows across sectors. As an indicator of energy consumption efficiency, energy consumption intensity can reflect the technical level to a certain extent [44]. The energy consumption intensity of Sichuan Province maintained a downward trend from 2005 to 2020, falling from 1.6421 to 0.4359, and this trend is closely related to the consistent promotion and application of low-carbon technologies [45]. Since 2020, energy consumption has increased beyond the conventional statistics of coal, electricity, oil, and natural gas. Other types of energy are used for industrial and residential consumption in Sichuan Province, so the energy structure and variety have been optimized. In recent years, Sichuan Province has introduced several policies, such as the "Work Plan for Controlling Greenhouse Gas Emissions in Sichuan Province" and the "Action Plan for Energy Saving and Low Carbon Development in Sichuan Province (2014-2015)", to encourage more research and development on low-carbon technologies. Meanwhile, particular actions on energy-saving and emission reduction technologies are vigorously carried out to promote green development. To establish a "National Clean Energy Demonstration Province," Sichuan Province should propose more reasonable planning for the energy sector to effectively improve the efficiency of conventional energy consumption. Some new types of energy technology should be accelerated, and a large amount of clean energy has been put into enterprise production and residents' lives [46]. Decoupling of carbon emissions and influencing factors of carbon emissions Based on the estimates for carbon emissions and economic growth, the decoupling effect varies widely between 1.2 and -4 in Sichuan. An examination of the decoupling of carbon emissions and economic growth in Sichuan Province and the factors influencing it shows a decoupling value of 1.1104 in 2009, indicating that emissions were still closely coupled to economic growth. The growth rate of CO2 emissions was higher than that of GDP, mainly due to the high coal consumption during these years. The higher the coal consumption, the higher the energy consumption. Inefficient coal burning led to high emissions of carbon dioxide, soot, and dust, resulting in an increase of 10.99 million tonnes of carbon dioxide emissions in Sichuan.
Sichuan Province has responded positively to the "double decarbonization" objective in terms of policy during recent years [47]. It has tried its best to vigorously promote energy savings, reduce energy consumption, and increase carbon sinks. Meanwhile, low-carbon development is promoted through industrial systems, production methods, lifestyles, and consumption patterns. As seen from the carbon emissions of the three fossil energy sources in Sichuan Province (Fig 10), raw coal carbon emissions dominate total fossil energy emissions, and controlling coal consumption has always been the key to curbing the intensity of carbon emissions [48]. However, carbon emissions from crude oil and natural gas consumption in Sichuan Province are gradually increasing. Specifically, carbon emissions from raw coal, crude oil, and natural gas fluctuated from 2005 to 2010, with total carbon emissions rising and carbon intensity exceeding 0.5 tons per 10,000 yuan. An analysis of the reasons behind this reveals that the period 2006-2010 was the period of the 11th Five-Year Plan, a critical period for building a moderately prosperous society, during which Sichuan Province focused on economic development and vigorously implemented the strategy of strengthening the province through industry, so that carbon emissions from the major fossil energy sources were not reasonably controlled. During 2012-2020, carbon emissions from raw coal gradually decreased from 70,042,787 tonnes in 2012 to 42,768,403 tonnes in 2020 under the 12th and 13th Five-Year Plans. Sichuan Province has continued to promote energy conservation and emission reduction and has actively built a regional carbon emission reduction mechanism, thus leading to a planned regulation of coal consumption and a slow decline in carbon emissions from raw coal. At the same time, the carbon emission intensity has dropped to below 0.5 tons per 10,000 yuan and has shown a continuous downward trend. However, given the acceleration of economic development, Sichuan Province's energy consumption demand will not drop significantly in the short term, and the energy conservation and emission reduction challenges are still severe. From the perspective of the decomposition factors, it is evident that the driving effects of economic growth and population have significantly weakened after 2011, although economic growth and population still have a positive impact on the development of carbon emissions. During this process, the effects of economic growth and population decreased from 2083.7438 and 2193.5464 in 2011 to 254.2551 and 352.0754 in 2020, respectively. This is a positive outcome of Sichuan Province's response to national efforts to control greenhouse gas emissions and to promote a low-carbon consumption pattern, production, and lifestyle. Additionally, with the further implementation of the "Chengdu-Chongqing twin-city economic circle" strategy, Sichuan Province will gradually transform and upgrade its economic development, adhere to the low-carbon development path, and continuously improve the quality of its economic development [49]. It is noteworthy that the economic growth and population size effect values increased again in 2017-2018, most likely due to the rapid economic growth rate and the lack of reasonable limits on carbon emissions in economic development. In terms of energy efficiency, the effect of energy efficiency remains negative throughout the statistical years. The energy efficiency effect had a minimum value of -1927.6002 in 2013, indicating that energy efficiency had the
most inhibiting impact on carbon emission growth in 2013.Subsequently, the energy efficiency effect increased between 2014 and 2020, resulting in a decrease in the inhibiting effect of energy efficiency on carbon emissions growth, further highlighting the existing issues in recent years regarding energy use in Sichuan Province and the necessity to enhance the conversion and efficiency of energy use in Sichuan [50].With a few exceptions, the industrial structure impact has a negative value from 2005 to 2020, largely preventing Sichuan Province's carbon emissions from increasing.The industrial structure effect particularly exhibited positive values in the early years of 2005-2010 and 2013, indicating that the industrial structure played a role in promoting the rise of greenhouse gases during this period.The secondary sector accounted for the majority of the province's total energy consumption during this period, while the primary and tertiary sectors accounted for a smaller percentage.This aligns with the findings of the national sample [51].The industrial structure of Sichuan Province has a weak inhibitory influence on the rise of carbon emissions, as evidenced by the negative effect of industrial structure in the remaining years, with effect values ranging between -213 and -31. Policy implications In summary, in the context of Sichuan Province, the empirical findings reveal a complex relationship between energy use, carbon dioxide emissions, and economic growth.The findings underscore the effectiveness of "low carbon" strategies implemented since 2011 and the potential of industrial structure adjustments to limit the growth of carbon dioxide emissions.Theoretically, this study enriches the research content of regional economics by illustrating the intricate and dynamic relationship between economic growth, energy consumption, and carbon emissions through a detailed case study of Sichuan Province.In addition, this study contributes to an in-depth understanding of the mechanisms behind energy efficiency and industrial structure optimization, providing valuable insights for formulating sustainable development policies.More specifically.The following four recommendations are made for the future development of Sichuan Province, considering the "peak carbon" and "carbon neutral" targets and the requirements for low-carbon development in Sichuan Province. First, the government should improve the top-level design further and effectively plan for energy conservation and emission reduction.Sichuan Province should dovetail with the national development strategy, listen to different opinions from various parties, formulate a scientific energy conservation and emission reduction plan, and establish an energy planning supervision system to ensure the plan's implementation. Second, Sichuan Province can fully play a role in the incentives and constraints of finance and taxation.Fiscal subsidies or tax concessions should be given to enterprises that use recycled and alternative resources, meet energy-saving and emission reduction targets, or carry out energy-saving renovations; heavy taxes should be levied on high energy-consuming and high-polluting industries to achieve a reverse reduction in production costs and guide enterprises to follow the path of a low-carbon economy. 
Third, the government should deepen the monitoring and accountability mechanism while implementing supervision and inspection of energy-saving and emission-reduction work.Establish a long-term effective monitoring mechanism, investigate and publish the list of illegal units to ensure the effective implementation of emission reduction laws and regulations. Finally, Sichuan Province should not only establish a platform for sharing information and technology on energy conservation but also rely on existing innovation platforms such as the Chengdu-Chongqing Comprehensive Science Centre and the Western (Chengdu) Science City to speed up vital standard technologies and steadily improve energy processing and energy use efficiency. Scope for future research Although this study has conducted an in-depth analysis of the relationship between energy consumption, carbon emissions, and economic growth in Sichuan Province using the Tapio decoupling index and the LMDI decomposition method, there are certain limitations to the research.Firstly, the temporal and geographical scope of the study restricts its universality and currency.Secondly, while the methodologies employed are effective, they have inherent limitations that may not fully reveal the more complex dynamics of economic, social, and technological changes.Therefore, future research needs to expand the temporal and spatial scope, employ more diverse methodologies, and pay closer attention to the role of policy and technological progress to more comprehensively understand and explain the driving factors behind energy consumption and carbon emissions. Conclusions This study delves into the necessity of a low-carbon economy for sustainable social development, with a particular focus on Sichuan Province in China.By utilizing Tapio decoupling indicators and LMDI analysis, the research reveals a relative decoupling relationship between economic growth and energy consumption as well as carbon emissions in Sichuan Province.In Sichuan Province, from 2013 to 2018, there was a decoupling trend observed between energy consumption, carbon emissions, and economic growth, with slower growth in energy use and carbon emissions having a lesser impact on the overall decoupling process.The key decoupling factors influencing the yearly changes in energy consumption in Sichuan Province are population and economic growth, while industrial structure and energy intensity act as crucial limiting factors.The decline in energy intensity negatively affects total energy consumption growth, whereas population and economic growth positively drive stable energy demand growth.Changes in carbon dioxide emissions drivers in Sichuan Province have been analyzed, with economic growth and population abundance playing positive roles, and energy intensity acting as a significant limiting factor.Industrial structure has shown a moderating effect on carbon emissions growth, particularly after 2011, indicating the impact of modernization and optimization efforts on curbing emissions growth.The paper emphasizes the importance of balancing economic growth with carbon emissions reduction and puts forward targeted policy recommendations to promote coordinated development between the economy and the environment.While the study has made contributions, it also has limitations, such as the need for more in-depth regional analysis and a focus on comprehensive evaluation methods.Future research directions could explore further strategies to achieve sustainable development and address the gaps in 
understanding the dynamics of decoupling between energy consumption, carbon emissions, and economic growth.

Fig 5 presents the effect values of energy intensity, industrial structure, population size, and economic growth on carbon emission changes in Sichuan Province. Overall, economic growth and population size in Sichuan Province from 2005 to 2020 have shown positive impacts, indicating their significant roles in the development of carbon emissions. Energy intensity consistently exhibits a negative effect, acting as a suppressor of carbon emission growth. The influence of industrial structure has been relatively minimal, playing a weak role in carbon emission changes.

Table 1. Criteria for judging the degree of decoupling of economic output from carbon emissions.

This suggests that while energy consumption in Sichuan Province increased alongside GDP during this period, it did so at a slower rate than GDP growth, so that the expansion of GDP was accompanied by a relative decline in energy use per unit of GDP. The decoupling effect value of Sichuan Province's carbon emissions fluctuates significantly, ranging from 1.2 to -4. Specifically, carbon emissions in 2006-2008, 2010, 2012, and 2019 exhibited weak decoupling states, where carbon emissions increased alongside GDP growth but at a slower rate. In contrast, carbon emissions in 2011, 2013-2018, and 2020 demonstrated strong decoupling states, indicating a decrease in carbon emissions alongside GDP growth. The decoupling value in 2009 was 1.1104, reflecting expansive coupling. It is important to highlight that the decoupling index for carbon emissions decreased to -3.559 in 2015. Fig 3 illustrates the decoupling of energy consumption, carbon footprint, and economic growth in Sichuan Province. Overall, Sichuan Province's energy consumption and economic growth exhibit a predominantly weak decoupling. The decoupling value of energy consumption typically ranges between 0 and 0.8, signifying a weak decoupling state. However, in 2013 and 2015, it decreased to -0.6104 and -1.5752, respectively, indicating a strong decoupling state.
7,571.2
2024-05-29T00:00:00.000
[ "Environmental Science", "Economics" ]
Diffractive dijet photoproduction at the EIC We present a first, detailed study of diffractive dijet photoproduction at the recently approved electron-ion collider (EIC) at BNL. Apart from establishing the kinematic reaches for various beam types, energies and kinematic cuts, we make precise predictions at next-to-leading order (NLO) of QCD in the most important kinematic variables. We show that the EIC will provide new and more precise information on the diffractive parton density functions (PDFs) in the pomeron than previously obtained at HERA, illuminate the still disputed mechanism of global vs. only resolved-photon factorization breaking, and provide access to a completely new quantity, i.e. nuclear diffractive PDFs. Introduction From 1992 to 2007, hadron-electron collisions at DESY's circular installation HERA provided a wealth of data and information on the strong interaction and the partonic structure of the proton, not only in deep-inelastic scattering (DIS), where the virtuality of the exchanged photon is large, Q² ≫ 1 GeV², but also in photoproduction, where Q² ≤ 1 GeV². While electrons (or positrons) of energy 27.5 GeV mostly collided with protons of first 820, then 920 GeV, the last months of operation were dedicated to lower proton beam energies of 575 and 460 GeV in view of better access to the longitudinal structure function F_L and therefore the gluon dynamics at small momentum fractions x. While inclusive DIS precisely pinned down the quark parton distribution functions (PDFs), jets (which for dijet invariant masses larger than 16 GeV make up 10-20% of the inclusive DIS cross section) and photoproduction provided additional constraints on the running of the QCD coupling constant α_s and the gluon PDF in the proton [1,2]. The two general purpose detectors H1 and ZEUS were supplemented with forward taggers to identify diffractive processes ep → eXY, which, rather surprisingly, accounted for a substantial fraction (10-15%) of all events. In DIS, where QCD factorization was proven to hold [3], they could be interpreted in terms of diffractive structure functions. Under the additional assumption of Regge factorization [4], the flux of the pomeron (IP) and of higher Regge trajectories like the reggeon (IR) can be parametrized, pomeron PDFs could be extracted [5][6][7] and their universality tested in other types of collisions. It turned out that in hadron-hadron collisions at the Tevatron and the LHC, factorization was broken by a factor of 0.1 [8,9] to 0.025 [10], depending on the collision energy, the order of the perturbative QCD calculations and simplifying assumptions about the non-diffractive structure function [11], and in qualitative agreement with calculations based on multi-pomeron exchanges in a two-channel eikonal model [12,13]. For dijet photoproduction, these calculations suggest that factorization should hold for the DIS-like direct-photon processes, but be broken by a factor of 0.34 for the resolved-photon processes, where the photon fluctuates before the hard interaction into qq̄ pairs and their vector-meson dominated (VMD) bound states [14]. It is, however, well-known that at next-to-leading order (NLO) of QCD [15][16][17][18] and beyond [19,20] direct and resolved processes are connected through the factorization of collinear initial-state singularities, which to preserve factorization-scale independence should also be suppressed [21].
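As a minimal numerical illustration of the two factorization-breaking pictures referred to above (and discussed further below), the sketch contrasts a global suppression of the dijet cross section with a suppression applied only to the resolved-photon part. The component cross sections and the global factor are illustrative placeholders; only the resolved-only factor 0.34 is taken from the text.

```python
# Minimal sketch (not from the paper) contrasting a global suppression of the whole
# dijet cross section with a suppression applied only to the resolved-photon part.
# The component cross sections below are hypothetical values in pb.

def suppressed_total(sigma_direct, sigma_resolved, r_global=None, r_resolved=1.0):
    """Apply either a global factor or a resolved-only factor to a dijet cross section."""
    if r_global is not None:
        return r_global * (sigma_direct + sigma_resolved)
    return sigma_direct + r_resolved * sigma_resolved

sigma_dir, sigma_res = 80.0, 40.0   # hypothetical direct / resolved contributions
print("global 0.5        :", suppressed_total(sigma_dir, sigma_res, r_global=0.5))
print("resolved-only 0.34:", suppressed_total(sigma_dir, sigma_res, r_resolved=0.34))
```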
Also, the H1 [22] and ZEUS [23] data can be described by not only suppressing the resolved (and direct initial-state) contribution, but also by a global suppression factor of 0.42 to 0.71, even though this factor then depends on the transverse jet momentum and is subject to large theoretical uncertainties from scale variations and hadronization corrections [24]. It is also possible that the resolved-photon suppression factor depends on the parton flavor [25]. Elucidating the mechanism of factorization breaking in dijet photoproduction is therefore one of the important desiderata of the HERA program. Since there is currently no electron-hadron collider in operation, new experimental information can in the short term only be obtained from ultraperipheral collisions (UPCs) at the LHC [26], which have contributions from both photoproduction [27] and diffraction [28]. Dijet photoproduction at the LHC might even provide novel constraints on nuclear PDFs [29] or first information on the yet unknown diffractive nuclear PDFs [28,30]. In the medium term, the recently approved electron-ion collider (EIC) at BNL [31] has the potential for detailed studies of jets in DIS [32][33][34][35][36][37][38] and photoproduction [39][40][41][42] in the clean environment of an electron-nucleus collider, which was planned for Run 3 at HERA, but never implemented. In this paper, we explore in detail the EIC potential for diffractive dijet photoproduction. We begin by reviewing in section 2 our analytical approach based on the factorization of hadronic and partonic cross sections, the extraction of diffractive PDFs from HERA, our NLO QCD calculation of the partonic dijet cross section, and theoretical models for factorization breaking. Section 3 contains a large variety of results for diffraction on protons, starting with NLO QCD predictions for the EIC with colliding beams of 21 GeV electrons and 100 GeV protons and the corresponding K factors as well as studies of the scale evolution of the pomeron PDFs, the range of cross sections predicted from different HERA fits of the diffractive PDFs, of the cross section with a larger range in the longitudinal pomeron momentum fraction x IP and of the corresponding increase of the sub-leading reggeon contribution. We then demonstrate the advantage of a higher proton beam energy of 275 GeV and conclude this section with numerical predictions based on the different approaches to factorization breaking. In Section 4 we address the diffraction on nuclei. We start by reviewing the theoretical definition of nuclear diffractive PDFs and the leading-twist model of nuclear shadowing, before we make numerical predictions at NLO for diffractive dijet photoproduction on various nuclei and discuss again the different approaches to factorization breaking. Our conclusions are given in section 5. Analytical approach At the EIC, like at HERA, electrons e of four-momentum k will collide with protons p of four-momentum P at a squared center-of-mass system (CMS) energy S = (k + P ) 2 . For nuclei, the relevant quantity is the squared CMS energy per nucleon and is typically (i.e. for heavy nuclei) smaller by about a factor of Z/A ≈ 0.4, where Z is the nucleus charge and A is the number of nucleons. 
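A quick numerical check of the collision energies used in this paper can be sketched as follows, using S = (k + P)² ≈ 4 E_e E_p for massless head-on beams; the gold-beam line only illustrates the Z/A reduction mentioned above and is not one of the configurations used for the predictions, which take 100 GeV per nucleon.

```python
# Minimal sketch (not from the paper): squared CMS energy S = (k + P)^2 ~ 4 E_e E_p for
# ultrarelativistic head-on beams, and the per-nucleon reduction by ~Z/A for heavy ions.
from math import sqrt

def sqrt_s(E_e, E_p):
    """Approximate CMS energy in GeV, neglecting the electron and proton masses."""
    return 2.0 * sqrt(E_e * E_p)

print(round(sqrt_s(21.0, 100.0), 1))             # ~91.7 GeV for 21 x 100 GeV e-p collisions
print(round(sqrt_s(21.0, 275.0), 1))             # ~152.0 GeV for the higher proton beam energy
print(round(sqrt_s(21.0, 100.0 * 79 / 197), 1))  # per-nucleon energy scaled by Z/A for a gold beam
```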
In photoproduction, the virtuality Q² = −q² = −(k − k′)² of the radiated photon γ is small (typically less than Q²_max = 0.01−1 GeV²), and its spectrum can be described in the improved Weizsäcker-Williams approximation [43], f_γ/e(y) = α/(2π) [ (1 + (1 − y)²)/y ln(Q²_max/Q²_min) + 2 m_e² y (1/Q²_max − 1/Q²_min) ], (2.1) where Q²_min = m_e² y²/(1 − y). Here, α is the electromagnetic fine structure constant, k′ is the four-momentum of the scattered electron, y = (qP)/(kP) is its longitudinal momentum transfer and m_e its mass. Diffractive processes are characterized by the presence of a large rapidity gap between the central hadronic system X and the forward-going hadronic system Y with four-momentum p_Y, low mass M_Y (typically a proton that remained intact or a proton plus low-lying nucleon resonances), small four-momentum transfer t = (P − p_Y)², and small longitudinal momentum transfer x_IP = q(P − p_Y)/(qP) (see figure 1). In dijet photoproduction, the system X contains (at least) two hard jets with transverse momenta p_T1,2, rapidities η_1,2 and invariant mass M_12, as well as remnant jets from the diffractive exchange, dominated by the pomeron IP as the lowest-lying Regge trajectory, and from the photon, when the latter does not interact directly with the proton or nucleus, but first resolves into its partonic constituents. Assuming both QCD and Regge factorization, the cross section for the reaction e + p → e + 2 jets + X + Y can then be calculated through dσ = Σ_{a,b} ∫ dy f_γ/e(y) ∫ dx_γ f_a/γ(x_γ, M_γ²) ∫ dt ∫ dx_IP f_IP/p(x_IP, t) ∫ dz_IP f_b/IP(z_IP, M_IP²) dσ^(n)_ab. (2.2) The x_IP dependence is parameterized using a flux factor motivated by Regge theory, f_IP/p(x_IP, t) = A_IP e^(B_IP t) / x_IP^(2α_IP(t)−1), (2.3) where the pomeron trajectory is assumed to be linear, α_IP(t) = α_IP(0) + α′_IP t, and the parameters B_IP and α′_IP and their uncertainties are obtained from fits to H1 diffractive DIS data [5]. The longitudinal momentum fractions of the parton a in the photon, x_γ, and of the parton b in the pomeron, z_IP, can be experimentally determined from the two observed leading jets. M_γ and M_IP are the factorization scales at the respective vertices, and dσ^(n)_ab is the cross section for the production of an n-parton final state from two initial partons a and b. It is calculated in NLO in α_s(µ) [15][16][17][18], as are the PDFs of the photon and the pomeron. For the former, we use the GRV NLO parametrization, which we transform from the DIS_γ to the MS-bar scheme [44]. Our default choice for the diffractive PDFs is H1 2006 Fit B [5], which includes proton dissociation up to masses of M_Y < 1.6 GeV and is integrated up to |t| < 1 GeV² and x_IP < 0.03. We identify the factorization scales M_γ, M_IP and the renormalization scale µ with the average transverse momentum p̄_T = (p_T1 + p_T2)/2 [24]. Diffraction on protons In this first numerical section, we focus on electron-proton collisions at the EIC with an electron beam energy of E_e = 21 GeV and a proton beam energy of E_p = 100 GeV, which will in the next section also be used as the beam energy per nucleon for electron-nucleus collisions. We assume detectors that have the same kinematic acceptance as H1 for diffractive events, i.e. the capability to identify a large rapidity gap and/or a leading proton in a Roman pot spectrometer. We also allow for proton dissociation up to masses of M_Y < 1.6 GeV, a four-momentum transfer of |t| < 1 GeV² and a longitudinal momentum transfer of x_IP < 0.03. Photoproduction events are assumed to be selected with (anti-)tagged electrons and photon virtualities up to Q² < 0.1 GeV², assuming full kinematic coverage of the longitudinal momentum transfer 0 < y < 1 from the electron to the photon.
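The observed momentum fractions x_γ and z_IP introduced above are reconstructed from the two leading jets; the corresponding equations are not preserved in this copy, so the sketch below uses the standard hadron-level estimators employed by H1 and ZEUS (an assumption, including the convention that positive rapidity points along the proton beam):

import math

def dijet_observables(pt1, eta1, pt2, eta2, y, x_pom, E_e=21.0, E_p=100.0):
    # Average transverse momentum and observed momentum fractions of the parton
    # in the photon (x_gamma) and in the pomeron (z_IP), built from the two leading jets.
    # Values outside [0, 1] signal kinematically inconsistent inputs.
    pt_bar = 0.5 * (pt1 + pt2)
    x_gamma_obs = (pt1 * math.exp(-eta1) + pt2 * math.exp(-eta2)) / (2.0 * y * E_e)
    z_pom_obs = (pt1 * math.exp(eta1) + pt2 * math.exp(eta2)) / (2.0 * x_pom * E_p)
    return pt_bar, x_gamma_obs, z_pom_obs

# Example: jets of 5 and 4.5 GeV at eta = -1.0 and -0.5, with y = 0.7 and x_IP = 0.025
print(dijet_observables(5.0, -1.0, 4.5, -0.5, 0.7, 0.025))  # ~ (4.75, 0.71, 0.91)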
Jets are defined with the anti-k_T algorithm and a distance parameter R = 1, where at NLO jets contain at most two partons [45]. Given the limited EIC energy and experience from HERA, we assume that the detectors can identify jets above relatively low transverse energies of p_T1 > 5 GeV (leading jet) and p_T2 > 4.5 GeV (subleading jet). This will, however, require a good resolution of the hadronic jet energy scale and subtraction of the underlying event. The latter will also be important to avoid large hadronization corrections of the partons, which are particularly prominent at large x_γ and have so far obscured the interpretation of the observed factorization breaking. Note also that asymmetric jet p_T cuts allow one to avoid an enhanced sensitivity to soft radiation [46]. Rapidities are a priori accepted in the range η_1,2 ∈ [−4; 4]. We find, however, that in diffractive photoproduction most jets are central and have an average rapidity η̄ = (η_1 + η_2)/2 ∈ [−1.5; 0]. This range is enlarged to [−1.5; 1] at higher proton beam energy or for a larger range in x_IP, see below. The distribution in the jet average transverse momentum (figure 2, top left) extends only to 8 GeV, while at HERA with its larger CMS energy of 300 − 320 GeV it extended to about 15 GeV. Consequently, the total cross sections (full black curves) are dominated by contributions from direct photons (dotted green curves) and point-like quark-antiquark pairs, as one can also see from the accessible range in x_γ^obs > 0.5 (top right). It will thus not be easy at this CMS energy to distinguish global factorization breaking from a breaking in only the resolved-photon contribution. Due to the limited available energy, the cross section also requires the largest longitudinal momentum fraction allowed by the kinematic cut of the proton to the pomeron (bottom left) and is dominated by large momentum fractions of the partons (mostly gluons, dashed blue curves) in the pomeron (bottom right). Figure 3. Contribution from the gluon in the pomeron as a function of the jet average transverse momentum (top left), jet rapidity difference (bottom left) and observed longitudinal momentum fractions in the photon (top right) and pomeron (bottom right). At these low scales, the NLO corrections amount to about a factor of two and are thus sizable. At the kinematic edges, i.e. for large rapidity differences Δη = η_1 − η_2 or large values of z_IP^obs, they become even larger. For inclusive jet photoproduction at HERA, the corrections at approximate next-to-next-to-leading order (aNNLO) increase the cross section for p_T = 20 GeV by another 12%, improving the description of the considered ZEUS data [47]. At the same time, the scale uncertainty is considerably reduced [19,20]. This demonstrates that the perturbative expansion remains reliable despite seemingly large K-factors at NLO. Evolution of gluon in pomeron contribution Although the range in p_T is limited at √S = 92 GeV, it would be nice to observe the evolution of the diffractive PDFs in the pomeron with the energy scale, set here by the average jet transverse momentum. We therefore compare in figure 4 the fractional contribution of the gluon in the pomeron in the lowest p̄_T bin from 5 to 6 GeV to the one in the next-highest bin from 6 to 7 GeV. Clearly, this restricts the accessible range in z_IP^obs. The change with the scale reflects the relative increase of the rather constant gluon vs. the falling quark singlet density at large z_IP^obs, as shown in figure 11 of ref. [5].
Dependence on diffractive PDFs More important than the evolution of the diffractive PDFs, which should in principle be predictable from perturbative QCD, is the z_IP dependence itself, which must be determined from experimental data and which is therefore, despite the considerable progress at HERA, still subject to large uncertainties. In figure 5 we therefore compare our NLO QCD predictions for the EIC using three different fits of the pomeron PDFs to diffractive DIS at HERA: our standard prediction with the frequently used H1 2006 Fit B (full black), the accompanying Fit A (dotted green) [5], and ZEUS 2009 Fit SJ (dashed blue curves) [7]. Since the latter has been obtained from leading protons, i.e. without dissociation contributions, the corresponding cross sections must be and have been multiplied by a factor of 1.23. The main differences between H1 2006 Fits A and B are the starting scales Q²_0 = 1.75 GeV² and 2.5 GeV², respectively, and the gluon parametrization at large z_IP, which is singular in Fit A and - up to the small-z_IP exponential term - constant in Fit B. More precisely, both the gluon and singlet quark densities are parametrized at the starting scale with a form proportional to z^(B_i) (1 − z)^(C_i), where the gluon exponent C_g = −0.95 ± 0.20 in Fit A and C_g is fixed to 0 in Fit B. Attempts have subsequently been made to reduce this uncertainty by adding to the inclusive data also jet production data, as in H1 2007 Fit Jets (not used) [6] and ZEUS 2009 Fit SJ [7]. The former again uses Q²_0 = 2.5 GeV² and results in C_g = 0.91 ± 0.18, the latter uses Q²_0 = 1.8 GeV² and results in the smallest uncertainty on C_g = −0.725 ± 0.082. Note, however, that C_g is intimately linked to the other parameters in the gluon and quark singlet fits, including the pomeron flux factor, so that they cannot be directly compared. What one can observe from figure 5 is that the remaining differences among these fits translate into a visible spread of the predicted EIC cross sections. Range in x_IP and reggeon contribution The observations of the rather limited range in transverse momentum and the overwhelming importance of the direct photon contribution, which leaves little hope for resolving the question of factorization breaking, motivate us to consider also a larger range in x_IP. In figure 6 we therefore extend it from x_IP < 0.03 to x_IP < 0.10 (figure 6 is the same as figure 2, but with this extended range; in addition, the contribution from the subleading reggeon is shown as dot-dashed red curves). This immediately enlarges the reach in p̄_T from 8 to 14 GeV (top left) and the momentum fraction in the photon from 0.5 down to 0.1 (top right), so that now also resolved photons contribute substantially. Furthermore, the PDFs in the pomeron can now be probed in the entire range of z_IP from 0.1 to 1 (bottom right). The increase also seems to be sufficiently large, as the distribution in x_IP is no longer peaked at the cut, but around 0.06 (bottom left). This is important since the contribution from the subleading reggeon trajectory increases from less than about 2% at x_IP ≤ 0.03 to 10 − 35% at x_IP ≥ 0.06 − 0.10. In fact, to obtain a good description of the HERA diffractive DIS data, H1 and ZEUS include an additional sub-leading exchange (IR), which has a lower trajectory intercept than the pomeron and which contributes significantly only at large x_IP and low z_IP. This contribution is assumed to factorize in the same way as the pomeron term, such that the diffractive PDFs take the form of a sum of pomeron and reggeon terms, f_IP/p(x_IP, t) f_i/IP(z_IP, Q²) + f_IR/p(x_IP, t) f_i/IR(z_IP, Q²). The flux factor f_IR/p takes the form of eq.
(2.3), normalised via a parameter A IR in the same manner as for the pomeron contribution and with fixed parameters α IR (0), α IR and B IR obtained from other H1 and ZEUS measurements [5][6][7]. The parton densities f i/IR of the sub-leading exchange are taken from fits to pion structure function data. We choose the GRV NLO parametrization [48]. Other pion PDFs give similar results [5]. EIC beam energy dependence Like RHIC, the EIC will accelerate bare charged protons to higher energies per nucleon than it will accelerate nuclei that comprise also neutrons, i.e. not only to 100 GeV, but even to E p = 275 GeV. At the same time, the most recent plans envisage an electron beam of energy E e = 18 GeV rather than 21 GeV [49]. In figure 7, we therefore repeat our studies for this accelerator design and compare the reach in the different distributions with our default predictions. While a different hadron beam energy will make the extraction of nuclear effects from comparisons of the bare proton baseline with heavy nuclei more difficult, it increases the reach in the kinematic variables relevant for diffraction studies on protons alone. A first example is the reach in average transverse momentump T , which is extended from 8 to 12 GeV (top left). Interestingly, an increase in the cut on x IP to 0.1 had a larger effect, extending the reach to 14 GeV (cf. figure 6). A combination of both will therefore lead to an even larger reach. The x IP -distribution itself (bottom left) is now broader and its maximum near the cut at 0.0225 less sharp. Similarly to what we observed in the previous section with a larger x IP range, the distributions in longitudinal momentum fraction in the photon (top right) and pomeron (bottom right) also span now larger regions, here from 0.2 (rather than 0.1) to 1. In addition, the corresponding differential cross sections are now larger by one to two orders of magnitude, leading to increased statistics and precision in the corresponding measurements. JHEP05(2020)074 Factorization breaking Factorization breaking in diffractive dijet photoproduction is a result of soft inelastic photon interactions with the proton, which populate and thus partially destroy the final-state rapidity gap. This effect is usually described in the literature by a rapidity gap survival factor S 2 ≤ 1. Since the magnitude of S 2 decreases with an increase of the interaction strength between the probe and the target, the pattern of the factorization breaking can be related to various components of the photon [50]. In the laboratory reference frame, the highenergy photon interacts with hadronic targets by fluctuating into various configurations (components) interacting with the target with different cross sections. These fluctuations contain both weakly-interacting (the so-called point-like) components and the components interacting with large cross sections, which are of the order of the vector meson-proton cross sections. This general space-time picture of photon-hadron interactions at high energies is usually realized in the framework of such approaches as the vector meson dominance (VMD) model and its generalizations [51] or the color dipole model [52,53]. 
It is also used in the language of collinear factorization, where the photon structure function and JHEP05(2020)074 parton distribution functions (PDFs) are given by a sum of the resolved-photon contribution corresponding to the VMD part of the photon wave function and the point-like (inhomogeneous) term originating from the γ → qq splitting, see, e.g., ref. [44]. Note that the direct-photon contribution to diffractive dijet photoproduction corresponds to the configurations interacting with very small cross sections of the order of 1/p 2 T , which preserves factorization. The mechanism of factorization breaking in photoproduction is one of the important desiderata of diffraction studies at HERA and could be one of the physics goals of the EIC. The key question is whether factorization still holds for pointlike photons, similarly to DIS, where it has been proven [3], and is only broken for resolved photons [21], similarly to hadron-hadron scattering, where it is known to be broken [8][9][10][11][12][13], whether it is broken globally to a significant [22] or only a small extent [23], as the H1 and ZEUS data seem to indicate [24], and whether pointlike photon fluctuations into all quark-antiquark pairs [21] or rather those of light quark flavors only [25] rescatter and destroy the rapidity gap. This last point can be related to the mass scheme employed in the photon structure function, i.e. the applicability of dimensional regularization and the zero-mass variable flavor number scheme (ZM-VFNS) vs. the fixed flavor number scheme (FFNS), where the heavy quark mass serves as a regulator. Unfortunately, current photon structure function data do not yet allow to determine the corresponding kinematic ranges. Theory and experience from proton PDFs tells us that the FFNS should be used when p T ≤ m q , while the ZM-VFNS should be used when p T m q . In diffractive photoproduction both at HERA and the EIC, we find ourselves in the transition region p T ≥ m q , so that (in particular charm) mass effects are indeed still relevant and the charm contribution does not seem to be suppressed. A precondition for an important contribution from the EIC to settle these questions is a sufficiently large range in direct and resolved photon contributions, i.e. in x γ . For this reason, we continue to base our numerical studies here on the accelerator design studied in the last subsection with its higher proton beam energy of E p = 275 GeV. To avoid the reggeon contribution, which could obscure the situation further, we use the lower cut on x IP < 0.03. In figure 8 we compare three different schemes of factorization breaking, i.e. a global factorization breaking by a factor of 0.5 as determined in the low-p T measurement of H1 (full black curves) [22], a breaking of the resolved-photon components by a factor of 0.34 as predicted by the two-channel eikonal model (dotted green curves) [14], which is not substantially altered by the finite remainder of collinear quarks and antiquarks [21], and a scheme that interpolates the suppression for photon components of different size as a function of x γ [25]. In this last scheme, one expects S 2 ≈ 0.34 for the hadron-like component of the photon at small x γ , and S 2 ≈ 0.53 − 0.75 for the gluon and quark contributions at large x γ corresponding to small, but non-negligible factorization breaking due to the point-like component of the resolved photon [50,54]. 
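The three suppression scenarios just described can be summarized schematically. The following sketch is an illustration added here, not code from the paper; the x_γ interpolation of the third scheme is only indicated by a linear placeholder, since its precise form is given in ref. [25] and not reproduced in this text:

def suppressed_cross_section(sigma_direct, sigma_resolved, scheme, x_gamma=None):
    # Apply a rapidity-gap survival factor S^2 to the two photon components.
    if scheme == "global":
        # Global suppression of the full cross section by 0.5 (H1 low-pT value).
        return 0.5 * (sigma_direct + sigma_resolved)
    if scheme == "resolved_only":
        # Only the resolved-photon part is suppressed, by 0.34 (two-channel eikonal model).
        return sigma_direct + 0.34 * sigma_resolved
    if scheme == "interpolated":
        # S^2 rises from ~0.34 for the hadron-like component at small x_gamma
        # to ~0.53-0.75 for the point-like components at large x_gamma;
        # the linear form below is a placeholder, not the actual interpolation.
        s2 = 0.34 + (0.75 - 0.34) * x_gamma
        return sigma_direct + s2 * sigma_resolved
    raise ValueError("unknown suppression scheme")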
Therefore, we interpolate the effect of factorization breaking with γ , the interpolated scheme is instead similar to global suppression, while for the pointlike region it is again similar to the resolved-only scheme. Since the distributions fall by two orders of magnitude from x obs. γ = 0.85 to 0.3, the differential cross section must be represented on a logarithmic scale and measurements at the EIC will require a high level of precision to distinguish between the different schemes. This should indeed be possible with the planned luminosities up two orders of magnitude larger than at HERA [49]. The shape of thep T distribution is also known to be sensitive to different schemes of factorization breaking [24], and this is also true for the global and JHEP05(2020)074 resolved-only schemes at the EIC ( figure 8, top left). Interestingly, the interpolation scheme described above differs from the global scheme mostly in the larger normalization, which can be attributed to the fact that the cross section remains dominated by direct photons in the entirep T range. As expected, the distributions describing the momentum transfers from the proton to the pomeron (bottom left) and from the pomeron to the hard process (bottom right) have similar shapes for all three suppression schemes and differ again only in normalization. Diffraction on heavy nuclei In the collider mode, it is rather straightforward to measure coherent diffraction on nuclei by selecting events with a large rapidity gap and requiring that no neutrons are produced in the zero-angle calorimeter (ZDC). Practically all events satisfying these requirements would correspond to coherent diffraction. However, measurements of the t-dependence would require the use of Roman pots at unrealistically small distances from the beam [30]. Nuclear diffractive PDFs and nuclear shadowing Nuclear diffractive PDFs are defined similarly to those for nucleons as matrix elements of well-defined quark and gluon operators between nuclear states with the condition that the final-state nucleus does not break, carries longitudinal momentum fraction 1 − x IP , and that the four-momentum transfer squared is t. As in the case of usual nuclear PDFs, nuclear diffractive PDFs are subject to nuclear modifications. In particular, at small x nuclear diffractive PDFs are expected to be suppressed compared to the coherent sum of free nucleon diffractive PDFs due to nuclear shadowing. In the model of leading twist nuclear shadowing [30], nuclear diffractive PDFs f D i/A are obtained by summing a series corresponding to coherent diffractive scattering on one, two, . . . , A nucleons of the nuclear target, which gives in the small-x IP limit Here B diff = 6 GeV −2 is the slope of the t-dependence of the ep → e Xp differential cross section and η = 0.15 is the ratio of the real to imaginary parts of the corresponding scattering amplitude; is the nuclear density [55] and b is the transverse position (impact parameter) of the interacting nucleon in the nucleus; σ i soft = 30 mb is an effective cross section controlling the strength of the interaction with target nucleons, which can be estimated using models of the hadronic structure of the virtual photon. The used value of σ i soft corresponds to the scenario with the larger nuclear shadowing of ref. [30]. One can see from eq. (4.1) that an account of nuclear shadowing leads in principle to an explicit violation of Regge factorization for nuclear diffractive PDFs. A numerical analysis of eq. 
(4.1) shows [30] that the effect of nuclear shadowing in most of the kinematics only weakly depends on flavor i, the momentum fractions z IP and JHEP05(2020)074 x IP , and the resolution scale Q 2 . Therefore, to a good approximation, one has the following relation where R(x, A) ≈ 0.65 is a weak function of x and A and is calculated using eq. (4.1). NLO QCD predictions for the EIC Our predictions for the NLO QCD cross sections for coherent diffractive dijet photoproduction in eA → e +2 jets+X +A scattering with different nuclear beams (U-238, Au-197, Cu-63, and C-12) at the EIC in our default set-up with √ S = 92 GeV are shown in figure 9. The cross sections are shown as functions of the jet average transverse momentum (top left), the jet rapidity difference (bottom left), the observed longitudinal momentum fractions of partons in the photon (top right) and pomeron (bottom right). As naturally follows from eq. (4.2), the shapes of the nuclear cross sections repeat those for the proton shown in figure 2. The free proton diffractive PDFs as parameterized in H1 2006 Fit B [5] have been divided by a factor of 1.23 in order to take into account the fact that here we have no diffractive dissociation contributions, as the heavy nucleus is assumed to stay intact and no neutrons are assumed to be produced in the ZDC. Factorization breaking As discussed in section 3.7, collinear QCD factorization is violated in diffractive dijet photoproduction due to soft inelastic interactions with the hadronic target. While the mechanism of this factorization breaking is not yet established, it is natural to expect that the effect will be more pronounced for nuclear targets since the gap survival probability is significantly smaller for nuclei than for the proton. For instance, using the commonly used two-state eikonal model [12][13][14], one estimates [28] that the suppression factor in the relevant energy range is S 2 = 0.4 for the proton and S 2 = 0.04 for heavy nuclei. Figure 10 shows our predictions for the NLO QCD cross section for coherent diffractive dijet photoproduction with Au-197 beams at the EIC. For heavy nuclei, we unfortunately cannot enhance the resolved-photon contribution to increase the differences between competing factorization breaking schemes by assuming a higher beam energy of 275 GeV as for protons, but we remain limited to 100 GeV per nucleon. We can, however, perform our study for the larger range in x IP < 0.10 instead of 0.03, which, as we have seen in figure 6, also increases the accessible range in x γ . In figure 10, the solid lines correspond to the case where we apply a global suppression factor of S 2 = 0.5 to our calculated cross sections as in the proton case, while the dotted lines correspond to the factorization breaking scenario, where only the resolved-photon contribution is suppressed, now by a factor of S 2 = 0.04 (see the discussion above). A comparison of these two schemes shows that due to the dominance of the direct photon contribution in most of the EIC kinematics, they lead to similar shapes of the kinematic distributions that differ only in normalization, with the important exception of the x γ -distribution (top right), which below values of 0.5 is negligible for resolved-only suppression and has potentially measurable support only in the global suppression scheme. 
Conclusion To summarize, we have in this paper presented a first and extensive study of diffractive dijet photoproduction at the recently approved EIC. Using our established formalism of NLO QCD calculations, we have illuminated various aspects of this interesting scattering process. We started by determining the cross sections to be expected in the most important differential distributions as well as the size of the NLO corrections. We then discussed the sensitivity to pomeron PDFs as a function of momentum fraction and resolution scale as well as the contribution from the higher reggeon trajectory. One of our two main results is that the EIC has the potential to address conclusively the mechanism of factorization breaking, but that this will require a high proton beam energy and/or a large longitudinal momentum transfer from the proton/nucleus to the pomeron. Then the question will hopefully be answered whether and to what extent factorization breaking occurs globally in photoproduction or whether only the resolved photon contribution or some of its components (light/heavy quark-antiquark pairs, VMD contributions) are suppressed. Our second main result comprises predictions for diffractive dijet photoproduction on nuclei, which might - perhaps for the first time - give access to nuclear diffractive PDFs. Here, we made numerical predictions for four different nuclei, ranging from carbon to uranium, as well as again for different factorization breaking schemes. As an outlook, let us point out that at HERA also dijet production with leading neutrons has been studied [56][57][58]. These processes have been interpreted in terms of virtual charged-pion exchanges and gave first information on the structure of (virtual) pions at previously inaccessible values of x [59,60]. It would be very interesting to continue these studies at the EIC, also in view of possible factorization breaking in these processes [61], and perhaps even extend them to dissociation processes in collisions with heavy nuclei.
7,693
2020-05-01T00:00:00.000
[ "Physics" ]
Using Kinect v2 to Control a Laser Visual Cue System to Improve the Mobility during Freezing of Gait in Parkinson's Disease Different auditory and visual cues have been proven to be very effective in improving the mobility of people with Parkinson's (PwP). Nonetheless, many of the available methods require user intervention to activate the cues. Moreover, once activated, these systems would provide cues continuously regardless of the patient's needs. This research proposes a new indoor method for casting dynamic/automatic visual cues for PwP based on their head direction and location in a room. The proposed system controls the behavior of a set of pan/tilt servo motors and laser pointers, based on the real-time skeletal information acquired from a Kinect v2 sensor. This produces an automatically adjusting set of laser lines that can always be in front of the patient as a guideline for where the next footstep would be placed. A user interface was also created that enables users to control and adjust the settings based on their preferences. The aim of this research was to provide PwP with an unobtrusive/automatic indoor system for improving their mobility during a freezing of gait (FOG) incident. The results showed the possibility of employing such a system, which does not rely on the subject's input nor does it introduce any additional complexities to operate. Introduction Freezing of gait (FOG) is one of the most disabling symptoms in Parkinson's disease (PD) that affects its sufferers by impacting their gait performance and locomotion. FOG is an episodic phenomenon that introduces irregularities in the initiation or continuation of a patient's locomotion and usually occurs in later stages of PD, where patients' muscles cannot function normally and appear to be still when they are trying to walk [1][2][3][4]. This makes FOG one of the most intolerable symptoms, one that not only affects PD sufferers physically but also psychologically, as it makes them almost completely dependent on others for their basic and daily tasks. Consequently, the patient's quality of life decreases, and the healthcare and treatment expenditures increase, as does the cost of the injuries caused [1]. It has been estimated that about 50% of PwP experience FOG incidents [5]. Moreover, it has been proven that visual and auditory cues can have a positive impact on the subject's gait performance during a FOG incident [6][7][8]. Visual cues such as laser lines can act as a sensory guidance trick that provides an external trigger, which, in turn, can initiate movement [7]. There has been much research conducted towards implementing apparatus and systems that can provide visual and auditory cues for PwP. In work done by Zhao et al. [9], a wearable system based on modified shoes was developed in order to cast a laser-based visual cue in front of PwP. The system consisted of a 3D printed add-on that included a red laser line projector and pressure sensors that detect the stance phase of a gait cycle and turn the laser pointer on. The unit provided the option to adjust the distance between the laser light strip and the subject's foot for optimal effectiveness, depending on the user's preferences. The research provided a simple, yet effective approach towards providing visual cues for PwP with locomotion issues. Nonetheless, like any other approach, this too has some limitations, such as the constant need to carry the shoe add-on, the batteries needed for the device, charging the batteries, and remembering to switch them on.
In another attempt [10], researchers evaluated the effect of visual cues using two different methods, including a subject-mounted light device (SMLD) and taped step length markers. It was concluded that using laser projections based on a SMLD has promising effects on the PwP's locomotion and gait performance. The method required patients to wear a SMLD, which some patients might find inconvenient to have or even impractical in some situations. Moreover, SMLD systems have stability issues and steadiness difficulties due to the subjects' torso movements during a gait cycle. As expected, the visual cues must be constantly enabled during a gait cycle, regardless of whether they are needed or not. In [11], although the SMLD method was employed, researchers added a 10-second on-demand option to the "constantly on" visual cue casting. This system was more sophisticated, consisting of a backpack containing a remotely controllable laptop, which made the subjects' mobility even more troublesome. In other attempts [12,13], a different approach was implemented by using virtual cues projected on a pair of goggles that are only visible to the patient. In [14], the effect of real and virtual visual cueing was compared, and it was concluded that real transverse lines cast on the floor are more impactful than their virtual counterparts. Nonetheless, using virtual cueing spectacles (VCS) eliminates the shortcomings of other techniques such as limitations in mobility, steadiness, and symmetry. VCS also have the advantage of being usable in an external environment when the patient is out and about. Moreover, several research studies have been conducted using virtual reality (VR) to assess the possibility of VR integration for Parkinson's-related studies [15][16][17][18][19][20]. Nonetheless, as the VR technology blocks patients' view and makes them unable to see their surroundings, its usage is limited to either rehabilitation through exercise-based games, FOG-provoking scenarios, or the assessment of patients' locomotion, rather than real-time mobility improvement using cues. Although they are effective to some extent, these attempts tend to restrict the user either by forcing them to carry backpacks or wear vests containing electronics, or by making them rely on conventional approaches such as attaching laser pointers to a cane [21] or laser add-ons for shoes. The hypothesis of this study, on the other hand, is to propose a different technique: casting parallel laser lines as a dynamic and automatic visual cuing system for PwP based on Kinect v2 and a set of servo motors, suitable for indoor environments. As Kinect has been proven to be a reliable data feed source for controlling servo motors [22,23], the Kinect camera was chosen as the real-time depth data feed for this study. This paper also examines the possibility of using the Kinect v2 sensor for such purposes in terms of accuracy and response time. This research uses the subject's 3D Cartesian location and head direction as inputs for the servo motors to cast visual cues accordingly. This eliminates the need for user intervention or a trigger and, at the same time, the need to carry or wear any special equipment. Despite this approach being limited to environments equipped with the proposed apparatus, it does not require any attachments or reliance on PwP themselves, something that can be beneficial in many scenarios.
The system comprises a Microsoft Kinect v2, a set of pan/tilt servo motors, a microcontroller based on the Arduino Uno, and two line laser pointers. A two-line projection was chosen so that the second, transverse laser line could be used to indicate a set area in which the next step has to land. The system was tested under different conditions, including a scene partially occluded by furniture to simulate a living room. Methods During the initial testing phase, 11 healthy subjects were invited, consisting of both males and females ranging in age from 24 to 31 years, with an age mean of 27 and SD of 2.34, and a mean height of 174.45 cm (68.68 inch) with SD of 8.31 cm (3.27 inch), ranging from 163 to 187 cm (64.17 to 73.62 inch). They were asked to walk in predefined paths: 12 paths per subject, walking towards the camera and triggering a simulated FOG incident by imitating the symptom, with the Kinect camera positioned at a fixed location. The subjects' skeletal data were captured and analyzed by the Kinect camera in real time. The software was written in C# using the Kinect for Windows SDK version 2.0.1410.19000. The room that was used for conducting the experiments contained different pieces of living room furniture to mimic a practical use case of the device. This not only yields more realistic results but also tests the system in real-life scenarios where the subject is partially visible to the camera and not all the skeletal joints are being tracked. To test and compare the Kinect v2's accuracy in determining both vertical and horizontal angles according to the subject's foot distance to the Kinect camera and body orientation, eight Vicon T10 cameras (considered the gold standard) were also used to capture the subject's movements and compare them with the movements determined by the Kinect. The Vicon cameras and the Kinect v2 captured each session simultaneously, while the frame rate of the recorded data from the Vicon cameras was down-sampled to match the Kinect v2 at approximately 30 frames per second. At a later stage and following ethical approval, 15 PwP were recruited (with the collaboration of Parkinson's UK) to test the system and provide feedback. This research was published separately in [22]. A more in-depth analysis of, and information regarding, this focus group can also be found in [24]. Kinect RGB-D Sensor. Microsoft Kinect v2 is a time-of-flight (TOF) camera that functions by emitting infrared (IR) light onto objects; upon reflection of the light back to the IR receiver, it constructs a 3D map of the environment, where the Z-axis is calculated via the delay in receiving the IR light [25]. Kinect v2 introduced many features and improvements compared to its predecessor, such as 1080p and 424p resolution at approximately 30 frames per second for its RGB and depth/IR streams, respectively, as well as a wider field of view [26]. The ability to track 25 joints of six subjects simultaneously enables researchers to employ Kinect v2 as an unobtrusive human motion tracking device in different disciplines, including rehabilitation and biomedical engineering. Angle Determination. The Kinect v2 was used to determine the subjects' location in a 3D environment and localize the subject's feet joints to calculate the correct horizontal and vertical angles for the servo motors. To determine the subject's location, Kinect skeletal data were used for the joints' 3D coordinate acquisition. A surface floor can be determined by using the vector equation of planes.
This is necessary to automate the process of calculating the Kinect's height above the floor, which is one of the parameters in determining the vertical servo angle: Ax + By + Cz + D = 0, (1) where A, B, and C are the components of a normal vector that is perpendicular to any vector in the given plane and D is the height of the Kinect from the levelled floor. x, y, and z are the coordinates of the given plane that locates the floor of the viewable area and are provided by the Kinect SDK. Ax, By, Cz, and D are also provided by the Kinect SDK once a flat floor is detected by the camera. For vertical angle determination, a subject's 3D feet coordinates were determined, and depending on which foot was closer to the Kinect in the Z-axis, the system selects that foot for further calculations. Once the distance of the selected foot to the camera was calculated, the vertical angle for the servo motor is determined using the Pythagorean theorem, as depicted in Figure 1. The subject's skeletal joints' distance to the Kinect on the Z-axis is defined in a right-handed coordinate system, where the Kinect v2 is assumed to be at the origin with a positive Z-axis value increasing in the direction of the Kinect's point of view. In Figure 1, a is the Kinect camera's height above the floor, which is the same as variable D from equation (1), and c is the hypotenuse of the right triangle, which is the subject's selected foot distance to the Kinect camera in the Z-axis. θ is the calculated vertical angle for the servo motor. Note that we have considered the position offsets in the X and Y axes between the Kinect v2 camera and the laser pointers/servo motors in order to have the most accurate visual cue projection. Our experiments showed that the Kinect v2 determines a joint's Z-axis distance to the camera by considering its Y-axis value; i.e., the higher a joint's Y-axis value relative to the camera's optical center, the further its distance to the camera in the Z-axis. This indicates that, unlike the Kinect's depth space, the Kinect skeletal coordinate system does not calculate the Z-axis distance (Figure 1, variable c) in a plane perpendicular to the floor; as a result, the height of the points, in this case the joints, is also taken into consideration. In the case of a joint being obstructed by an object, for example a piece of furniture, the obstructed joint's 3D Cartesian location was compensated for and predicted using the "inferred" joint state enumeration, a built-in feature of the Kinect SDK. With the "inferred" joint state, the joint data were calculated and the joint's location was estimated based on the other tracked joints and its previously known location. Figure 2 shows the Kinect v2 accuracy in determining a subject's joint (left foot) distance to the camera in the Z-axis compared to a gold standard motion capture device (Vicon T10). It was concluded that the Kinect v2 skeletal data acquisition accuracy was very close (98.09%) to that of the industry standard counterpart. The random noise artifacts in the signal were not statistically significant and did not affect the vertical angle determination. The subject's body direction, which determines the required angle for the horizontal servo motor, can be obtained by calculating the rotational changes of two of the subject's joints, namely the left and right shoulders. The subject's left and right shoulder joints' coordinates were determined using skeletal data and then fed to an algorithm to determine the body orientation as follows: servo angle = 90 ± sin⁻¹(d / |shoulderA − shoulderB|). (2)
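A compact sketch of the two angle calculations described above is given below. It is an illustration rather than the authors' C# implementation; the exact zero position and sign convention of each servo depend on how the motors are mounted, and the vertical angle is simply read off the right triangle of Figure 1:

import math

def vertical_servo_angle(camera_height_m, foot_z_m):
    # a = Kinect height above the floor (D in equation (1)); c = selected foot's Z distance.
    # Returns the angle of the Figure 1 right triangle at the camera, in degrees.
    c = max(foot_z_m, camera_height_m)  # the hypotenuse cannot be shorter than the height
    return math.degrees(math.acos(camera_height_m / c))

def horizontal_servo_angle(shoulder_left, shoulder_right):
    # Equation (2): 90 deg +/- arcsin(d / shoulder distance), where d is the signed
    # Z-axis difference between the two shoulder joints (3D points in meters).
    d = shoulder_left[2] - shoulder_right[2]
    width = math.dist(shoulder_left, shoulder_right)
    ratio = max(-1.0, min(1.0, d / width))
    return 90.0 + math.degrees(math.asin(ratio))  # the sign of d selects left/right rotation

# Example: camera 1.2 m above the floor, selected foot 2.5 m away along Z
print(vertical_servo_angle(1.2, 2.5))  # ~61 degrees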
In Figure 3, d is the difference in Z-axis distance to the camera between the subject's left and right shoulders. Once d, used in equation (2), was calculated, the angle for the horizontal servo motor could be determined by calculating the inverse sine term. Depending on whether the subject is rotating to the left or right, the result is subtracted from or added to 90, respectively, as the horizontal servo motor should rotate in reverse in order to cast the laser lines in front of the subject accordingly. FOG Detection. In previous studies, the authors implemented the process of FOG detection in [27] using gait cycle and walking pattern detection techniques [26,28]. Once the developed system detects a FOG incident, it turns the laser pointers on and starts determining the appropriate angles for both vertical and horizontal servo motors. After passing a user-defined waiting threshold or upon disappearance of the FOG incident characteristics, the system returns to its monitoring phase by turning off the laser projection and servo motor movements. Figure 4 shows the GUI of the developed system application. The left image shows a Parkinson's disease patient imitator during his FOG incident. The right window shows that the subject is being monitored and his gait information is being displayed to healthcare providers and doctors. As can be seen in the "FOG Status" section displayed in the bottom rectangle, the system has detected a FOG incident and activated the laser projection system to be used as a visual cue stimulus. The circled area shows the projection of laser lines in front of the subject (according to the distance from their feet to the camera) and their body direction. The developed system also allows further customization, including adjustment of the visual cue distance in front of the patient. Serial Connection. A serial connection was needed to communicate with the servo motors controlled by the Arduino Uno microcontroller. The signal transmitted by the developed application needed to be distinguishable at the receiving point (the Arduino microcontroller), so that each servo motor could act according to its intended angle and the signal provided. We developed a multipacket serial data transmission technique similar to [29]. The data were labeled on the transmitter side so that the microcontroller could distinguish and categorize each received packet and send the appropriate signal to each servo motor. The system loops through this cycle of horizontal angle determination every 150 ms. This time delay was chosen because the horizontal servo motor does not need to be updated in real time, as a subject is unlikely to change his/her direction in very short intervals. This ensures smoother, less jittery movement of the horizontal laser projection. The vertical servo motor movement was less prone to jitter, as the subject's feet are always visible to the camera as long as they are not obstructed by an object. Design of the Prototype System. A two-servo system was developed using an Arduino Uno microcontroller and two class-3B, 10 mW, 532 nm wavelength green line laser projectors, as shown in Figure 5(a); green laser lines have been proven to be the most visible among the laser colors used as visual cues [30]. An LCD display was also added to the design, which shows the user all information regarding the vertical and horizontal angles. Figure 5(a) shows the laser line projection system attached to the tilt/pan servo motors.
Figure 5(b) shows the top view of the prototype system, including the wiring and voltage regulators. Figure 5(c) shows the developed prototype system used in the experiment from different angles, including the Kinect v2 sensor, pan/tilt servo motors, laser pointers, and the microcontroller. Results Figure 6 demonstrates the calculated vertical angle based on the subjects' foot joint distance to the Kinect camera in the Z-axis. The right foot has been omitted from the graph for simplicity. As Figure 6 demonstrates, the system provided highly accurate responses relating the subject's foot distance to the camera in the Z-axis to the vertical servo motor angle. Subjects were also asked to rotate their body in front of the Kinect camera to test the horizontal angle determination algorithm and, as a result, the horizontal servo motor functionality. Figure 7 shows the result of the calculated horizontal angle using equation (2) for the left and right directions. Figure 7 shows how the system reacts to the subject's body orientation. Each subject was asked to face the camera in a stand-still position while rotating their torso to the left and to the right in turns. As mentioned before, the horizontal angle determination proved to be more susceptible to noise compared to the vertical angle calculation. This is due to the fact that, as the angle increases to more than 65 degrees, the shoulder farthest away is obstructed by the nearer shoulder, and as a result the Kinect has to compensate by approximating the position of that joint. Nonetheless, this did not have any impact on the performance of the system. Overall, the entire setup, including the Kinect v2 sensor, tilt/pan servo motors, laser projectors, microcontroller, and LCD (but excluding the controlling PC), costs about £137.00, making it much more affordable than other, less capable alternatives available on the market. Discussion A series of pan/tilt servo motors have been used alongside laser line projectors to create a visual cuing system, which can be used to improve the mobility of PwP. The use of the system eliminates the need to carry devices, helping patients to improve their mobility by providing visual cues. The implemented system has the ability to detect FOG using only the Kinect camera, i.e., fully unobtrusively, and to provide dynamic and automatic visual cue projection based on the subject's location without the patient's intervention, as opposed to the other methods mentioned. It was observed that this system can provide an accurate estimation of the subject's location and direction in a room and cast visual cues in front of the subject accordingly. The Kinect's effective coverage distance was observed to be between 1.5 and 4 meters (59 and 157.48 inch) from the camera, which is within the range of the area of most living rooms, making it an ideal device for indoor rehabilitation and monitoring purposes. To evaluate the Kinect v2's accuracy in calculating the vertical and horizontal angles, a series of eight Vicon T10 cameras was also used as a gold standard. Overall, the system proved to be a viable solution for an automatic and unobtrusive visual cue apparatus. Nonetheless, there are some limitations to this approach, including its indoor-only nature and the fact that it requires the whole setup, including the Kinect, servos, and laser projectors, to be installed in the most commonly used communal areas of a house, such as the living room and the kitchen. Additionally, during the experimentation, the Kinect's simultaneous subject detection was limited to only one person.
Nevertheless, the Kinect v2 is capable of detecting six simultaneous subjects in a scene. However, in order to work properly, the laser projection system should only aim at one person at a time. The developed system can either lock onto the first person who enters the coverage area or distinguish the real patient based on locomotion patterns and ignore other people. Despite that, the affordability and ease of installation of the system would still make it a desirable solution should more than one setup need to be placed in a house. Moreover, the use of a single Kinect would limit the system's visibility and visual cue projection as well. Conclusion The results of this research showed the possibility of implementing an automatic and unobtrusive FOG monitoring and mobility improvement system that is reliable and accurate at the same time. The system's main advantages, such as real-time patient monitoring, improved locomotion and patient mobility, and unobtrusive and dynamic visual cue projection, make it, overall, a desirable solution that can be further enhanced in future implementations. As a next step, one could improve the system's coverage by installing a series of these systems in PwP's houses to cover most of the communal areas, or the areas where a patient experiences FOG the most (i.e., narrow corridors). One could also investigate the possibility of attaching such a system to a circular rail on the ceiling so that it can rotate and move according to the patient's location; this would remove the need for an extra setup in each room, as the system could then cover some additional areas. Moreover, by coupling the system with other available solutions such as laser-mounted canes or shoes, patients could use the implemented system when they are at home, while using other methods outdoors. This requires integration at different levels, such as a smartphone application and visual cues, in order for these systems to work as intended. Finally, the system's form factor could be made somewhat smaller by removing the Kinect's original casing and embedding all the equipment in a customized 3D printed enclosure, which would make it more suitable for commercial production. Data Availability The gait analysis data used to support the findings of this study are restricted by the Brunel University London Ethics Committee in order to protect patient privacy. Data are available from <EMAIL_ADDRESS> for researchers who meet the criteria for access to confidential data.
5,377.4
2019-02-20T00:00:00.000
[ "Computer Science" ]
Micro Thermal Diode with Glass Thermal Insulation Structure Embedded in Vapor Chamber This paper reports a novel micro thermal diode with an embedded micro glass thermal insulation structure. The diodicity is provided by a wettability-driven one-way fluid circulation mechanism without any external force. The water circulation only occurs in "forward" mode, resulting in low thermal resistance. The glass thermal insulation structure enhances thermal insulation in "reverse" mode. The thermal resistances in forward and reverse mode, Rf and Rr, were measured as 2.06±0.34 K/W and 3.04±0.38 K/W, respectively. Therefore, the performance index, Rr/Rf, was 1.47±0.14. Introduction A thermal diode is one of the important components of micro thermal devices such as micro refrigerators and thermal energy harvesters [1]. A thermosyphon, in which one-way working fluid circulation is driven by the gravity force, can be used as a thermal diode. However, its heat transfer property strongly depends on the gravity direction. In addition, the gravity force becomes quite small compared with surface tension in the microscale region, which limits the miniaturization of the thermosyphon. Therefore, a surface-tension-driven fluid circulation mechanism is more suitable for a micro thermal diode. In this paper, we propose a novel micro thermal diode in which the one-way working fluid circulation is driven by the wettability difference between the condensation and evaporation parts, and thermal insulation in the reverse direction is realized by an embedded thick glass microstructure. Structure and principle of the micro thermal diode Figure 1 shows the structure and working mechanism of the micro thermal diode. The device consists of two plates: an outlet-side plate with a superhydrophobic surface and an inlet-side plate with microchannels with a hydrophilic surface. The microchannels are filled with working fluid. A glass microstructure, which is embedded in the inlet-side channel plate, forms a thermal insulation chamber wall. In forward mode, the inlet-side plate with the microchannels is heated, while the outlet-side plate with the superhydrophobic surface is cooled (Fig. 1 (b)). The working fluid in the microchannels evaporates and condenses on the superhydrophobic surface, forming small droplets on the surface. The droplets grow by coalescing and then return to the microchannel when they touch the hydrophilic microchannel wall. Thus, the evaporation-condensation cycle continues like a heat pipe. The evaporation rate from the microchannels is higher than that of pool boiling [2]. In addition, the heat transfer coefficient of drop-wise condensation is much higher than that of film-wise condensation [3]. From these mechanisms, the thermal resistance in forward mode is quite low. On the contrary, in reverse mode the outlet-side plate with the superhydrophobic surface is heated (Fig. 1 (c)). The working fluid on the superhydrophobic surface evaporates and condenses on the microchannel wall. However, the condensed liquid remains in the microchannels because of their hydrophilicity and does not return to the heating area [4]. Therefore, the heat mainly flows through the chamber wall, which is made of thick glass for high thermal insulation. Device fabrication Figure 2 shows the fabrication process. First, micromolds for the glass thermal insulation structure were formed on a Si substrate (Fig. 2 (a.1)).
A borosilicate glass plate was anodically bonded to it under vacuum conditions at an applied voltage of 600 V and a substrate temperature of 400°C (Fig. 2 (a.2)). The glass was reflowed at 900°C under atmospheric pressure to fill the vacuum-sealed molds (Fig. 2 (a.3)). The unnecessary glass overlapping the surface was removed by grinding and polishing (Fig. 2 (a.4)), and the microchannels were formed by deep reactive ion etching (Fig. 2 (a.5)). Finally, the surface was covered with SiO2 to increase the wettability (Fig. 2 (a.6)). Figure 3 shows the fabricated microchannels and the embedded glass thermal insulation structure on the inlet-side plate. For the outlet-side plate, another Si substrate was etched to form mesa structures, and a thin film of Au/Cr was deposited (Fig. 2 (b.2)). Then, a Ni/PTFE film was electroplated using a solution consisting of 1.2 M of Ni(NH2SO3), 0.5 M of H3BO3, and 40 g/L of PTFE particles with an average diameter of 220 nm [5]. A pulsed electroplating method was used, where the "on" time was fixed at 10 ms and the "off" time was varied from 2 ms to 10 ms. The temperature of the solution was kept at 55±1°C, and the current density in the "on" period was set at 0.2 mA/dm2. The film was annealed at 300°C in a vacuum furnace to form a network of PTFE, and the Ni was etched by nitric acid to make a porous PTFE surface. Figure 4 shows the superhydrophobic surface made of porous PTFE and a water droplet on it. Figure 5 shows the contact angle of the water droplet on the superhydrophobic surface. The contact angle increased with the etching time due to the lotus leaf effect. When the etching time was too long, however, the contact angle decreased again, because the PTFE particles were partially removed. The maximum contact angle obtained was about 160°. Experimental result 4 μL of water was sealed between the two plates with a silicone rubber gasket. Figure 6 shows the experimental setup. The test device was placed between two heat conduction blocks, one of which was heated by an electric heater while the other was connected to a heat sink for cooling. The heat flow through the device, Q, was measured by a heat flux sensor (FMR-200-K, Concept Engineering GmbH, Germany), and the temperature difference across the device, ΔT, was measured by thermocouples attached to both sides of the device. The thermal resistance, R_th, was then calculated as R_th = ΔT/Q. The performance index of the thermal diode was defined as R_r/R_f, where R_f and R_r are the thermal resistances in forward and reverse mode, respectively. Figure 7 shows the measured thermal resistance in forward and reverse mode. When the heat flow was small, the S/N ratio of the data was low due to the small temperature difference across the device. Therefore, we use the experimental results where the heat flow was larger than 1 W for the following discussion. The conductive thermal resistance of the glass wall was measured using the device without water. The measured thermal resistance was about 4.09±0.33 K/W, which is roughly consistent with the theoretically estimated value of 3.0 K/W. The reverse mode thermal resistance was measured as 3.04±0.38 K/W, and thus the heat mainly flows through the glass wall in reverse mode. On the other hand, the forward mode thermal resistance was about 2.06±0.34 K/W, which was much smaller than that of reverse mode. This result suggests that water droplets returned to the microchannels against gravity as intended. The performance index, R_r/R_f, was 1.47±0.14.
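As a quick numerical check of the relations quoted above (added here for illustration, not taken from the paper), the measured resistances reproduce the stated performance index:

def thermal_resistance(delta_T_K, heat_flow_W):
    # R_th = dT / Q
    return delta_T_K / heat_flow_W

R_f, R_r = 2.06, 3.04            # measured forward / reverse thermal resistances in K/W
print(R_r / R_f)                 # ~1.48, consistent with the quoted 1.47 +/- 0.14

# Example: a forward-mode heat flow of 1.5 W corresponds to a temperature
# difference of roughly R_f * 1.5 ~ 3.1 K across the device.
print(thermal_resistance(3.1, 1.5))  # ~2.07 K/W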
Conclusion We designed and fabricated a micro thermal diode with a glass thermal insulation structure embedded in a vapor chamber. The fabrication processes for the glass microstructure and the porous PTFE superhydrophobic surface were developed. The thermal resistances in both forward and reverse mode were measured. The result confirmed the flow-direction dependency of the thermal resistance and suggested that the water circulation mechanism worked against gravity as intended. The performance index, R_r/R_f, was as high as about 1.47±0.14.
The digital labor of ethical food consumption: a new research agenda for studying everyday food digitalization This paper explores how consumers’ ethical food consumption practices, mediated by mobile phone applications (apps), are transformed into digital data. Based on a review of studies on the digitalization of ethical consumption practices and food apps, we find that previous research, while valuable, fails to acknowledge and critically examine the digital labor required to perform digitalized ethical food consumption. In this paper, we call for research on how digital labor underlies the digitalization of ethical food consumption and develop a conceptual framework that supports this research agenda. Our proposed conceptual framework builds on three interconnected analytical concepts—datafication, affordances and digital labor—that enable the study of digital labor as an infrastructural element of digitalized food consumption. We illustrate our conceptual framework through our previous research concerning Buycott, a US-based mobile app whose stated aim is to facilitate consumers’ ethical purchasing decisions. Using the walkthrough method, we consider how the Buycott app engages user-generated data and what implications this holds for consumers. The app’s infrastructure, we suggest, connects ethical consumption and digital labor. A richer understanding of the digital food economy, we propose, enables social scientists not only to elucidate how consumers engage in digital labor, but also to contribute to the development of new data governance structures in the digital food economy. We therefore call for social scientists interested in food, consumption and the digital economy to contribute to a new research agenda for studying everyday food digitalization by empirically examining how ethical consumption apps implicate ethical consumers’ work. Introduction Imagine you are shopping in a supermarket. You pull out your smartphone and begin testing a new ethical consumption app that you have downloaded recently. The app, Buycott, enables you to scan the barcodes of retail products and check if the scanned items are in conflict with your ethical consumption goals, one of which is avoiding companies that do not allow employees to form a labor union. As it happens, one of the products you intend to buy is not in Buycott's database. This means that currently there is no information available on this product. Luckily, you happen to know which corporation owns the company that produces this product, so you enter the information and some requested product information yourself. This information becomes part of Buycott's database, ready to be mobilized when another consumer scans a similar barcode. What is revealed in this encounter between ethical consumer, smartphone, mobile app, food, barcode, supermarket and database? In this paper, we argue it is important to understand the human-data assemblages (Lupton 2018) that construct this event to grasp fully what happens in this seemingly mundane practice of digitally-enabled ethical food consumption. Building on and extending previous work on 1 3 this topic (e.g., Fuentes and Sörum 2019;Hawkins and Horst 2020) we are conceptually interested in consumers' work, including practices such as uploading missing company and product information. 
We argue that previous research on the digitalization of ethical consumption, while valuable, fails to acknowledge and critically examine consumers' digital labor, which is required to perform digitalized ethical (food) consumption. By digital labor we refer to "formal (compensated) and informal (uncompensated) activities that take place in and through digital and mobile technologies" (Gregory 2017). In this paper, we focus on informal digital labor as an infrastructural, albeit mostly obscure, aspect of the digitalization of ethical food consumption. By attending to informal digital labor, our paper offers a new critical perspective on the relationship between crowdsourced data, ethical consumption, and corporate growth, and ultimately calls for further research on digital labor and everyday food digitalization. We argue that it is particularly important to study the informal labor that enables and facilitates everyday food digitalization in the context of a rapidly expanding digital food economy. Citizens around the globe are increasingly living within a 'digital economy' or 'platform economy', in which large technology companies (e.g., Amazon, Facebook) are creating internet-based platforms that radically change how people 'socialize, create value in the economy, and compete for the resulting profits' (Kenney and Zysman 2016, second paragraph). In fact, 'the platform has emerged as a new business model, capable of extracting and controlling immense amounts of data' (Srnicek 2017, p. 6), enabling economic growth at a time of declining manufacturing profitability (Srnicek 2017). The food economy is no exception to this development (cf. Carolan 2020; Prause et al. 2020). Examples include the increase in precision or 'smart' farming, which utilizes sensory devices to collect agricultural big data, and the emergence of mobile apps that track users' caloric intake or promise to facilitate ethical food consumption (e.g., Bronson 2018;Didžiokaitė et al. 2017). These diverse examples illustrate how food and its production, distribution and consumption are increasingly translated into digital data. The collection of these data via digital platforms holds considerable value for producers (e.g., for business optimization) as well as consumers (e.g., for self-optimization). Our paper's point of departure is that attending to everyday digital labor practices is crucial to understanding the inextricable interrelations between production and consumption in the digital food economy. Consumer-facing digital platforms are designed to enable or constrain situated and entangled practices of food, eating and datafication (Schneider et al. 2018). However, to date, very few studies have considered these practices in relation to informal digital labor. In this paper, we turn our attention to one specific form of food consumption-ethical food consumption-and explore how everyday practices mediated and facilitated by mobile phone applications (apps) are increasingly transformed into digital data. With this analytic focus, we conceptualize digital labor as an infrastructural element of digitalized food consumption. The aim of our paper is to develop a conceptual framework that supports our proposed research agenda to study how digital labor underlies the digitalization of ethical food consumption. Our proposed conceptual framework builds on three interconnected analytical concepts: datafication, affordances and digital labor, which we define and discuss in the third section of the paper. 
But first, to put these concepts and our aim into context, we review key literature on the digitalization of ethical consumption. Ethical consumption and the prominence of apps Consumers increasingly employ digital network infrastructures to facilitate ethical consumption practices. Ethical consumption refers to "any practice of consumption in which explicitly registering commitment to distant or absent others is an important dimension of the meaning of activity of the actors involved" (Barnett et al. 2005, p. 29). Over the past decade, digital media and mobile applications (apps) have gained prominence amongst consumers and have become key platforms for ethical food consumption. In recent work, scholars have analyzed digital food platforms as implicating care for both the consuming self and the producing/distributing other (e.g., Eli et al. 2018;Giraud 2018;Witterhold 2018). Within this body of work, ethical food consumption enabled by digital media is described as a form of 'digital food activism' (Schneider et al. 2018). Digital food activism aims to remap networks of food politics, production, distribution and consumption, transforming relationships between consumers and industrial and policy actors. However, emerging studies of this phenomenon highlight the complexities of 'apptivism' (Lewis 2018), drawing attention to the possibilities and limitations that app-based ethical consumption presents in a digital economy (e.g., Eli et al. 2016;Humphery and Jordan 2018). To understand how ethical food consumption apps operate within the digital economy, researchers have started investigating 'appified culture' (Morris and Murray 2018), referring to "apps as sociocultural and political artefacts that are created and experienced in complex relationships and networks" (Lupton 2020, p. 2). However, to date, studies on ethical consumption apps have been limited, focusing on three key aspects: (1) the concepts of ethics scripted into apps (e.g., Hansson 2017), (2) the ways in which consumers deploy these apps (e.g., Hawkins and Horst 2020), and (3) how apps as 'consumerist mediator' reconfigure the relationship between consumer and market (e.g., Soutjis 2020). We review each strand in the following sections. In her study of ethical smartphone apps, Hansson (2017) conducted an "object ethnography" of three apps-the Fairtrade app, the GreenGuide app and the Shopgun app. Drawing on concepts of socio-technical "scripting", she describes how ethics is "built in" the apps, arguing these apps "work as ethical choice prescribers" (Hansson 2017, p. 104). While each app implicates a different script of ethical consumption, what they have in common is a socio-technical materialization of ethics as realized through consumer action (Hansson 2017, p. 117). Other studies arrive at similar conclusions and further analyze the kind of ethics built into the app and the ideal users configured through these ethics. For instance, a Swedish study of three ethical consumption apps (also focusing on the Green guide, the Fair trade app and Shopgun) shows that when consumers follow these apps' scripts as part of their everyday consumption practices, the apps "put pressure on consumers to be ethical" (Fuentes and Sörum 2019, p. 149). Yet, these apps also help to resolve this pressure, in providing information to consumers eager to manage the complexity of consuming ethically, thereby 'agencing ethical consumers' (Fuentes and Sörum 2019). 
Smartphone apps are thus understood as catalysts for consumers to become "a new type of economic actor with the agential capabilities required to operate in the ethicalized landscape of everyday consumption" (Fuentes and Sörum 2019, p. 149). However, an analysis of the ethical consumption app Buycott, which facilitates consumer-side boycotts and buycotts of retail products, problematizes the app's promotion of an individualized, commodity-centric activism that reinforces tenets of the neoliberal market (Eli et al. 2016). Humphery and Jordan (2018) argue that digital activism replicates the problematic fragmentation of contemporary activism; in other words, rather than providing an alternative model, digital activism utilizes platforms in ways that reinforce the erosion of collective action. Research on digital activism raises important questions as to how consumers take up or reject the scripts of ethical consumption apps. As Hansson (2017) notes "[…], whether ethical smartphone apps become important market devices in shaping and promoting ethical consumption or not depends on if and how consumers use them or follow the scripts" (Hansson 2017, p. 118). Recent studies have started to explore how consumers employ ethical consumption apps in their everyday lives (Hawkins and Horst 2020;Sörum 2020). Crucial to this, as Hawkins and Horst (2020) suggest, is app design, which both shapes and limits users' actions and concepts of activism. Importantly, Hawkins and Horst (2020) draw attention to the laborious elements of app-based engagement with ethical consumption, and to the nuanced ways in which consumers deploy these apps. Such nuance, however, can run against the scripts upon which the apps' potential for ethical action is premised. This is illustrated in Fuentes and Sörum's (2019) analysis of the hybrid app-user agency implicated in ethical consumption apps, where following an app's scripts is essential to realizing one's agentic potential as an activist consumer. In this context, Sörum (2020) argues it is important to understand how consumers engage with ethical consumption apps that aim to assist with product choices and ultimately responsible consumption. Based on qualitative fieldwork in Sweden, he finds that "several respondents resisted ECAs [ethical consumption applications] because they did not provide a distinctive value, affirming the framings by spokespersons or contributing to users' identity projects" (Sörum 2020, p. 110). Moreover, many respondents in the user interviews Sörum conducted found the apps confusing and "the technologically advanced situation seemed to add perplexity to the decision-making process due to how the consumer interpreted the outcome of her product scanning" (2020, p. 107). In conclusion, the study finds that force of habit and conflicts with prevailing shopping habits and consumer norms pose key barriers for the acceptance of ethical consumption apps (Sörum 2020, p. 110). The third, emerging strand of literature on ethical consumption apps attends to apps' role as 'consumerist mediator(s)' that reconfigure the relationship between consumer and market (Soutjis 2020). Analyzing the Yuka app, popular in France, that provides users with a health rating of food products, Soutjis found that the app enables users to intervene in markets, but that the potential for intervening "is related to the openness and collection of product data in the backstage of the market" (2019, p. 116). 
This attention to infrastructure is a novel contribution to the literature on ethical consumption apps. It shifts scholarly attention to product data, the laborious processes of data collection and the status of data. A similar shift toward infrastructure is found in a recent interdisciplinary paper, where scholars from computer science, information studies, and science and technology studies (STS) reflect on developing a healthy eating app based on Swiss retailers' loyalty program data . Both studies foreground the datafication of everyday shopping and consumption practices and elucidate how consumers and apps become part of so-called data assemblages (Kitchin and Lauriault 2014) or human-data assemblages (Lupton 2018; see next section). This raises the question of how apps mediate between consumers and markets, in light of the reconfiguration of data assemblages and the digital economy . Taken together, while the literature has begun exploring how ethics are scripted into apps, how consumers deploy ethical consumption apps, and how apps mediate the relationship between consumer and market, it has yet to engage with the work that consumers do -their digital labor -when employing ethical consumption apps. In this paper, we argue for attention to digital labor as a key part of consumers' everyday engagements with data assemblages, the digital economy, and the digitalization of food and eating. In the next section we propose a conceptual framework that supports this research agenda with the aim of studying how digital labor underlies the digitalization of ethical food consumption. Conceptual framework: understanding the digital labor of ethical food consumption Our conceptual framework builds on three interconnected analytical concepts: datafication, affordances and digital labor. This framework allows us to explore an understudied aspect of everyday digitalization, namely, how consumers actively contribute to the successful functioning of ethical consumption apps, while wielding influence on companies' reputation and sales in the process. In developing this framework, we are inspired by the concepts of 'prosumption' (Ritzer 2014) and 'digital prosumption' (Ritzer and Jurgenson 2010). Prosumption is a neologism combining production and consumption. Prosumption research integrates studies of production and consumption processes and practices. Ritzer and Jurgenson (2010) argue that prosumption is increasingly becoming central due to a sharp increase of user-generated online content (see also research on participatory web cultures, e.g., Beer and Burrows 2010). They suggest that "[i]n prosumer capitalism, control and exploitation take on a different character than in the other forms of capitalism: there is a trend toward unpaid rather than paid labor" (Ritzer and Jurgenson 2010: abstract). This development has been captured in research attending to the 'working consumer' (Kleemann et al. 2008;Rieder and Voß 2010;Hornung et al. 2011). Studies of working consumers capture how companies try to integrate consumers' productive labor power into production processes through consumer self-service. With Web 2.0 applications, more comprehensive modes of user integration have become widespread and, thus, an extended model of 'working consumers' has been proposed (Hornung et al. 2011). However, this is not a one-directional process and it is important to emphasize the interdependency and volatility of the relationship between digital prosumers and enterprises. 
As Rieder and Voß observe, "Web 2.0 is not just a tool for enterprises to put customers to work. It is also a powerful instrument in the hands of customers, which may significantly influence the image and turnover of enterprises" (Rieder and Voß 2010, p. 8). The first analytical concept in our framework is datafication. By datafication we mean a "process by which subjects, objects, and practices are transformed into digital data. […] a logic that sees things in the world as sources of data to be 'mined' for correlations or sold, and from which insights can be gained about human behavior and social issues" (Southerton 2020, p. 1). Datafication is central to the digital economy as it enables the aggregation and analysis of big data sets for patterns (e.g., in behavior) that provide new business insights and, as a result, has wide-ranging effects on individual and social lives, beyond economic value creation. Following the data reveals and foregrounds so-called 'human-data assemblages' (Lupton 2018), that is, networks of humans, devices, software, data and more, which "highlight the distributed and dynamic nature of subjectivity and embodiment […]" (Lupton 2018, p. 5). Lupton's more-thanhuman approach resonates with critical data studies scholars, who emphasize that technical systems relying on data are always socio-technical systems that "are as much a result of human values, desires and social relations as they are scientific principles and technologies" (Kitchin 2021, p. 5). Sadowski (2019), who studies the political economy of smart technologies, has recently proposed that data are a form of (economic) capital rather than a commodity, as previous studies on the social, political and economic implications of data have assumed. He argues that an understanding of data as capital enables researchers to "better analyze the meaning, practices, and implications of datafication as a political economic regime" (Sadowski 2019, p. 1). Our second analytical concept is affordances. Studying what technologies and artefacts afford -that is, what actions they enable and allow -is a common approach in STS and related fields. However, sociologist Jenny Davis (2020) has proposed a shift from what technologies afford to how they afford. She suggests that "asking how instead of what objects afford shows nuanced relationships between technical features and their effects on human subjects while accounting for creative and subversive human acts" (Davis 2020, p. 10). Davis has developed the so-called 'mechanism and conditions framework' to enable a focus on how objects afford (Davis 2020). By attending to the mechanisms of affordance researchers can examine how "technologies request, demand, encourage, discourage, refuse and allow" certain actions and social dynamics to take shape (Davis 2020, p. 11). Analyzing the conditions of affordances allow us to understand the relational nature of human-technology encounters: "The conditions of affordance vary by perception, dexterity, and cultural and institutional legitimacy" (Davis, 2020, p. 11). Our third analytical concept is digital labor. Digital labor encompasses "formal (compensated) and informal (uncompensated) activities that take place in and through digital and mobile technologies" (Gregory 2017). Examples of compensated digital labor include click work done in people's homes and call-center work in large offices. 
Several researchers have pointed out that cheap computers and connectivity have drastically lowered the costs of some means of production, creating an enormous potential labor pool. Studies of emergent forms of digital labor emphasize that digital labor has potential to challenge distinction between public and private, amplify online outsourcing through global platforms, challenge distinctions between production and consumption and lead to new categorizations (e.g., worker vs. self-employed, employee vs. independent worker) (e.g., Scholz 2013; Gregory 2017; Graham and Anwar 2018). However, digital labor, in both its compensated and uncompensated forms, also amplifies problematic working conditions and the unfair compensation of workers in the digital economy (e.g., Graham et al. 2017;Rosenblatt 2018). As Sadowski (2019) mentions, the issue of fair compensation is difficult to resolve, particularly where users produce data without being formally employed. After all, what would be a fair price for one's personal information? Nonetheless, he suggests two ways to judge: "(1) what kind of compensation, if any, is offered for data and (2) what is the difference between the compensation for data producers and the value obtained by data capitalists?" (Sadowski 2019, p. 8). In our conceptual framework, we focus on unpaid, userbased digital labor. This includes activities that users themselves may not consider 'work', such as uploading a photo to social media, curating a public playlist on a streaming service, or posting about a recent dining experience on a review website. Whereas some researchers describe these practices as part of a participatory (media) culture (Jenkins 2006) with the potential for user input and collaboration, others warn about the exploitation of users as immaterial laborers who produce the "informational and cultural content of the commodity" (Lazzarato 1996, p. 133), thus essentially providing 'free labor' (Terranova 2000). In an overview of this debate, Postigo (2016) mentions a tendency to overcome such a dichotomous view on user-generated content (UGC) and points to approaches in new media research that analytically attend to "co-production, notions of the amateurs as entrepreneurs, attempts to theorize the political economy of Web 2.0 platforms, and work on understanding situated moral economies of meaning and participation [as] endeavors for reconciling critical perspectives with those that see UGC as empowering […]" (Postigo 2016, p. 334). Such analyses recognize the co-existence of 'work and play' while emphasizing the constitutive (but not deterministic) power of platforms. Key to digital labor is the digital platform itself. In his study of gaming culture on YouTube, Postigo suggests that platforms are "architectures of digital labor" that seamlessly and invisibly straddle labor and leisure. The concept is used to show how technological features designed into YouTube create a set of probable uses/meanings for YouTube, most of which are undertaken as social practice. These same features, however, serve YouTube's business interests and so have created a set of affordances that allow YouTube to extract value from UGC and constitute its digital labor architecture. (Postigo 2016, p. 
333) Postigo therefore argues that all forms of cultural practice traversing through these architectures (shaped by algorithm and affordances) are similarly captured and converted to inventory and enter the organizational logics of platform owners, be they YouTube, Facebook, Tumblr, or Twitter. We suggest that prosumption and digital activism could also be considered digital labor, as digital activism platforms are grounded in models of user participation through unpaid data generation. Yet, as Lindtner (2020) argues, digital labor on such platforms remains hidden, because "when users participate in digital platforms […], they are celebrated as entrepreneurial agents of content creation, remix, and even social movements, masking their transformation into cocreators of economic value behind a story of empowerment" (Lindtner 2020, p. 14). In this paper, we explore how capturing consumers' ethical consumption practices, converting them into datasets, and monetizing these datasets leads to this co-creation of economic value. In the next section, we draw on our previous research and discuss an ethical consumption app, Buycott, to illustrate and highlight how datafication, affordances and digital labor turn users' data into value. Exploring digital labor on an ethical consumption app Using the conceptual framework we describe above, we are inspired by 'the walkthrough method' (Light et al. 2018) to explore and illustrate how digital labor comes into being in the everyday workings of Buycott, an ethical consumption app, based on our published research. The method "enables researchers to identify the app's context, highlighting the vision, operating model and governance that form a set of expectations for ideal use. By walking through the app's registration, everyday use and deletion, this technique allows for recognition of embedded cultural values in an app's features and functions" (Light et al. 2018, p. 896). The authors suggest that the method also enables researchers to study how apps shape users' self-expression, relationships and interactions. In the illustrative example we present below, we use the walkthrough method to attend not only to the cultural values embedded in the app, but also to its 'technoeconomic assumptions' (Birch 2017), which include claims making and the production of knowledge, as well as the staking of claims and the assertion of expertise (Birch 2017, p. 5). Our discussion of Buycott is informed by our long-term digital ethnographic engagement with the app and its software updates since its launch in 2013. As we have argued elsewhere, long-term engagement with emerging, evolving and elusive digital technologies such as apps enables co-presence with digital platforms and their interfaces, devices, users and objects. It also provides important insights into the enactment of data assemblages and shifting accountability relations within these assemblages (Schneider and Eli 2021; Schneider et al. 2022). The Buycott app Based in California, Buycott is a private company whose barcode-scanning app promotes political participation via selective consumption. Using the slogan "vote with your wallet" 1, Buycott is premised on a logic that positions the app itself and the organization that developed it as mediators of political action, mobilizing both consumers and media support (Eli et al. 2018, p. 213). Users perform product boycotts and buycotts online through initiating and subscribing to campaigns.
Each campaign is issue-specific, and themes include animal rights, civil rights, criminal justice, etc. Once a user initiates a campaign, others can click to join this campaign on the website or through the app. Subscribers are then expected to use the app to scan product barcodes. After each barcode scan, the app produces a 'family tree' of companies and parent companies, thus revealing to users whether the product belongs to companies they should either support or avoid, based on the campaigns to which they've subscribed (Eli et al. 2016). Our research on Buycott began in 2013, shortly after the app's launch. At the time, Buycott's campaigns for the labeling of genetically modified (GMO) foods and against Koch Industries received considerable media coverage from prominent outlets, such as Forbes (O'Connor 2013) andWired (2013). Interested in how Buycott might feed into everyday decision-making about food, we undertook a participatory approach akin to the walkthrough method described by Light et al. (2018). Having joined Buycott campaigns, we began using the app to scan retail products in our own homes. This offered us a first-hand experience of the app's scripts but also of its many bugs (e.g., the provision of unreliable or conflicting product data), generating further questions about how consumers were interpreting and using the app. Thus, in 2014, we shifted our focus to exploring how consumers understood Buycott's knowledge production and its ethical ramifications. Through analyzing user-generated social media posts and reviews, we found that many users did not engage with Buycott's participatory script, but rather viewed themselves as information recipients (Eli et al. 2016). This led to gaps between the app's vision and use in practice. And these gaps, we realized, did not reflect user (mis)interpretation as much as they reflected the app's "dynamic co-constitution, involving the triad of the news media, citizen-consumers, and the ICT platform" (Eli et al. 2016, p. 66). Focusing on this triad, we developed a case study analysis of Buycott's most subscribed campaigns in 2014 -'Long live Palestine' and 'Demand GMO Labelling'. We conducted a thematic discourse analysis of news media texts (published online, April 2013 to August 2014), user-generated posts (Buycott Facebook page, iTunes user reviews), and texts generated by Buycott's developers (Buycott's website, Facebook page, Twitter account). Through this, we analyzed how the multiple discourses within the triad of media, users and developers co-construct and constrain possibilities for consumer action, imbued with internal tensions and contradictions. For example, although the 'Demand GMO Labelling' campaign was aimed at boycotting companies that opposed a California law for GMO labeling, many subscribers expressed the belief that joining this campaign would provide information on which products contained GMO. Thus, we found that while Buycott's developers framed it as enabling 'voting' in supermarket aisles, users framed the app itself as an ethical commodity to be consumed, with media discourses hovering between the developers' vision and the consumers' interpretation -depicting the app both as a means of political participation, and as a product whose use indicated ethical allegiances. As we wrote up our findings, new questions arose about Buycott's sources of funding, its plans for growth, and how user-generated data were used. We therefore approached Buycott's founder, Ivan Pardo, for a Skype interview in early 2017. 
Pardo provided us with responses concerning funds, as well as consumer and company reactions to the app, but did not specify plans for growth and future revenue. However, we felt the interview painted a sufficiently clear picture of Buycott at the time (Eli et al. 2018). In 2019, when TS was preparing a talk which, in part, drew on the Buycott case study, we found that the Buycott website and mobile app had changed. A new tab appeared on the website: 'Barcode API'. Clicking on this tab, we discovered Buycott was now advertising its provision of "The world's largest UPC database", stating that "our comprehensive product API provides data for over 150 million products from every corner of the globe" (emphasis in the original). 2 UPC refers to a Universal Product Code, a unique electronic identifier for retail products. Now offering paid plans, Buycott began selling access to this database: Plan Basic for $49 a month, Plan Developer for $99 a month and Plan Startup for $499 a month. In other words, while consumers continued to use the app to facilitate ethical consumption, product data were also used as a for-profit product and an income generator for the app. The selling of product data, then, became central to Buycott's business model, though it is unknown whether and to what extent these data were being crowdsourced. As we recently reflected, "this blurs the boundaries between consumption and production, and one may argue that Buycott users provide free 'digital labor', typical of the digital economy (cf. Scholz 2013)" (Schneider and Eli 2021). 3 What do Buycott users know about the digital labor upon which the app is premised? In 2014, Hawkins and Horst (2020) conducted the only study, to date, which examines Buycott user experiences. In this focus group study, participants who identified as ethical consumers were prompted to use Buycott and then provide reflections about using the app. Hawkins and Horst's (2020) participants felt that using Buycott to inform their everyday shopping decisions was labor-intensive. However, as the participants did not report generating data on the app, their knowledge about the digital labor required for the generation of user campaigns and product information on Buycott remained unexplored. Moreover, since 2014, Buycott went through a major change, as explained above, and is no longer just an ethical consumption app, but "The world's largest UPC database". 4 To understand what Buycott currently tells users about its business model, we opened a new account on the app. We discovered that when consumers sign up and create an account, the use of consumer crowdsourced data is neither clearly stated nor explained in the process (see Fig. 1 and 2). Although a link to 'terms of service' is visibly displayed (see Fig. 2), there is no 'agree' prompt, and consumers can sign up for the Buycott app without indicating they have read the terms of service. This is atypical, as other apps or websites request approval, often by ticking a box to acknowledge that one has read the terms of service and agrees to them, as part of the signing up process. A look into the terms of service reveals that ownership and sharing of user content is explained in detail. 
For example, the subsection 'Rights in User Content Granted by You' states that "By making any User Content available through the Services you hereby grant to us a non-exclusive, transferable, sublicenseable, worldwide, royalty-free license to use, copy, modify (for formatting purposes only), publicly display, publicly perform and distribute your User Content in connection with operating and providing the Services and Content to you and to other Account holders." 5 Terms of service may provide Buycott and other apps with legal protection. Yet the dry, legal language of terms of service -largely inaccessible, frequently unread -fades into the background compared with the emotionally evocative language used to promote Buycott in the media and on the company's website. Though Buycott now shares product data for revenue, consumers are still invited and mobilized to use the app as a means to ethical consumption. When users sign up to use Buycott, they do so as caring consumers. Ethical shopping is the main incentive behind users' data generation. Yet, consumers' crowdsourced product data, which formed the basis for the app's success, may have found an unexpected value in corporate contexts. When Buycott was first launched, a key element of the app's affordances was to prompt users to contribute product data. Each time an item scanned by a user was not in the database, consumers received a prompt to enter product information such as brand name, company name and product name (a process KE documented in her fieldnotes, 29.11.2013). If users decided to do so, they contributed free labor. Users may not have perceived it this way, as they benefited from others entering product data, too, and were enabled to boycott and buycott as promised. Yet, in addition to the mutual benefit of crowdsourced data between consumers, the collected and aggregated data have gained value for Buycott, allowing it to establish itself as a reputable provider of a UPC database with global scope. In the current version of the app, when users scan a barcode of an unknown product, they receive an error message: "Sorry. We couldn't find that barcode in our database. Try searching for something else". However, the FAQ page on Buycott's website still claims that "much of the product data is crowdsourced" and features user-generated information as key to the app: "If you scan a product that Buycott doesn't know yet, fill out the fields and submit the new product" 6 . Moreover, the app's current version still prompts users to report inaccuracies about product and company information. Interestingly, the app also offers a new feature, linking product pages to 'affiliate partners': when users visit a product page, a shopping basket icon appears in the top right hand corner. Clicking on the shopping basket leads to an Amazon link and the following prompt: "Help keep Buycott free by shopping through our affiliate partners". As such, the app's affordances seamlessly blend users' positionalities, from consumers to activists, campaigners, and data contributors, and back to consumers again-supporting Buycott through shopping on a multinational corporate website. Discussion Using our conceptual framework which builds on datafication, affordances and digital labor, we can develop an understanding of how ethical consumption apps build unpaid user labor into their digital infrastructures and business models. 
We see datafication at work when products are transformed into digital data, as illustrated by the Buycott app, where the transformation of products into data is facilitated by scanning a machine-readable barcode on consumer goods. The barcode, a series of unique black bars, together with the unique 12-digit number beneath it, constitutes the Universal Product Code. UPCs make it easy to identify the scanned product's manufacturer and the product's features, such as the brand name, item, size and color. This datafication process of translating food products into data has added new sources of value to Buycott and, as such, new tools of accumulation (Sadowski 2019). This is evidenced in Buycott's claim that the company holds "The world's largest UPC database" 7 . Access to this database is available through a subscription-based business model which guarantees monthly income streams for Buycott depending on the chosen subscription plan. This use of data by corporate subscribers to build and maintain digital systems and services shows how an initial model of crowdsourcing product data through consumer interaction with the app has created value for Buycott, even if the crowdsourced data themselves are not directly monetized. The process of datafication is facilitated by the app's affordances. Users opening the app find a prominently placed 'scan' icon that encourages them to scan product barcodes. The app's architecture, then, is centrally built around enabling the scanning of products, to inform consumers about the owner of the brand (family tree) or whether the scanned product is in conflict with any campaigns they personally subscribed to. Thereby, scanning reduces individual search cost and the app's affordance "allows certain action and social dynamics to take shape" (Davis 2020, p. 11). Although users can use the app without the scanning feature by simply exploring products already in the database and campaigns set up by other users, or by learning about recent actions taken, trending products or trending campaigns, the full activist potential is connected to using the scanning feature. Thus, we argue the affordances of the Buycott app enable and encourage a form of ethical consumption directly linked to the action of product scanning. We call for more in-depth empirical studies of digital platforms' affordances to study how these platforms encourage this or other types of digital prosumption. It is the scanning of products, central to unlocking the full potential and insights of the Buycott app, that blurs the lines between prosumption and digital activism on the one hand and digital labor on the other hand. Users enter, share, and receive product data to discover which companies are linked to the retail products they buy. Yet, although company-facing information about API subscriptions is only a click away, the possibility that the data a user generates may translate into revenue for Buycott, either directly or indirectly, is not clearly conveyed in the app or on the company's website. This approach to data, though widely employed by digital media companies, seems out of step with the conscious consumption values Buycott actively promotes. However, when Buycott users share data, does it count as digital labor, and is it necessarily exploitative? Critics might argue that we need to ask users before we make this judgment. Buycott users, like users of other digital media, may be savvier than we assume.
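As a brief technical aside, before returning to the question of what users know about their data: the Universal Product Code described above can be validated and split into its conventional segments as in the following sketch. This is generic, simplified UPC-A handling for illustration only, not Buycott's implementation, and the fixed 1/5/5/1 digit split ignores the variable-length company prefixes used in practice.

```python
# Minimal sketch (ours, not Buycott's code): validating a scanned UPC-A
# barcode and splitting it into its classic segments.

def upc_a_check_digit(first_11_digits: str) -> int:
    """Standard UPC-A check digit: 3x the sum of the odd-position digits
    plus the sum of the even-position digits (1-indexed), brought up to the
    next multiple of 10."""
    odd = sum(int(d) for d in first_11_digits[0::2])   # positions 1,3,...,11
    even = sum(int(d) for d in first_11_digits[1::2])  # positions 2,4,...,10
    return (10 - (3 * odd + even) % 10) % 10

def parse_upc_a(code: str) -> dict:
    """Validate a 12-digit UPC-A string and return its segments."""
    if len(code) != 12 or not code.isdigit():
        raise ValueError("UPC-A must be exactly 12 digits")
    if upc_a_check_digit(code[:11]) != int(code[11]):
        raise ValueError("check digit mismatch - likely a mis-scan")
    return {
        "number_system": code[0],
        "manufacturer": code[1:6],   # identifies the brand owner
        "product": code[6:11],       # identifies the specific item
        "check_digit": code[11],
    }

print(parse_upc_a("036000291452"))  # a well-known example UPC-A
```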
Though they might not know precisely how their product data are being used, given public knowledge about how platforms such as Facebook monetize data, it is likely they realize that Buycott, as well, has something to gain from the data they contribute. However, our argument is not that users feel contributing data counts as labor, or that they see themselves as being exploited. Rather, we argue that through a particular digital infrastructure, a "socio-technical architecture of digital labor" (Postigo 2016), apps such as Buycott simultaneously construct users as knowing subjects (seeking/sharing product information) and 'working consumers' (Kleemann et al. 2008; Rieder and Voß 2010; Hornung et al. 2011) (laboring without realizing). Buycott's affordances prescribe a dialectic of knowledge and non-knowledge. In registering to exercise conscious consumption, users also register to perform digital labor for the app. Informal (uncompensated) user activities such as uploading product information, including brand, manufacturer, country and more, are thus simultaneously prosumption, digital activism and digital labor. As the example of Buycott illustrates, digital platforms facilitate data generation and the development of a comprehensive database that might lead to future revenue streams. We suggest that the example of Buycott is illustrative of how contemporary food and eating practices increasingly rely on digital labor, often facilitated by digital platforms. However, we caution against both overly pessimistic and overly optimistic interpretations of this development. Instead, we propose that future research focus on how situated practices of digital platforms afford digital labor in everyday engagements with food. Our call for further attention to the digital labor of ethical consumption, prosumption and digital activism is aligned with emerging work on reconsidering data governance. Recent overviews of data governance proposals suggest that future research and action should attend to issues of equality, sharing and value. For instance, Solomé Viljoen (2020, paragraph 7) proposes a focus on data egalitarianism, suggesting that "rather than proposing individual rights of payment or exit, data governance should be envisioned as a project of collective democratic obligation that seeks to secure those of representation instead". Micheli et al.'s (2020) review of four emerging models of data governance, i.e., data sharing pools, data cooperatives, public data trusts and personal data sovereignty, suggests that these should be considered according to the function of the stakeholders' roles, their interrelationships, articulations of value, and governance principles. We join these calls for a reconsideration of data governance, and suggest that research into ethical consumption apps and digital labor might provide a useful lens on these issues. Conclusion In this paper, we explored, through the illustrative example of the Buycott app and drawing on our previous research, how an ethical consumption app engages users in digital labor. Through investigating the app's affordances, we found that although the app prompts users to contribute digital labor in the form of barcode scanning and correcting product information, as part of crowdsourcing and sharing data, the potential dialogue between these data and the app's current venture (a UPC database with fee-paying corporate subscribers) remains unclear.
The app, therefore, blurs the boundaries between participation and labor, simultaneously constructing users as knowing subjects (seeking/sharing product information) and non-knowing working consumers (laboring without realizing). Our paper contributes a conceptual framework and proposes a research agenda to explore and understand ethical consumption in the digital food economy, by elucidating how ethical consumers engage in digital labor on platforms and apps for digital prosumption and digital food activism. We suggest further research is needed to address the question we raise in our article: how do intermediary digital platforms facilitate digital labor (as part of the everyday digitalization of food), and how could this potentially be governed? Such future research has the potential to shed further light on the design of digital platforms and on how the affordances of these platforms enable or encourage a specific type of consumer, including digital prosumers and activists. These studies also have the potential to examine and reflect upon how work and leisure are no longer separate spheres and how mundane digital interactions are monetized. We particularly call for future research to address automated data collection, where participation in digital labor becomes less laborious, or at least is perceived as less laborious by consumers, as they are less actively involved in the process. We situate our call for further research vis-à-vis the growing challenge of citizen-consumers' participation in the digital economy. Future studies of situated digital labor practices can elucidate how platforms' affordances enable and constrain actions and social dynamics that foster or hinder specific types of digital participation. Ultimately, such studies may contribute to the development of new data governance structures, crucial in addressing the issues raised when ethical consumption becomes digital labor.
Data management for toxicological studies. Organized data management increases the reliability of statistical analysis. The basic purpose of data management is to assure the integrity and the quality of data. To assure data validity, establishing a checking system, such as data audit, would be desirable at the following points: protocol design, supervision of study schedule, definition of data, data collection, choice of tests and procedures, verification, data checking, data recording, data handling, data analysis, and data validation. To process an enormous amount of data on a multitude of items, use of a computerized system would be advantageous. The data processing system in toxicological studies should be based on a protocol-driven system, which gathers and records the data accurately. The main functions that are to be handled by computer are data collection, recording and retrieval via terminals, and statistical analysis of data and assembling of reports. One should be able to validate whether the computer system would perform its intended function accurately, reliably, and consistently. This paper discusses the basic considerations of data management and provides examples of the state of the computerized data management system and its validation. Introduction The fundamental significance of data management lies in assuring the integrity and the quality of data. If the integrity and quality of data are not assured, statistical analysis of the data will not be reliable, no matter what statistical procedure is used. On the other hand, it is important to comprehend the data characteristics and the timing of data generation in selecting appropriate statistical procedures (1). Toxicological studies for the assessment of drug safety require various kinds of tests and observations on a multitude of items in a large number of animals for long periods. Recently, to avoid deviation or error that may occur during data gathering and processing, computerized data processing systems have been developed for managing toxicological data. Good Laboratory Practice (GLP) regulations have also required that nonclinical laboratory studies (toxicological studies) be accurately conducted, recorded, monitored, and reported in accordance with protocol and standard operating procedures (2). The present report discusses the basic considerations of data management, introduces a computerized system that incorporates GLP regulations (3,4), and reviews computer system validation (unpublished data, 5). Basic Considerations of Data Management Toxicological studies may use single doses, repeated doses, stop-start dosing, etc., for a variety of end points. In each experiment, these studies have many test items to be observed, tested, or measured. Regarding the data volume of, for instance, a 13-week toxicity study in rats, the potential number of data evaluations is 550 per animal, and in total about 100,000 data points will be considered in one experiment (Table 1). In practice, several types of experiments are performed in parallel in one laboratory. Under these conditions, many possible errors in data evaluation may arise. Complicated schedules followed in various studies, different tests, end points, and large numbers of animals and samples can conspire to increase errors (Fig. 1). Recently, to avoid errors during data handling, computerized data processing systems have been developed to manage toxicological data.
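A back-of-the-envelope check can illustrate the scale behind the figures quoted above. Note that the group design in the sketch below is our assumption for illustration; only the figure of roughly 550 evaluations per animal comes from the text.

```python
# Rough sanity check of the data volume quoted above. The study design
# (dose groups, sexes, animals per group) is assumed for illustration; only
# the "550 evaluations per animal" figure comes from the text.
evaluations_per_animal = 550
dose_groups = 4          # e.g., control + low/mid/high dose (assumption)
sexes = 2
animals_per_group = 22   # assumption

animals = dose_groups * sexes * animals_per_group        # 176 animals
total_data_points = animals * evaluations_per_animal     # 96,800
print(f"{animals} animals -> about {total_data_points:,} data points per study")
```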
GLP regulation has also required that nonclinical laboratory studies (toxicological studies) be accurately conducted, recorded, monitored, and reported according to protocol and standard operating procedures. The following basic functions in various forms of toxicological studies (e.g., single and repeated administration toxicity, reproductive toxicity, specific toxicity, carcinogenicity, etc.) should be considered for data management. A checking system for data audit should be established according to the following guidelines: situation of the protocol (how to refer to the protocol in practical settings); supervision of the study schedule (how to control the schedule); definition of data (clarification of raw data); procedure for data collection (how to collect data accurately); guidance on test items and their procedures (to match standard operating procedures); verification/checking of data (who checks the data and how); data recording (how to record data accurately); data review (for easy retrieval); monitoring of the study (to establish the inquiry system); qualification of data handling (who handles the data); data analysis (to introduce relevant processing); and data validation (to assure the integrity of data). An important point to consider is the reference to the protocol in any practical setting. The intention or purpose of the study, its schedule, and its contents should be made clear, and they should be clarified and carefully considered during data handling, gathering, or processing. Supervision of the study schedule refers to schedule control; namely, a practical schedule managed according to established protocols. Definition of data clarifies what the raw data are. For correct data processing and statistical analysis, we should deal with raw data directly, not secondary processed data. If we use secondary processed data for further evaluation, verification of the raw data against the secondary processed data should be performed. Gathering data accurately is the basis of data handling. When validating data gathering by sensors such as analyzers or by keyboard, attention should focus on avoiding errors or artificial changes in the data. Developing a unified format for gathering the data is useful for generating data of consistent quality. The standard operating procedure should be continually updated and improved as scientific and technological advances occur. Verification of data should be automatically systematized. At the time of data input, both scientific and computerized check systems should be employed. From the generation of data to the recording of data, the check system shown in Figure 2 may be employed, especially for computerized systems. No system for data handling could easily or sufficiently manage the raw data check in real time without computerized support. For the data check before recording, previous protocol data and historical data should be referenced, and a scientific check by the scientist should also be employed. All these check systems should be engaged by referring to protocol procedure data and historical data. Data recording means simply recording data accurately. Final raw data are input into a uniform database that can be employed for further data processing. The computerized retrieval system should make the unified database easily accessible at any time. Monitoring under the access and inquiry system is important to assure the integrity of the study. Furthermore, qualification of data handling is important to emphasize the responsibility for data handling and data security.
Proper qualification alone can elevate the quality of the data. Under the background protocol or process mentioned herein, the integrity and the quality of data would be fairly well assured. If relevant statistical analyses are employed using these data, the assessment of health risks and other safety evaluations would be improved. From the perspective of toxicologists, three basic considerations about statistics in toxicology are important. First, statistical tests are performed under the premise that the samples are completely random in order to be free from biases. Second, accurate statistical tests should be done using randomization tests such as the Pitman test, so that the analyses are not performed by approximate methods based on erroneous assumptions. Third, both the biological significance and the statistical significance should be considered before concluding that a toxicological effect is significant. To properly conduct data management, the use of computerized systems would be beneficial, and the approach should be applied with the functions mentioned previously. Introduction of Computerized System In the course of conducting toxicological studies, proper guidance to prompt investigators through the correct sequence of testing steps enforces the accurate conduct of the studies according to the standard protocol. When the experimental results are received through computer terminals, the computer system should check the data against the standard protocol and against the history of previous results. The investigator can immediately recognize any errors and can correct them before the information enters the experimental database. The computer system, which promptly processes the generated data by a combination of time-sharing database processing and a real-time multiprogramming system, was introduced as the total computer system at the Nippon Roche Research Center (NRRC) and has been functioning successfully for more than 10 years. The aim of this system is to conduct the study accurately, to record the data, and to report results based on protocol and standard operating procedures under the GLP regulation. A high-level minicomputer (VAX6310) was installed as the host computer, and microcomputer-based terminals for data gathering and retrieval were located in laboratories, animal rooms, dissection rooms, etc. (Fig. 3). Under the control/supervision of the host computer, all toxicological studies are performed in real time. Prompt responses (i.e., short computer response times) are assured by the specific software, written in the Massachusetts General Hospital Utility Multi-Programming System (MUMPS). The following functions are covered completely by the computerized system: a) study conduct with a protocol-driven system (scheduling and guidance on a video display terminal); b) accurate recording and strict correction of data (a conversational system between investigators and the computer, checking against the standard programmed protocol, automatic input from sensors such as autoanalyzers and balances); c) review of data and monitoring of the study (data retrieval via terminal or printer, automatic inquiry system for monitoring data integrity); d) reporting (progress and final reports, statistical analysis); e) backup system (security of data, archiving of data). Figure 4 shows the system configuration. In the computerized system, toxicological studies are conducted on the basis of protocol data that have been programmed into the database and performed using software-based programmed standard operating procedures.
The other systems are accessed or "called" as managing subsystems that support the main study systems to facilitate smooth study performance. Data gathering and retrieval in this system are illustrated in Figure 5. Through the computer terminal interface, the protocol and schedule are assigned by the study director, data are input at the laboratory or animal room by the examiner, and data on managing affairs are also input at the office by the responsible person concerned. All input data can be easily retrieved through a terminal. As the final report, tables and figures with the results of statistical analyses are printed after relevant data processing. Among the other functions, sheets, labels, and written reports are also printed. Figure 6 shows a flow chart of the relationships among the systems for study conduct, data gathering, and data reporting. When the plan for a nonclinical study relating to drug safety is designed by the testing facility management, a protocol is prepared by a study director who is responsible for the overall study conduct. After the protocol is registered on the computer, a master schedule is made automatically in the computer system and shown on the video display to facilitate daily work guidance. In addition, information on assignments necessary for conducting the study but not covered by the protocol is entered by the study director throughout the experimental period. Referring to the protocol assignments, time schedule, and standard operating procedures, the computer system provides daily guidance via video display as to which items shall be tested. This function effectively enforces adherence to the protocol and standard operating procedures, as strictly required by GLP regulation. Regarding guidance on animal/sample numbers and other detailed items for each data input, the investigator confirms the animal and sample numbers and input items on the video display. If the investigator enters the wrong animal number or item, the video display shows no response and data entry is refused. Input data undergo a range check, and data beyond the range are highlighted or refused by the computer system. For the identification of animals and samples, test items and data should be coupled with the corresponding animal and sample ID by means such as keyboard entry, magnetic card, or bar code label. Of these, input by bar code reader is often employed along with video display of the animal and sample ID because it enables handy, economical, and reliable operation. For data recording, real-time data acquisition from terminals or instruments is employed. Data on body weights, feed consumption, and organ weights are recorded directly into the computer from autobalances. Data on hematology and blood chemistry from hematological and biochemical autoanalyzers, clinical signs and urinalysis from note tablets, test animal and sample identification from the bar code reader, and dates and times from the time clock on the host computer are all entered automatically. Prior to data archiving in a permanent computer file (or database), however, all the data undergo corroboration as to their correctness and integrity, in which both the computer system and the investigator participate. Thus, errors in key punching and transcription can be eliminated.
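To make the entry-time checks described above concrete, here is a minimal sketch of an animal-ID confirmation combined with a range check. The record layout, reference ranges, and function names are hypothetical illustrations, not the NRRC system's implementation.

```python
VALID_ANIMAL_IDS = {"M001", "M002", "F001", "F002"}

# Hypothetical acceptable ranges per test item (units are illustrative only).
RANGES = {
    "body_weight_g": (150.0, 600.0),
    "alt_iu_per_l": (10.0, 120.0),
}

def check_entry(animal_id, item, value):
    """Refuse unknown animal IDs; flag values outside the programmed range."""
    if animal_id not in VALID_ANIMAL_IDS:
        return "refused: unknown animal ID"
    low, high = RANGES.get(item, (float("-inf"), float("inf")))
    if not (low <= value <= high):
        return f"flagged: {item}={value} outside [{low}, {high}]"
    return "accepted"

print(check_entry("M001", "body_weight_g", 320.5))   # accepted
print(check_entry("M009", "body_weight_g", 320.5))   # refused
print(check_entry("F002", "alt_iu_per_l", 480.0))    # flagged
```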
When data require correction or amendment, no data in the experimental database may be changed by anyone other than the study director, who is responsible for the overall study conduct. The study director records the corrected data in the study notebook, together with the reasons for the amendment, the original entries, dates, and signatures. When reviewing the data, the laboratory management, study director, investigator, and QAU can review the data of ongoing experiments via video display terminals, using key word entries, dates, and test items, to ensure adherence to standard protocols and also to confirm the data. For the study report, the computer system provides progress reports periodically during the experimental period for assessing the status of the experiment. After completion of the study, the computer system provides a final report with statistical results for data evaluation. After completion of the study, the generated data are saved on magnetic tapes that are stored in the laboratory archives. These data can also be restored from the magnetic tapes to computer disk files and retrieved whenever reinterpretation of the data is of interest. As a data backup, the contents of the disk files are copied onto magnetic tape after completion of daily processing; the data obtained on each day are thus recorded both on disk and on magnetic tape. The work schedule for the following day is recorded on cassette tape at the terminals. If the host computer system goes down, the experimenters are still able to carry out the experiments from the schedules shown on the video display terminals from the cassette tape.

Computer System Validation

Computer system validation to assure the justification of the toxicological data should cover all stages of computerization, from the developmental stage to practical use (Table 2). Prior to performing system validation, the basic specifications of the system (structure of the files/database, functions to be applied, and procedures for study conduct) should be checked in detail. As initial validation at the development/installation stage, which includes retrospective validation, all documents concerning system design, programming and installation procedures, records of system testing (hardware and software), and other validation records should be compiled properly. For practical use, system validation should include daily checks and confirmation: the validity of data processing, security checks, a contingency plan for system down-time, management of changes in software and hardware, confirmation of maintenance by the responsible person, and education and training of users are all necessary. The checkpoints of computer validation are classified as follows:

a) Hardware and computer room: location, access to the host computer, and magnetic media used as archives;
b) Development of the computer system: software (developed in-house, vendor-supplied, or existing), system design specification, system configuration, programming, program testing, validation testing, and documentation;
c) Operation and maintenance.

Regarding the items noted above, the appropriateness of the location of the host computer and terminals, the adequacy of the design and the capacity to function, and the procedures for operation and maintenance would be checked at the time of inspection for computer validation. For a more detailed description of these issues, see the checklists of the "GLP inspection of computer system" that have been issued by the Ministry of Health and Welfare in Japan (6).
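Returning briefly to the correction and amendment requirement described above, the following is a minimal sketch of an audit-trail record for a data amendment. The field names, the amend function, and the example values (including the signer) are hypothetical illustrations, not drawn from the NRRC system.

```python
from datetime import datetime

def amend(database, audit_trail, key, new_value, study_director, reason):
    """Apply a correction while preserving the original entry in an audit trail.

    Only the study director applies amendments; the original value, reason,
    date, and signer are retained so the change can be reconstructed later.
    """
    original = database.get(key)
    audit_trail.append({
        "key": key,
        "original_value": original,
        "corrected_value": new_value,
        "reason": reason,
        "amended_by": study_director,
        "amended_at": datetime.now().isoformat(timespec="seconds"),
    })
    database[key] = new_value

db = {("M001", "day 28", "body_weight_g"): 3250.0}   # transcription error
trail = []
amend(db, trail, ("M001", "day 28", "body_weight_g"), 325.0,
      study_director="T. Yamada (hypothetical)", reason="decimal point error")
print(db)
print(trail[0]["original_value"], "->", trail[0]["corrected_value"])
```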
In conclusion, it is important to emphasize that proper data management elevates the reliability of statistical analysis. The author wishes to thank Messrs. H. Shiozaki and E. Uchida for developing the total computer system in NRRCITP, and Dr. T. lkimwa for his advice on system management.
3,630.2
1994-01-01T00:00:00.000
[ "Computer Science" ]
Multi-View Stereo Network Based on Attention Mechanism and Neural Volume Rendering: Due to the presence of regions with weak textures or non-Lambertian surfaces, feature matching in learning-based Multi-View Stereo (MVS) algorithms often produces incorrect matches, resulting in a flawed cost volume and incomplete scene reconstruction. In response to this limitation, this paper introduces an MVS network based on an attention mechanism and neural volume rendering. Firstly, we employ a multi-scale feature extraction module based on dilated convolution and an attention mechanism. This module enables the network to accurately model inter-pixel dependencies, focusing on information crucial for robust feature matching. Secondly, to mitigate the impact of a flawed cost volume, we establish a neural volume rendering network based on multi-view semantic features and a neural encoding volume. By introducing a rendering reference view loss, we infer the 3D geometric scene, enabling the network to learn scene geometry beyond the cost volume representation. Additionally, we apply a depth consistency loss to maintain geometric consistency across the networks. The experimental results indicate that on the DTU dataset, compared to the CasMVSNet method, the completeness of reconstructions improved by 23.1% and the Overall metric improved by 7.3%. On the intermediate subset of the Tanks and Temples dataset, the average F-score of the reconstructions is 58.00, which outperforms other networks, demonstrating superior reconstruction performance and strong generalization capability.

Introduction

With the rapid development of computer vision technology, multi-view stereo (MVS) has become a highly prominent field of interest. Research in MVS aims to reconstruct three-dimensional information of a scene from multiple perspective images with known camera parameters, playing a crucial role in various domains such as virtual reality, augmented reality, and visual effects in the film industry.
In existing MVS methods, traditional methods based on geometric context [1][2][3][4][5] have achieved good reconstruction results in texture-rich areas, especially in terms of accuracy. However, challenges persist in reconstructing the three-dimensional information of the scene from images in areas with low texture, image occlusions, variations in radiance, or non-Lambertian surfaces. To address this issue, some researchers [6] have employed deep learning techniques, using a Convolutional Neural Network (CNN) to extract image features. They perform robust feature matching within the field of view of the reference camera to construct a cost volume representing the geometric information of the scene. Subsequently, they employ a 3D U-Net network for regularization to regress depth maps. Finally, the scene's three-dimensional information is reconstructed through depth-map fusion. While this approach enhances the overall quality of reconstructed scenes, it encounters difficulties in challenging areas with low texture or non-Lambertian surfaces, where features at the same 3D position exhibit significant differences between different views. Incorrect feature matching results in the construction of a flawed cost volume by the network, leading to poor completeness in the final reconstruction. This is because traditional CNNs have fixed receptive field sizes, which limit feature extraction networks to capturing only local features and hinder the perception of global contextual information. The lack of global contextual information often causes the network to exhibit local ambiguities in challenging regions, reducing matching robustness. Recent studies have employed self-attention mechanisms [7,8] to capture information crucial for cost volume computation by considering context similarity and spatial proximity. This has improved matching robustness and enhanced the ability of the cost volume to represent scene geometry. However, there remains significant potential for enhancing reconstruction quality, especially in challenging areas.
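For concreteness, here is a minimal PyTorch-style sketch of the variance-based cost aggregation and soft-argmax depth regression used in MVSNet-style pipelines of the kind described above. It is a generic illustration under our own assumptions (channel averaging stands in for the 3D CNN regularization), not the code of any cited work.

```python
import torch

def variance_cost_volume(feature_volumes):
    """Variance-based aggregation of warped per-view feature volumes.

    feature_volumes: (N_views, C, D, H, W), each entry a source-view feature
    map warped onto the reference view's D depth hypothesis planes.
    """
    mean = feature_volumes.mean(dim=0, keepdim=True)
    return ((feature_volumes - mean) ** 2).mean(dim=0)        # (C, D, H, W)

def soft_argmax_depth(prob_volume, depth_values):
    """Depth regression as the probability-weighted sum over depth planes.

    prob_volume: (D, H, W) per-pixel probabilities over depth hypotheses;
    depth_values: (D,) depth hypotheses.
    """
    return (prob_volume * depth_values.view(-1, 1, 1)).sum(dim=0)  # (H, W)

# Toy example; random tensors stand in for warped view features.
N, C, D, H, W = 4, 8, 16, 32, 32
warped = torch.randn(N, C, D, H, W)
cost = variance_cost_volume(warped).mean(dim=0)               # (D, H, W); in practice a 3D CNN regularizes this
prob = torch.softmax(-cost, dim=0)                            # lower cost -> higher probability
depths = torch.linspace(425.0, 935.0, D)
depth_map = soft_argmax_depth(prob, depths)
print(depth_map.shape)                                        # torch.Size([32, 32])
```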
Recently, the Neural Radiance Field (NeRF) [9] rendering technique has made significant advancements in the fields of computer vision and computer graphics. NeRF models view-dependent photometric effects using differentiable volume rendering, enabling it to reconstruct implicit 3D geometric scenes. Additionally, it learns a volume density that can be interpreted as depth, allowing it to represent the reconstructed geometry explicitly through indirectly rendered depth. Subsequent works [10][11][12][13][14][15] have focused on accelerating its rendering speed and on implicitly learning the 3D scene geometry with strong generalization capability by taking a few views as input and combining them with an MVS network to synthesize higher-quality novel views or more accurate depth maps. However, these efforts have primarily advanced the development of the Neural Radiance Field while overlooking the quality of point cloud reconstruction by the MVS network. Therefore, our method leverages the precise neural volume rendering of the Neural Radiance Field to build 3D geometric information about the scene. This approach enables depth to be rendered even in challenging areas with low texture or non-Lambertian surfaces, allowing the MVS network to learn rich scene geometry beyond the cost volume that represents scene geometry. This overcomes issues arising from rough depth maps caused by incorrect matching in the network, ultimately enhancing the quality of the reconstructed point cloud.

In conclusion, we propose an end-to-end MVS network based on an attention mechanism and neural volume rendering. By combining dilated convolution and an attention mechanism during feature extraction, we extract rich feature information. This allows the network to achieve reliable feature matching in challenging regions. Leveraging the capacity of neural volume rendering to resolve scene geometry, our approach mitigates the impact of the flawed cost volume arising from incorrect feature matching. Our method exhibits high completeness in reconstructing point clouds on the competitive DTU dataset of indoor objects and demonstrates robust performance on the Tanks and Temples dataset of outdoor scenes. It outperforms many learning-based MVS networks, thus advancing MVS-based 3D reconstruction in important domains such as virtual reality, augmented reality, and autonomous driving.

In summary, our primary contributions can be outlined as follows:
• We introduce a multi-scale feature extraction module based on triple dilated convolution and an attention mechanism. This module increases the receptive field without adding model parameters, capturing dependencies between features to acquire global context information and to enhance the representation of features in challenging regions;
• We establish a neural volume rendering network using multi-view semantic features and a neural encoding volume. The network is iteratively optimized through a rendering reference view loss, enabling precise decoding of the geometric and appearance information represented by the radiance field. We introduce a depth consistency loss to maintain geometric consistency between the MVS network and the neural volume rendering network, mitigating the impact of the flawed cost volume;
• Our approach demonstrates state-of-the-art results on the DTU dataset and the Tanks and Temples dataset.
The remaining structure of this paper is as follows. In Section 2, we present an overview of work related to learning-based MVS networks and neural volume rendering. In Section 3, we delve into the various components of our proposed MVS network based on an attention mechanism and neural volume rendering. Section 4 reports an extensive set of experimental results on the DTU dataset and the Tanks and Temples dataset, supplemented by ablation experiments that validate the effectiveness of the proposed modules. Finally, in Section 5, we offer the conclusion of the article.

Learning-Based Multi-View Stereo

In light of the flourishing progress in deep learning technologies, a multitude of researchers have harnessed CNNs to tackle MVS tasks. As a representative work, MVSNet [6] established a deep learning-based MVS pipeline. This pipeline generates a 3D cost volume by integrating features from various perspectives through differentiable homography transformations. Subsequently, 3D CNNs are employed to refine the cost volume and perform depth regression. Nonetheless, MVSNet consumes a substantial amount of memory, prompting subsequent efforts to seek more lightweight approaches. The study [16] employed a recurrent architecture, which processes two-dimensional feature maps sequentially along the depth direction using Gated Recurrent Units (GRUs). This approach avoids the memory consumption associated with regularizing the entire cost volume at once, enabling high-resolution reconstructions. Another approach [17] estimates and refines depth maps in a coarse-to-fine manner. Initially, it predicts low-resolution depth maps with a large depth interval; as the depth range decreases, the algorithm iteratively increases the depth map resolution. This effectively reduces the memory consumption caused by excessively large cost volumes. However, due to the limitations of CNNs in capturing feature information in challenging regions, such as areas with weak textures and non-Lambertian surfaces, subsequent efforts have introduced attention mechanisms into MVS networks to enhance the feature representations of images. Works like [7] have incorporated a self-attention mechanism at the feature extraction stage, enabling the network to focus more on crucial information and capture interdependencies between pixels. Nevertheless, there remains significant room for improvement in point cloud reconstruction, particularly in challenging areas. Because the self-attention mechanism in Transformer models [18] has an inherent advantage in capturing global contextual information, subsequent works [19][20][21] have introduced it into MVS, enabling a comprehensive understanding of the global context within the MVS model and the extraction of rich information from the environment. However, this often leads to increased computation time and memory consumption, especially in the reconstruction of high-resolution and large-scale scenes, incurring substantial computational costs.
Neural Volume Rendering Based on Multi-View Stereo

The Neural Radiance Field (NeRF) [9] represents scenes as continuous implicit functions of position and direction for high-quality view synthesis, achieving photorealistic rendering at the pixel level. Subsequent works [15,22] have extended NeRF using MVS to support various other neural rendering tasks. MVSNeRF [15] utilizes a cost volume constructed by MVS for geometry-aware scene inference, combining it with neural volume rendering for radiance field reconstruction and enabling high-quality view synthesis even with a limited number of images. RC-MVSNet [22], on the other hand, leverages a strongly generalized cost volume derived from MVS, combining it with neural volume rendering to reconstruct the implicit scene. It introduces a neural volume rendering-based reference view synthesis loss to optimize the implicit scene information, alleviating the photometric blur issues on non-Lambertian surfaces encountered by unsupervised MVS networks. Our method leverages a strongly generalized cost volume and incorporates crucial 2D feature information from multiple views for neural volume rendering. In an end-to-end learning manner, it conducts precise geometric inference for scene perception, mitigating the impact of flawed cost volumes constructed by the MVS network due to incorrect matches in challenging regions.

Methods

In this section, we elucidate the overall architecture of the proposed method, as illustrated in Figure 1. This architecture primarily comprises the MVS network and the neural volume rendering network. Specifically, in the feature extraction stage, we introduce the attention-aware feature extraction module. This module combines dilated convolutions with attention mechanisms to enhance multi-level feature-capturing capabilities, thereby extracting more comprehensive feature information. The MVS network progressively constructs a probability volume in a coarse-to-fine manner to estimate the depth maps and confidence maps. Subsequently, we design a novel neural volume rendering network.
The multi-layer perceptron (MLP) network uses multi-view 2D feature vectors, along with the 3D neural encoding volume containing geometry-aware information, as the mapping condition. Additionally, we adopt a uniform sampling strategy guided by the depth maps and confidence maps to focus the scene sampling on the estimated depth surface region. Finally, we apply the rendering reference view loss L_RRV to precisely resolve the geometric shape of the scene from the radiance field. We also introduce the depth consistency loss L_DC to ensure geometric consistency between the MVS network and the neural volume rendering network. It is noteworthy that the proposed architecture functions as a universal framework for training the MVS network, making it applicable to any learning-based MVS network. The two networks provide mutual supervision and are optimized simultaneously.

Attention-Aware Feature Extraction Module

We propose the attention-aware feature extraction module. This module resembles a 2D U-Net, with elementary units comprising an encoder and a decoder connected by skip connections. The encoder is a residual network composed of dilated convolutional layers and an attention module, as depicted in Figure 2. In the encoder, the output features of each layer are first subsampled using a 3 × 3 convolutional layer with a stride of 2.
Subsequently, dilated convolutional layers with 3 × 3 kernels are employed to expand the receptive field of the input features, facilitating the exploration of deep-level fine-grained features. To address the potential information-correlation issues associated with the use of triple dilated convolution, we adopt a strategy similar to that of [23], in which the feature maps are passed through a residual structure with a Sigmoid function after undergoing dilated convolutional layers with different dilation rates. To create the final feature map, the three fine-grained features are combined and passed through a convolutional layer with an attention module. A convolutional layer with a 3 × 3 kernel and a deconvolutional layer with a stride of 2 make up the decoder. When provided with a reference image I_1 and source images {I_i} (i = 2, ..., N) at a resolution of H × W captured from different viewpoints, the attention-aware feature extraction module outputs features at three different scales, one per stage k.

Figure 3 provides a visual representation of the attention module's architectural design. The features, which have been fused through the dilated convolutional layers, are input into two convolutional layers with 3 × 3 kernels, each followed by Group Normalization (GN) and a ReLU activation function. Subsequently, we incorporate a LayerScale-based local attention layer [24]. The operational details of this local attention layer are elucidated in Figure 4, which illustrates the mapping of queries and a collection of key-value pairs to an output, with per-pixel outputs computed via a Softmax operation:

y_{ij} = \sum_{(a,b) \in \mathcal{R}(i,j)} \mathrm{softmax}\big(q_{ij}^{\top} k_{ab}\big)\, v_{ab}

In this equation, q_{ij} = W_q x_{ij}, k_{ab} = W_k x_{ab}, and v_{ab} = W_v x_{ab} represent the queries, keys, and values, respectively, with the matrices W_n (n = q, k, v) composed of learnable parameters; R(i,j) denotes a local region of 3 × 3 input positions around (i, j). To address the permutation equivariance resulting from the lack of encoded positional information, we introduce relative positional embeddings by incorporating learnable parameters into the keys, as described in [25]. The relative distance vector r_{ab} is partitioned along the channel dimension, with half of the output channels allocated to encoding the row offset and the remaining half to encoding the column offset. Furthermore, the features x_att produced by the attention layer are multiplied by learned diagonal matrix weights within the network,

x_{\mathrm{out}} = \mathrm{diag}(s_1, \ldots, s_n)\, x_{\mathrm{att}},

where s_1 to s_n are learnable weights.
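To make the local attention layer and the LayerScale-style scaling more tangible, here is a minimal PyTorch sketch. It is a simplified illustration under our own assumptions (a single head, no relative positional embeddings, arbitrary hyper-parameters), not the paper's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttention2d(nn.Module):
    """Simplified 3x3 local self-attention with a learnable per-channel
    (LayerScale-style) scaling applied to the attention output."""

    def __init__(self, channels, window=3):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1, bias=False)
        self.k = nn.Conv2d(channels, channels, 1, bias=False)
        self.v = nn.Conv2d(channels, channels, 1, bias=False)
        self.window = window
        # LayerScale: learnable diagonal weights, initialized small.
        self.scale = nn.Parameter(1e-4 * torch.ones(channels))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).view(b, c, 1, h * w)                       # queries per pixel
        pad = self.window // 2
        # Gather the 3x3 neighbourhood of keys and values for every pixel.
        k = F.unfold(self.k(x), self.window, padding=pad).view(b, c, self.window ** 2, h * w)
        v = F.unfold(self.v(x), self.window, padding=pad).view(b, c, self.window ** 2, h * w)
        attn = torch.softmax((q * k).sum(dim=1) / c ** 0.5, dim=1)   # (B, 9, H*W)
        out = (v * attn.unsqueeze(1)).sum(dim=2).view(b, c, h, w)
        return x + self.scale.view(1, c, 1, 1) * out             # residual + LayerScale

feats = torch.randn(1, 16, 32, 40)
print(LocalAttention2d(16)(feats).shape)                          # torch.Size([1, 16, 32, 40])
```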
Cost Volume Construction

Subsequently, we perform adaptive depth hypothesis sampling using J layers of depth hypothesis planes {D_j} (j = 1, ..., J). Based on these hypotheses, we construct feature volumes {V_i} (i = 1, ..., N) by differentiably warping the 2D source-view features to the reference view. Under the depth plane hypothesis d, the warping between a pixel p in the reference view and its corresponding pixel p_i in the i-th source view is defined (in homogeneous coordinates) as

p_i \sim K_i \big( R_i K^{-1} p \, d + t_i \big),

where K_i and K are the intrinsic matrices of the i-th source camera and the reference camera, respectively, and R_i and t_i represent the rotation and translation between the two views. Subsequently, we consolidate the multiple feature volumes {V_i} into a 3D cost volume V using the variance-based aggregation strategy. The cost volume is then regularized into a depth probability volume using a 3D U-Net, from which we obtain the probability P_j(p) on a given depth plane D_j(p) for the pixel p in the reference view. Following this, we calculate the estimated depth value D(p) for the pixel p as

D(p) = \sum_{j=1}^{J} D_j(p)\, P_j(p).

Neural Volume Rendering Network

To further alleviate incorrect feature matching in MVS, caused by large differences between adjacent views in the appearance of features at the same 3D location, we introduce a neural volume rendering network. This network is trained in a self-supervised manner to learn the scene geometry, providing the MVS network with rich scene geometry information. This helps mitigate the impact of the flawed cost volume produced by incorrect matching in the MVS network.

Scene Representation Based on Multi-View Features and Neural Encoding Volume

Our network extracts latent 2D feature vectors from the encoded contextual information of the source views. These multi-view 2D features provide additional semantic information about the scene, addressing 3D geometric ambiguity and enhancing the network's ability to handle occlusion. Inspired by PixelNeRF [13], we project 3D points from arbitrary positions in space into the input multi-view images. For N different views {I_i} (i = 1, ..., N), each view has an extrinsic matrix T_i = [R_i | t_i] relative to the target reference image and an intrinsic matrix K_i. To acquire the color c and volume density σ of a 3D point, we begin by converting its 3D position x, and its reference view direction g, into the coordinate system of each view, yielding the 3D point x_i = R_i x + t_i. We then project this point onto the corresponding image and feature maps and employ bilinear interpolation to sample its pixel value p_i and feature vector f_i; here g_i represents the projection of the 3D point x's reference view direction onto the respective observation direction of each view.
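As a concrete illustration of this projection-and-sampling step, the following is a minimal sketch. Boundary handling is omitted, and the function name, toy pose, and toy feature map are our own illustration rather than the paper's code.

```python
import torch

def project_and_sample(x_world, R, t, K, feat_map):
    """Project a 3D point into one source view and bilinearly sample its feature.

    x_world: (3,) point in reference/world coordinates; R: (3, 3); t: (3,);
    K: (3, 3) intrinsics; feat_map: (C, H, W) feature map of that view.
    """
    x_cam = R @ x_world + t                      # x_i = R_i x + t_i
    uvz = K @ x_cam
    u, v = (uvz[0] / uvz[2]).item(), (uvz[1] / uvz[2]).item()   # pixel coordinates

    # Bilinear interpolation of the feature map at (u, v); no bounds checking here.
    c, h, w = feat_map.shape
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    f = ((1 - du) * (1 - dv) * feat_map[:, v0, u0] + du * (1 - dv) * feat_map[:, v0, u1]
         + (1 - du) * dv * feat_map[:, v1, u0] + du * dv * feat_map[:, v1, u1])
    return (u, v), f                             # sampled location and feature vector f_i

# Toy example with an identity pose and a synthetic feature map.
K = torch.tensor([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
R, t = torch.eye(3), torch.zeros(3)
feat = torch.randn(8, 64, 64)
uv, f_i = project_and_sample(torch.tensor([0.1, -0.05, 2.0]), R, t, K, feat)
print(uv, f_i.shape)
```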
We utilize a weighted pooling operator, denoted ψ, to aggregate the multi-view feature vectors, as illustrated in Figure 5. Initially, we combine the feature vector f_i with the pixel information p_i to create a two-dimensional feature vector. Subsequently, we compute the mean µ and variance ν of the two-dimensional feature vectors to capture both local and global information. The two-dimensional feature vector, concatenated with µ and ν, is then fed into our specially designed lightweight MLP, which extracts the multi-view perceptual features, denoted f'_i, and the pooling weights, denoted w_i. Finally, by applying a Softmax operation to the weight vector, we perform a weighted pooling of the multi-view perceptual features, resulting in the final feature vector

f_{img} = \sum_{i=1}^{N} \mathrm{softmax}(w)_i \, f'_i.

Subsequently, following the same approach as in RC-MVSNet [22], we perform trilinear interpolation on the 3D neural encoding volume constructed by the MVS network, resulting in a voxel-aligned three-dimensional feature denoted f_voxel. We then pass the weighted-pooled final feature vector f_img and the three-dimensional feature f_voxel through an MLP network to obtain the RGB color c and volume density σ at the 3D sampling points in the reference view direction.

Confidence and Depth-Guided Sampling for Volume Rendering

In the reference view I_1, each pixel p corresponds to a ray defined in the world coordinate system. The 3D point associated with the pixel p along this ray, at a distance e from the origin o, can be represented as r_p = o + e·g. To render the color I_1(p) at the pixel p, rays are uniformly sampled at M discrete sample distances e_m within the original NeRF near and far planes [e_n, e_f], and the radiance field ϕ is queried at each of these 3D points. Because of the uniform sampling probability within the sampling range in the original NeRF, the points may not be concentrated on the surface of the object, leading to a decrease in the quality of the rendered reference view. Therefore, for pixel p, we propose to sample candidate points under the guidance of a prior range defined by the depth estimate D(p) and its confidence from the MVS network. We define a standard deviation Ŝ(p) as the degree of confidence for pixel p with depth estimate D(p). The potential location of the object surface corresponding to each pixel should then be confined within the interval defined by the depth estimate D(p) and the standard deviation Ŝ(p), represented as Û(p). The confidence and depth range Û(p) contains valuable signals to guide sampling along rays; thus, for rendering the color of a 3D point x in the geometric scene, we replace the coarse network used for hierarchical sampling in the original NeRF. We distribute half of the sampled points uniformly between the near plane e_n and the far plane e_f; the second half of the sampled points are drawn within the range of the confidence and depth prior Û(p). This ensures both the network's generalization capability and model convergence. Figure 6 presents a comparison between the two sampling methods.
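The guided-sampling idea can be illustrated with a minimal sketch. The prior interval [D(p) − Ŝ(p), D(p) + Ŝ(p)] used here and the half/half sample allocation are our reading of the description above; the paper's exact interval and allocation may differ, and the function and toy values are our own.

```python
import torch

def guided_sample_distances(depth, sigma, near, far, n_samples):
    """Sample ray distances, half uniformly in [near, far] and half inside a
    depth/confidence prior [depth - sigma, depth + sigma]."""
    half = n_samples // 2
    uniform = torch.rand(half) * (far - near) + near
    lo = torch.clamp(depth - sigma, min=near)
    hi = torch.clamp(depth + sigma, max=far)
    guided = torch.rand(n_samples - half) * (hi - lo) + lo
    return torch.sort(torch.cat([uniform, guided])).values

# Toy example: one ray whose MVS depth estimate is 2.5 with confidence +/- 0.1.
e_m = guided_sample_distances(depth=torch.tensor(2.5), sigma=torch.tensor(0.1),
                              near=0.5, far=5.0, n_samples=16)
print(e_m)
```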
Next, we render the predicted colors and volume density values {(c_m, σ_m)} (m = 1, ..., M) at the sampled points into the predicted reference pixel,

\hat{I}_1(p) = \sum_{m=1}^{M} E_m \big(1 - \exp(-\sigma_m \delta_m)\big)\, c_m, \qquad E_m = \exp\Big(-\sum_{m'=1}^{m-1} \sigma_{m'} \delta_{m'}\Big),

where E_m represents the cumulative transmittance along the ray at e_m, and δ_m = e_{m+1} − e_m is the distance between adjacent samples. Our objective is to deduce precisely the depth value corresponding to the reference view from the radiance field. We therefore obtain the depth value for pixel p by performing a density integral along the ray in the reference view direction,

\hat{D}(p) = \sum_{m=1}^{M} E_m \big(1 - \exp(-\sigma_m \delta_m)\big)\, e_m.

Loss Function

Within the neural volume rendering network, following the methodology established in the original NeRF, we introduce the rendering reference view loss. This loss uses the mean squared error to quantify the disparity between the color rendered along rays from the reference view and the color of the corresponding ground-truth reference view,

L_{RRV} = \sum_{p} \big\| \hat{I}_1(p) - I_1(p) \big\|_2^2 .

By optimizing the pixel values of the rendered reference view, we enhance the implicit geometric representation capability of the 3D scene. To ensure geometric consistency between the two networks, we propose the depth consistency loss L_DC. This loss employs an L1 penalty to minimize the difference between the rendered depth and the depth estimated by the MVS network, while also minimizing the difference between the rendered depth and the ground-truth depth. Within the MVS network, we use an L1 loss L_MVS as the training loss, quantifying the divergence between the ground-truth depth and the estimated depth. In the end, the overall training loss for the end-to-end network is

L = \lambda_{MVS} L_{MVS} + \lambda_{RRV} L_{RRV} + \lambda_{DC} L_{DC}.

Experiments

We comprehensively present the performance of our proposed method through a series of experiments. Additionally, we perform ablation experiments to validate the efficacy of the proposed attention-aware feature extraction module, the loss functions, and the confidence and depth-guided sampling strategy.

Datasets

We conducted model training and evaluation using the DTU dataset [26] and the Tanks and Temples dataset [27]. The DTU dataset comprises 124 scenes captured from 49 distinct viewpoints under 7 different lighting conditions, collected using a robotic arm in indoor environments. We assess the reconstructed point cloud using three criteria: Accuracy, Completeness, and Overall. Accuracy (Formula (20)) is the average of dis_{re→gr} over all points re ∈ RE, where RE denotes the reconstructed point cloud, GR denotes the ground-truth point cloud, and dis_{re→gr} is the shortest distance from a point in the reconstructed point cloud to the ground-truth point cloud. Completeness (Formula (22)) is the average of dis_{gr→re} over all points gr ∈ GR, where dis_{gr→re} is the shortest distance from a point in the ground-truth point cloud to the reconstructed point cloud; it reflects how much of the ground-truth surface is captured by the reconstruction. Overall (Formula (23)) is the average of Accuracy and Completeness.
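The two distance-based metrics can be computed with a few lines of code, as in the sketch below. This is a simplified illustration in the spirit of the definitions above; the official DTU evaluation additionally applies observability masks and outlier thresholds that are omitted here, and the function name is our own.

```python
import torch

def accuracy_completeness(recon, gt):
    """Mean nearest-neighbour distances between two point clouds.

    recon: (P, 3) reconstructed points; gt: (Q, 3) ground-truth points.
    Returns (accuracy, completeness, overall).
    """
    d = torch.cdist(recon, gt)                  # (P, Q) pairwise distances
    acc = d.min(dim=1).values.mean()            # reconstruction -> ground truth
    comp = d.min(dim=0).values.mean()           # ground truth -> reconstruction
    return acc.item(), comp.item(), ((acc + comp) / 2).item()

recon = torch.rand(500, 3)
gt = torch.rand(800, 3)
acc, comp, overall = accuracy_completeness(recon, gt)
print(f"Acc {acc:.4f}  Comp {comp:.4f}  Overall {overall:.4f}")
```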
The Tanks and Temples dataset, on the other hand, captures complicated real-world scenes and comprises 8 intermediate subsets and 6 advanced subsets. We utilize the F-score as the evaluation metric for the Tanks and Temples dataset. The F-score takes into account the precision PR and recall RE of the reconstructed point cloud, with precision defined as in Equation (22) and recall as in Equation (23); the F-score itself (Equation (24)) is their harmonic mean,

F = \frac{2 \cdot PR \cdot RE}{PR + RE}.

End-to-End Training Details

We fixed the number of input images at N = 4 and resized the original images to a resolution of 512 × 640 pixels during the training phase. We divided the MVS network into three stages, with each stage taking input images at 1/16, 1/4, and 1 of the original resolution, respectively. We assumed the same number of plane-sweep depths and depth intervals as [17]: specifically, for the three stages, we used 48, 32, and 8 plane-sweep depths and depth intervals of 4, 2, and 1, respectively. In the neural volume rendering network, we set the number of sampled rays to 1024. We used the Adam optimizer with λ_DC1 = 0.8, λ_DC2 = 0.2, λ_RRV = 1, λ_DC = 0.01, and λ_MVS = 1. The training process comprised 16 epochs, commencing with an initial learning rate of 0.0001; this learning rate was halved at the 10th, 12th, and 14th epochs. Our method was trained with a batch size of 2 on 2 Nvidia RTX 3090 Ti GPUs.

Results on DTU Dataset

Our model was assessed with 5 neighboring views (N = 5) and input images at a resolution of 1152 × 864 pixels. We conducted a comparative analysis between our outcomes and those obtained from various traditional techniques as well as cutting-edge learning-based approaches. The quantitative evaluation results are presented in Table 1. Our method excelled in terms of completeness, exhibiting a significant 27% improvement compared to CVP-MVSNet [28]. Moreover, our approach outperformed existing advanced methods in terms of overall reconstruction quality. In addition to the quantitative analysis, Figure 7 provides qualitative visual results of the reconstructed point clouds. Our model generated more complete point clouds with finer texture details in challenging regions characterized by weak textures and lighting reflections, compared to CasMVSNet [17] and UCS-Net [29].
Results on Tanks and Temples Dataset

We conducted assessments using input images at a resolution of 1920 × 1080 and a neighboring view count set to N = 5. Table 2 presents quantitative results for the intermediate subset. Our method demonstrates superior performance across most intermediate subsets, underscoring its effectiveness and generalization capability. Figure 8 offers illustrative qualitative visualizations of the reconstructed 3D point clouds, highlighting the robust reconstruction capabilities of our algorithm. Figure 9 showcases qualitative results for the "Train" and "Horse" scenes within the intermediate subset. Our method excels in producing more precise and comprehensive points, particularly in regions with low-texture attributes or non-Lambertian surfaces. In the more complex advanced subsets, as delineated in Table 3, our approach performs better than previous advanced learning-based approaches in the scenes "Ballroom" and "Palace".

Ablation Study

We conducted four comparative experiments on the DTU evaluation dataset. We investigated the impact of the different loss functions and the attention-aware feature extraction module on the reconstruction results. Additionally, we examined the impact of varying dilation rates in the attention-aware feature extraction module. We also assessed the influence of the confidence and depth-guided sampling strategy under different view counts. Finally, we evaluated the network's performance when varying the number of rays used for sampling.

Influence of Attention-Aware Feature Extraction Module and Different Loss Functions

We discuss the impact of the attention-aware feature extraction module and the different loss functions on the final point cloud reconstruction, and the associated effects on model parameters, inference time, and memory usage during testing, building upon the baseline model CasMVSNet. The outcomes are displayed in Table 4, clearly illustrating that the attention-aware feature extraction module, along with the two loss functions, significantly enhances the completeness of the point cloud reconstruction. When these components are combined with the baseline CasMVSNet model, the improvement in point cloud reconstruction is most prominent in the completeness assessment, while a high overall evaluation level is maintained. During evaluation on the test dataset, our proposed model bypasses the neural volume rendering network; instead, it uses the MVS network to estimate depth maps based on the learned feature weights. As a result, only a minor increase in the number of parameters, inference time, and memory usage over the baseline model is introduced to enhance the completeness and overall quality of the reconstructed point clouds. We also visualize the influence of these components on the reconstruction results, as shown in Figure 10. By adding the neural volume rendering network and incorporating the rendering reference view loss and depth consistency loss into the baseline CasMVSNet, the network learns additional scene geometry information beyond the cost volume representing scene geometry. This leads to an enhancement in the completeness of the reconstructed point cloud. Additionally, the inclusion of the attention-aware feature extraction module extracts rich feature information that mitigates feature-matching errors, resulting in improved point cloud reconstruction in regions with weak texture and non-Lambertian surfaces.
Impact of Different Dilation Rates in the Attention-Aware Feature Extraction Module

Table 5 presents the influence of different dilation rates in the attention-aware feature extraction module on the reconstruction results. When the dilation rates of the three dilated convolutions are set to 2, 3, and 4, the overall quality of the point cloud reconstruction is the best. However, as the dilation rates increase further, the continuity of the extracted feature information decreases, resulting in reduced information coherence; consequently, the overall quality of the point cloud reconstruction deteriorates.

Table 6 demonstrates the impact of sampling within the confidence and depth prior range on the reconstruction results under varying numbers of views. The point cloud reconstruction achieves the best overall quality when the number of views is set to 4; therefore, we adopted this view count for the other ablation analyses. Furthermore, the confidence and depth-guided sampling strategy concentrates the sampled points near the object's surface. This allows the network to construct the geometric shape of the neural radiance field accurately, thereby mitigating the impact of the flawed cost volume on the network. Consequently, the point cloud reconstruction exhibits an overall improvement in performance.

Performance of Sampling with Varying Numbers of Rays

During volume rendering, we quantitatively assessed the impact of varying the number of sampled rays on the point cloud reconstruction results. As shown in Table 7, we conducted experiments with four different sampling quantities. The point cloud reconstruction achieved the best accuracy and completeness when the number of sampled rays reached 1024.

Conclusions

In this research, we introduce an attention-aware feature extraction network to capture inter-pixel dependencies and adequately extract semantic information from the original views. Furthermore, we establish a novel neural volume rendering network based on multi-view semantic features and a neural encoding volume, utilizing a rendering reference view loss to reconstruct the 3D scene geometry. Additionally, we introduce a depth consistency loss to maintain the consistency of the scene geometry, alleviating the impact of incorrect matching in regions with weak texture or non-Lambertian surfaces. Extensive experimentation on both the DTU and Tanks and Temples datasets showcases the superior performance of our network compared to previous state-of-the-art approaches. Comprehensive ablation studies validate the effectiveness of the individual modules introduced.
Figure and table captions:

Figure 1. Illustration of the proposed approach. Our network consists of the MVS network and the neural volume rendering network.
Figure 2. The design of the feature extraction module we propose.
Figure 3. The architecture of the attention module. This module is a residual structure composed of a mixture of convolutional layers and a local attention layer.
Figure 4. The architecture of the local attention layer.
Figure 5. Weighted pooling operation. Here, N represents the number of input views. The notation below the MLP denotes the dimensions of the input and output variables in the linear layers.
Figure 6. Comparison of the two sampling methods. In contrast to the uniform sampling employed in the original NeRF, sampling within the confidence and depth prior range concentrates the samples more on the surface of the object.
Figure 7. On DTU dataset scans 13 and 77, we compare the reconstruction results with CasMVSNet, UCS-Net, and the ground truth.
Figure 8. Visualization of 3D point clouds of (a) the scene "Family", (b) the scene "Lighthouse", (c) the scene "Horse", (d) the scene "Train", (e) the scene "Playground", (f) the scene "Temple", and (g) the scene "Museum" on the intermediate and advanced subsets of the Tanks and Temples dataset.
Figure 9. The precision results of the "Train" scene (τ = 5 mm) and the recall results of the "Horse" scene (τ = 3 mm) reconstructed on the Tanks and Temples dataset, compared with CasMVSNet, R-MVSNet, and CVP-MVSNet. Here, τ represents the official distance threshold, and darker regions indicate higher errors relative to τ.
Figure 10. Qualitative results of scan 48 on the DTU dataset using the attention-aware feature extraction module and various loss functions.
Table 1. Quantitative results for the DTU dataset (lower scores indicate better performance). The results are categorized into traditional methods and learning-based methods. All results other than our own are taken from previously released research.
Table 2. Quantitative results of the F-score on the intermediate subset of the Tanks and Temples dataset (higher scores indicate better performance).
Table 3. Quantitative results of the F-score on the advanced subset of the Tanks and Temples dataset (higher scores indicate better performance).
Table 4. Ablation study of the attention-aware feature extraction module and the different loss functions. The baseline is CasMVSNet; AM denotes the attention-aware feature extraction module; RRV denotes the rendering reference view loss; DC denotes the depth consistency loss.
Table 5. Influence of different dilation rates of the dilated convolutions in the attention-aware feature extraction module on the reconstruction results.
Table 6. Ablation study of the confidence and depth-guided sampling strategy under different view counts. CDG denotes the confidence and depth-guided sampling strategy.
Table 7. Quantitative performance with different quantities of ray sampling.
9,504.6
2023-11-10T00:00:00.000
[ "Computer Science", "Engineering" ]
Energetic Neutral Atoms From Jupiter's Polar Regions

Energetic Neutral Atom (ENA) cameras on orbiting spacecraft at Earth and Saturn have helped greatly to diagnose those complex magnetospheres. Within this decade, the European Space Agency's Jupiter Icy Moons Explorer will make ENA imaging a major thrust in understanding Jupiter's complex magnetosphere. The present polar-orbiting Juno mission at Jupiter carries no ENA camera, but the Jupiter Energetic-particle Detector Instrument is sensitive to >50 keV ENAs, provided there are no local charged particles to mask their presence. Juno offers great service to interpreting past serendipitous and future dedicated ENA imaging, with its orbit providing unique viewing perspectives. Here we report Juno observations of ENAs from Jupiter's polar regions. These ENAs likely arise from energetic ions that nearly precipitate in the main auroral regions and that mirror magnetically within, and charge exchange with, Jupiter's upper atmosphere. Jupiter proves itself different from Saturn, where ENAs generated from precipitating ions were not identified. and from ion composition measurements just above the equatorial atmosphere (e.g., Valek et al., 2019). The Juno spacecraft, now in a polar orbit around Jupiter, does not carry an instrument designed to measure ENAs. However, its energetic particle instrument, the Jupiter Energetic-particle Detector Instrument (JEDI), can measure ENAs with energies >50 keV provided there are no charged particles around to mask the presence of the ENAs. Recently, Juno observations were reported of ENAs coming from the orbits of Jupiter's moons Europa and Io, and from the general direction of Jupiter itself. There has been uncertainty about emissions coming from Jupiter itself. The analysis of the crude Cassini images by Mauk et al. (2003, 2004) suggested that a central structure in the images was a consequence of the precipitation of energetic ions into Jupiter's atmosphere. Colleagues informally challenged this conclusion for a couple of reasons. First, in order to see Jupiter, the Cassini imager had to look through the emissions from Europa's orbit and possibly Io's orbit. The gas distributions around these orbits are likely highly structured (e.g., Smith et al., 2019; Smyth & Marconi, 2006). Structure within the images, interpreted as coming from Jupiter, could well just represent the structured emissions from the orbits of these moons. Additionally, while ENAs have been observed at Saturn coming out of the auroral regions in association with upward acceleration of ions (Mitchell, Kurth, et al., 2009), ENAs have not been identified there as resulting from ion precipitation from the magnetosphere (Mitchell, Krimigis, et al., 2009). While ENA conversions of precipitating ions are expected, they apparently do not represent a substantial contribution to the overall ENA emissions coming out of the magnetosphere of Saturn. A possible reason for differences between Saturn and Jupiter is the relative densities of neutral gases within these respective magnetospheres, as discussed further in the Summary (Section 8). At Jupiter, Juno did see ENAs coming from the general direction of Jupiter, but the region of these emissions was highly uncertain. In the present work, we report on ENA observations from a perspective much closer to Jupiter than previously described. These ENAs are clearly coming from Jupiter's polar regions.
Here we use both the ENA emissions and the corresponding in situ measurements to help identify the emission sources and processes. There are several motivations for the present report. We aspire to help interpret previous serendipitous ENA observations of Jupiter, document special features that occur in the Juno data set for the benefit of others now using Juno data for their research, and to help with the interpretation and planning for ENA observations of Jupiter to be made by the European Space Agency's Jupiter Icy Moons Explorer (JUICE) mission. JUICE imagers may not have the resolution and closeness to understand fully the source regions of emissions coming from Jupiter itself. JUICE is scheduled to arrive at Jupiter in 2029 with several advanced ENA cameras (Brandt et al., 2018; Futaana et al., 2015; Mitchell et al., 2016). In the sections that follow, we discuss the Juno and the JEDI measurement capabilities, analyze the observed ENA emissions, discuss the relationship between the ENA measurements and in situ measurements within the remotely sensed regions, and conclude with a discussion and summary. Juno and JEDI Configurations The Juno mission was launched in 2011 and was inserted into Jupiter orbit in July of 2016 with the following orbit parameters: 1.05 × 112 RJ polar (∼90° inclination), ∼53.5 day period elliptical orbit with the line-of-apsides close to the dawn equatorial meridian (Bolton et al., 2017). Following insertion, the line-of-apsides has been slowly precessing southward (∼1° per orbit) and toward the night-side (∼4° per orbit). In this study, we focus on measurements from JEDI. JEDI measures energy, angle, and composition distributions of electrons (∼25 to ∼1,200 keV) and ions (protons: ∼10 keV to ∼4 MeV; oxygen and sulfur from ∼145 keV to >10 MeV). JEDI measures atoms whether or not they are charged. Mauk, Clark, Gladstone, et al. (2020) provide an overview of the findings of the JEDI investigation over Jupiter's polar regions. JEDI consists of three independent instruments, each of which has six telescopes arranged in a ∼160° fan. JEDI-90 and JEDI-270, sensitive to both ions and ENAs, are oriented to approximate a 360° field of view within a plane roughly perpendicular to the spacecraft spin vector. JEDI-A180 measures only electrons. The full-width at half-maximum (FWHM) angular resolution of JEDI is roughly 17° × 9°, with the 17° dimension oriented along the 160° fan. In high-resolution mode, JEDI accumulates for 0.25 s at a cadence of 0.5 s (ion and electron measurements are subcommutated). Hence, given the 30-s spin period of Juno, the field-of-view is rotated by 3° during an accumulation. A 17° resolution for the telescopes is obviously much wider than one would want in an imaging instrument. However, we can determine the locations of narrow features much more accurately by centroiding the sensor response as the spacecraft spins around at a 30-s cadence. The 12 different telescopes oversample the structure by cutting through it with different rotational phasing with respect to the structure as the sensors accumulate over 3° intervals every 6°. Figure 1a shows the particular Juno orbit that is the focus of this study, viewed from the sun. For this particular period, the orbit resided roughly within the dawn-dusk meridian. The observations that we highlight are those made at the positions on Juno's orbit colored red.
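As an illustration of the centroiding approach described above, the following sketch locates a narrow feature from counts accumulated in 3° spin-phase bins by taking a count-weighted circular mean. The bin layout, response width, and count levels are hypothetical, and the real analysis additionally combines the 12 telescopes and the polynomial fits discussed below.

```python
# Illustrative sketch: locating a narrow emission feature by centroiding the
# spin-phase response of a wide (~17 deg) telescope.  Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
bin_centers = np.arange(1.5, 360.0, 3.0)       # 3-deg accumulation bins
true_azimuth = 137.0                           # hypothetical feature location

# Broad (~17-deg FWHM) response to a narrow feature, plus Poisson counting noise
sigma = 17.0 / 2.3548                          # FWHM -> Gaussian sigma
d = (bin_centers - true_azimuth + 180.0) % 360.0 - 180.0
expected = 0.2 + 8.0 * np.exp(-0.5 * (d / sigma) ** 2)
counts = rng.poisson(expected)

# Count-weighted circular mean (avoids problems at the 0/360-deg wrap)
phase = np.deg2rad(bin_centers)
centroid = np.rad2deg(np.arctan2((counts * np.sin(phase)).sum(),
                                 (counts * np.cos(phase)).sum())) % 360.0
print(f"centroid = {centroid:.1f} deg (true {true_azimuth} deg)")
```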
The spin axis of Juno points roughly toward the sun (toward Earth to be more precise), and the JEDI ion and ENA measurements all take place roughly (although not exactly) within a plane that is perpendicular to the Sun-Jupiter line. In essence, JEDI obtains a 1-dimensional, 360° image in a direction roughly normal to the Sun-Jupiter line. However, because of modest (∼10°) twists and tilts of the fields of view (to avoid looking at solar panels), a modest range of elevation angles away from that plane is also sampled during a spin. JEDI ENA Measurements The features of interest are those identified with the labels "ENAs" in Figure 1c for hydrogen and in Figure 2a for heavy ions (oxygen plus sulfur). These panels are pitch angle plots, generated with the help of the magnetic field data obtained by the Juno Magnetometer instrument. Most of the features within these panels represent charged particles associated in some way with Jupiter's auroral processes. We identify the features labeled "ENAs" on these panels as ENAs based on their ordering (or lack thereof) with respect to the magnetic field and because it is highly unlikely to see upgoing heavy ions from the magnetosphere without corresponding downgoing components. Figure S1 in the Supporting Information shows these electron features explicitly. Gérard et al. (2019) and Allegrini et al. (2020) provide broad studies of the comparison between electron precipitation and auroral emissions. Multiple panels in Figures 1 and 2 show the energy distributions for ions and ENAs. Each panel uses pitch angle filters to select out certain features. We filtered Figures 1d and 2b specifically to catch the energy distributions of the ENAs. We see hydrogen ENAs with energies extending from 50 keV up to energies high enough to occasionally illuminate the JEDI ∼195-230 keV energy channel with single counts (Figure 1d). Oxygen plus Sulfur (OS) ENAs show energies extending from 140 keV up to energies high enough to occasionally illuminate the JEDI ∼1,000-2,300 keV energy channel, with 1-2 counts (Figure 2b). We assume that the ENAs result from charge exchange between the primary ions (H+, O+, S+) and Jupiter's hydrogen atmosphere (H2). Charge exchange cross sections between our primary ions and hydrogen (H; as shown in McEntire & Mitchell, 1989, and as reproduced in the Supporting Information) show that for H+ on H the cross section is almost two orders of magnitude lower at 200 keV than it is for the 50 keV low end of JEDI and for the 50-80 keV ENA observations obtained by Cassini (Mauk et al., 2003). They also show that the O+ on H cross section at 1,000 keV is not quite one order of magnitude lower than it is for the 140 keV low end of JEDI. [Figure 1 caption, fragment: The colors represent different UV spectral bands, with red, green, and blue tending to represent the consequences of high, medium, and low energy electron precipitation (see Gladstone et al., 2017). Jupiter's planetary pole is indicated with a small white circle, and the yellow dot shows the average direction of the sun relative to that polar position during the image accumulation. Overlaying the image is the trajectory of Juno mapped along magnetic field lines to Jupiter's upper atmosphere using the JRM09 internal magnetic field model (Connerney et al., 2018) combined with an explicit model of the external field (Connerney et al., 1981). UVS accumulated the image during the portion of the trajectory shown thicker than the rest of the trajectory. (c) Time versus pitch angle versus intensities for charged and neutral 50-4,000 keV hydrogen atoms; the blue bars just above this panel show where JEDI observed the downward auroral electrons associated with the UV main auroral emissions, and the shortest bar on the right is where the electron intensities were most intense. (d) Time versus energy versus intensities for hydrogen atoms or ions with pitch angles between 20° and 67°. (e) Same as (d) but for pitch angles between 165° and 180°.] Future work will examine whether or not it is reasonable to observe H and OS ENAs with energies as high as those reported here on the basis of our assumed source. We filtered the other energy spectrograms (Figure 1e for H and Figure 2c for OS) to capture downward precipitating ions (pitch angles between 165° and 180°). In the polar cap regions (defined here as simply the regions poleward of the main aurora; see Mauk, Clark, Gladstone, et al., 2020), the OS energy distributions (Figure 2c) reveal OS ions that have been accelerated downward electrostatically to megavolt energies for the period extending roughly from 0755 to 0825. Clark et al. (2017) discovered these potentials and Mauk, Clark, Gladstone, et al. (2020) reported on their extent and persistence. The energy width of the feature is broad in part because of the multiple charge states of these heavy ions and in part because of the width of the JEDI channels. This feature is also observed in the protons (Figure 1e) but is not as well characterized there because JEDI had only one broad proton energy channel that measures energies greater than 1,000 keV for this time period. We show these distributions because it will be tempting to attribute the ENA emissions as resulting from these downward accelerated and precipitating ions. We must exercise care, however, in that, over broad regions of the polar cap observed here, the energies of those ions do not include the energies of most of the OS ENAs, in the 140-300 keV range where the OS ENA intensities are greatest. The final two panels, Figures 2d and 2e, show the directionality of the ENAs in a jovicentric Cartesian coordinate system, the Jupiter-Sun-Orbit (JSO) system. Here, JSO-X points toward the sun, JSO-Y points roughly duskward within the plane of Jupiter's orbit, and JSO-Z completes the orthogonal triad. Looking from the sun, the azimuth angle (Figure 2d) and the elevation angle (Figure 2e) characterize the ENA arrival directions; the counting statistics indicate significant uncertainties. Figure 3c shows the centroids of the OS azimuth angles for each 30-s period. The solid blue circles are judged to be the most reliable, while the open blue circles, derived from very few counts, are much less so. The solid blue line is a polynomial fit to the solid blue circles. The solid red line is a polynomial fit to the corresponding hydrogen azimuth centroids (not shown). The difference between H and OS fits provides some measure of the uncertainties in the centroid azimuth angles (say roughly ±3°). Finally, Figure 3d shows the centroids of the OS elevation angles together with a linear fit to those points. We judge the error in elevation angles to be something like ±5°. The speed of the spacecraft (50-56 km/s for the time period of interest) is not infinitesimal with respect to the speeds of the energetic neutrals (∼4,500 km/s for 100 keV H and ∼1,770 km/s for 250 keV O). A consequence is that the azimuth angles are slightly aberrated, requiring adjustments of roughly 1.6° and 0.6° for O and H, respectively.
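As a rough consistency check on the aberration correction quoted above, the sketch below computes non-relativistic ENA speeds from their kinetic energies and the aberration angle arctan(v_sc/v_ENA), assuming for simplicity that the spacecraft velocity is perpendicular to the ENA arrival direction; it returns speeds and angles of the same order as the ∼4,500 and ∼1,770 km/s and the ∼0.6° and ∼1.6° values given in the text.

```python
# Order-of-magnitude check of the aberration correction (simplified model:
# spacecraft velocity assumed perpendicular to the ENA arrival direction).
import math

KEV = 1.602176634e-16          # J per keV
AMU = 1.66053906660e-27        # kg
V_SC = 53.0                    # km/s, mid-range Juno speed quoted in the text

def ena_speed_km_s(energy_kev: float, mass_amu: float) -> float:
    """Non-relativistic speed v = sqrt(2E/m), returned in km/s."""
    return math.sqrt(2.0 * energy_kev * KEV / (mass_amu * AMU)) / 1e3

for label, e_kev, m_amu in [("H (100 keV)", 100.0, 1.0),
                            ("O (250 keV)", 250.0, 16.0)]:
    v = ena_speed_km_s(e_kev, m_amu)
    aberration = math.degrees(math.atan2(V_SC, v))
    print(f"{label}: v ~ {v:,.0f} km/s, aberration ~ {aberration:.1f} deg")
# Output is roughly 4,400 km/s / 0.7 deg for H and 1,700 km/s / 1.7 deg for O,
# of the same order as the ~4,500 / ~1,770 km/s speeds and the ~0.6 / ~1.6 deg
# adjustments quoted above.
```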
ENA Viewing Analysis The viewing analyses provided in Figures 4 and 5 show that these ENA emissions do indeed come from Jupiter's polar regions. The term "polar regions" as used here includes the polar cap, the main aurora, and, depending on one's definitions, the regions equatorward of the main aurora that still support robust auroral emissions. Figures 4a and 4b show where the JEDI viewing directions (green lines) encounter Jupiter's atmosphere (red dots) in the JSO coordinate system. To carry out these viewing analyses, we started with an expression for Jupiter's flattened shape in the form X² + Y² + a²Z² = (R eq)². Here, X and Y are equatorial coordinates, Z is the polar coordinate, R eq is the equatorial radius of Jupiter (71,492 km), and "a" (= 1.069375) is the ratio of the equatorial radius to the polar radius (66,854 km). This expression matches the 1-bar radius calculated using measured occultations of Jupiter over a broad range of latitudes to within errors ranging between 1 and 23 km, depending on latitude (Helled, 2011). Given that the average auroral-arc emission altitude above the 1-bar level is of order 245 km (Vasavada et al., 1999), we have arbitrarily chosen 1,000 km above the 1-bar level as the altitude where the less-penetrating ions interact strongly with the upper atmosphere for the viewing calculations shown in Figures 4 and 5. It may be puzzling why the emissions are confined in elevation, resulting in what might be described as a curved line of emission in Figure 4b. Because we trust only the centroids of the observations averaged over a spin, and because the counts are too low to trust fully any one measurement (hence the use of the polynomial fits), we have essentially reduced the emission profile to a line. The observations shown in Figures 2d and 2e do show clear confinements in both azimuth and elevation. However, before one decides that the emission regions themselves are so confined, one must be aware of an additional constraint on whether JEDI can see the emissions. Specifically, Juno must reside within what we will call here the "cones of emission" as discussed here and in Section 7.2. Figure 5a shows the viewed positions on Jupiter's atmosphere in the Jupiter-fixed latitude and longitude system (a right-handed System III coordinate system) using the azimuth and elevation choices described earlier in this section. Here the zero-degree longitude is along the plus x-axis, and east-longitude (a right-handed longitude) increases in the counter-clockwise direction. Overlaying these plots is a tracing of the auroral emissions shown in Figure 1b. This tracing does not include the enigmatic, unexplained, and informally named "red aurora" (based on standard false-color presentations) that fills large fractions of the polar cap. We show three different plot symbol types along the emission line to identify potential ENA emission points. They are as follows. • Colored dots: The two clusters of colored dots are the ones that correspond to the most reliable ENA emission measurements and those positioned most reliably. Note that the red dot on the left (dot number 6 on the panel) is obscured by the overlaying green dot (number 8). • Crosses: We consider these symbols, at the extreme ends of the line of dots, to be unreliable. They are included only for completeness, as future studies might conclude that they are from different sources (note the discontinuity at the extreme left of Figure 3c). These points are unreliable for two reasons.
(1) They are based on very low count rates (Figures 3a and 3b at the left-most and right-most extremes). (2) And, the derived viewing directions are close to being tangent to Jupiter's atmospheric surface. Because of (2), the positioning of these dots in Figure 5 are highly sensitive to errors in the viewing directions. By adding appropriate error perturbations to the view directions, we can move those points uncomfortably large distances across Jupiter • Stars: These symbols correspond to the center regions of our measurements where the overlaying high ion intensities block any ENA measurements in Figures 1 and 2. We positioned these dots by interpolation using the polynomial fits. These points may or may not exist. Also, if they do exist, their estimated positions may be wildly inaccurate. We include them here, again, for completeness In the discussions and analyses that follow, our concern will be only with the reliable colored dots. Note that in Figure 4, all of the view directions shown in Figure 5a are included. Because, as discussed in Section 7.2, the probability of ENA emission from Jupiter's atmosphere depends on the pitch angle of the ions that are being converted, we have added an additional element in Figure 5a; the pitch angles of the ions just as they were converted to ENAs. Those pitch angles are printed at the bottom left of the panel for the four colored dots on the left and at the bottom right for the four colored dots on the right. We use the JEDI viewing directions and the vector magnetic field directions at the 1,000 km altitudes based on the latest magnetic field model of Jupiter's internal field, JRM09 (Connerney et al., 2018). Because our mental picture of these emissions has the ions locally mirroring within a very thin upper atmosphere, we expect that the pitch angles should be very close to 90°. However, because the pitch angles evolve so quickly away from 90° over very short distances, we will show in the discussion Section 7.2 that pitch angles as much as 10°, and perhaps even 15°, away from the 90° values are expected. Figure 4a shows that all of the observed ENAs move upward away from Jupiter's atmosphere following conversion. Because Jupiter's magnetic field points upward away from Jupiter in the northern polar regions, there will be a tendency for the observed ENAs to have arisen from ions with pitch angles that are <90°. However, the field lines can tilt substantially away from the Juno view point so that even ENAs that are moving away from Jupiter's atmosphere can come from ions with pitch angles >90°, as is observed to have occurred for one of the eight colored points in Figure 5a. For a purely dipolar magnetic field configuration, we expect that the field will tilt as much as 12° from the vertical in the main auroral regions. For the highly distorted fields actually observed in the north polar regions (Connerney et al., 2018), larger tilts certainly occur and with varying directionalities. Figure 5a suggests that the cluster of four colored dots on the left come from the main auroral regions, perhaps with a polar cap contribution. The cluster on the right appears to come from a combination of main aurora and polar cap sources. However, before making such a determination, we must examine possible imaging errors. In Figure 5b, we examine as examples the imaging errors for just two of the image points from Figure 5a; points 2 and 6 (see the numbers of the colored dots in Figure 1a; the red dot 6 is mostly hidden by the green dot 8). 
Here we have added and subtracted all possible combinations of azimuth errors (+3°, 0°, −3°) and of elevation errors (+5°, 0°, −5°), yielding a total of 9 points for each image measurement. In addition, the size of the dots depends on how close to 90° were the emitting ions at the time of their conversions to ENAs. The largest dots are for pitch angles within 10° of the 90° angle. Medium size dots are for pitch angle between 15° and 10° away from the 90° angle. Finally, the smallest dots are for pitch angles more than 15° away from the 90° angle. In Section 7.2, we show that we expect robust emissions associated with the largest dots, perhaps modest emissions for the intermediate sized dots, and likely no observable emissions for the smallest dots. Note that the fact that on Figure 5b, the possible viewing positions for point 2 extends into the subauroral regions does not mean that any ENAs are coming from that region. It means that for that particular measurement there are multiple possibilities as to where the ENAs are coming from. Figure 5c now adds some complexity by examining the two extreme points from each cluster of four points in Figure 5a. Those points are 1 and 4 on the right, and 5 and 9 on the left. Finally, for completeness, we include all points in Figure 5d, although different points are falling on top of each other here, making it difficult to interpret completely that plot. It appears from Figure 5 that all of the image points are consistent with a main aurora source. That is, the main aurora is a possible source for each of the eight imaged points. In addition, two of the points (7 and 8) most likely can only have come from the main aurora since the extension into the polar cap appears with only small dots (meaning that emissions from there would be outside of the cones of emission). However, that also means that six (or five, excluding point 1) of the eight points might possibly have come from the polar cap. Finally, only two of the eight imaged points have the subauroral region as a possible source. Putting all of that together, we can probably exclude the subauroral region as the source of the ENAs. Based on Figure 5, the source may come from a combination of the main aurora and the polar cap. In the discussion-section (Section 7.1), we will argue against the polar cap as being a source for the observed ENAs based on the ENA energies. However, if it turns out that the stars in Figure 5a are a reality as roughly positioned, then the polar cap must play role in the ENA emissions. Figure 6 shows all of the examples of near-Jupiter, polar ENA emissions that we have found examining 26 science orbits of Juno. All of them are from the northern hemisphere. How Juno cuts through the system strongly affects the statistics on ENA viewing, specifically affecting the background populations that can mask the ENAs. The statistics also depends strongly on viewing perspective. Figure S2 in the Supporting Information section shows how asymmetric the observed northern and southern environment can be. That asymmetry results in part from north-south asymmetries in the Juno trajectory and possibly from northsouth asymmetries in Jupiter itself. Specifically, the northern hemisphere at atmospheric altitudes has generally higher magnetic field strengths than does the south (Connerney et al., 2018), changing the location of the interfering charged particle populations relative to Juno. 
In the north, the rotational phase of Jupiter often determines whether charged particle populations will mask any possible ENA emissions, given the tilt in Jupiter's magnetic axis. We have not analyzed in any detail the conditions needed for Juno's trajectory to be clear of contaminating charged particles. How Common are the Polar ENA Emissions? The issue of viewing perspective, mentioned above, has to do in part with the plane within which Juno flies. The view planes of JEDI-90 and JEDI-270 are very roughly perpendicular to the direction to the sun. Early in the mission, Juno's trajectory was also roughly normal to the sun line. With that configuration, Juno views an emission point on Jupiter's atmosphere from multiple directions as it flies overhead. However, when Juno's trajectory is rotated to being closer to the noon-midnight meridian, emission points tend to be viewed only from one perspective, a perspective that may not be within the cones of emission (fairly close to 90° pitch angles, see Section 7.2). Therefore, we expect to see more ENA emissions early in the mission rather than later. We saw near-Jupiter, polar ENA emissions from five of the first 15 science orbits (16 orbits minus orbit number 2 where Juno collected no science data). Out of those 15 orbits, only eight were sufficiently clear of charged particle populations such that we would expect to detect ENA emissions. From that perspective, Juno observed polar ENAs during 62% of the available orbits. If one ignores our arguments about viewing perspective and uses the entire mission (to date), then Juno observed ENAs during 36% of the available orbits. Auroral Structure and ENAs Jupiter's aurora is different from Earth's in several important ways . One key way is that strong auroral emissions occur both in the regions of apparent upward electric currents and downward electric currents. At lower latitudes, there is the region of diffuse aurora like at Earth, sometimes containing structures caused by injections. At higher latitudes, there is what has been termed the "Zone I" region with primarily downward electron acceleration, sometimes accompanied by downward electron inverted-V's, but more often comprising broadband acceleration. At still higher latitudes is what has been termed the "Zone II" region with bidirectional electron acceleration that often favors upward acceleration. However, the downward electron energy fluxes in this region are just as likely to generate the brightest aurora as is Zone I. While it has been demonstrated explicitly for only one auroral crossing, it is assumed that Zone I is the region of upward electric currents and Zone II is a region of downward electric currents. Poleward of Zone II is the polar cap, which might be different from Zone II only quantitatively rather than qualitatively (see discussions in Mauk, Clark, Gladstone, et al., 2020). Figure S1 in the Supporting Information section provides some information about the auroral regions associated with the observations shown in Figures 1 and 2. The reason this discussion of auroral structure is important is that Zone II, a region of bright auroral emissions, can also be a region of intense downward precipitating ions. That condition is apparent in the righthand portions of Figures 1c and 2a. We view that region as one of the prime candidates for providing the precipitating ions that end up as the observed ENAs. This source would generally come from the poleward portion of the main aurora. 
However, irrespective of these arguments about auroral structure, Figures 1c and 2a demonstrate that the regions of the main aurora, indicated with the light blue bars above these panels, are regions of trapped and precipitating ions that can serve as the source populations for the observed ENAs. The polar cap regions, poleward of the main aurora, are also possible sources. Downward accelerated and precipitated ion populations (occupying the downward loss cone) are certainly present. Our reticence in identifying this region as a source stems from the fact that the energies of the observed precipitating OS populations (Figure 2c) appear to be generally higher than the energies at which the prime ENA emissions are observed (Figure 2b). However, the extent to which the upper atmosphere can degrade the energies of the ions through multiple interactions while the emerging ENAs still retain observable intensities needs to be investigated. In the main auroral regions in the right-hand side of Figures 1c and 2a, the size of the loss cone can be seen by the sharp cutoff of the ion populations (overlaying the ENA features) at lower pitch angles, corresponding to the upward direction. For example, at about 0831, the size of the loss cone is about 22°. When you look at the other end of the field line corresponding to downgoing ions (e.g., the top portions of Figures 1c and 2a at about 0831), it is the particles with pitch angles at Juno close to 180° − 22° = 158° that will locally mirror within Jupiter's atmosphere (with near 90° pitch angles at that location). At that end of the field line, we also see that the loss cone is completely filled. Therefore, to within JEDI's ability to resolve the angular distributions, all possible precipitating ion pitch angles are available at that time for generating ENAs. As discussed in the next section, the likelihood of an ion with any one pitch angle to generate an observable ENA depends on the amount of time that the ion spends within the upper atmosphere. That likelihood maximizes for those locally mirroring ions. In the polar cap regions, observed at higher altitudes, JEDI does not have the resolution to fully resolve the very narrow downgoing ion beams. All we can say is that there are downgoing ions apparently within the loss cone, but the exact pitch angles that are available within the upper atmosphere are poorly determined. Ion Pitch Angles at the Time of Conversion to ENAs One anticipates that the pitch angles of the ions at just the time of conversion to ENAs will be close to 90° as the ions mirror magnetically within the upper atmosphere. But it is less obvious just how far from 90° the converted ions are expected to be. It turns out that the pitch angles migrate extremely rapidly as the ions rise up from their mirror points and still remain within the relatively dense regions of the upper atmosphere. For these discussions, we will make no distinction between the pitch angles of the ions just prior to their conversions to ENAs, and the "pitch angles" of the ENAs just following their conversions. For the energies involved here, the velocity vectors of the ENAs just an instant following the time of their conversions are essentially the same as the velocity vectors of the ions just prior to their conversions. The scale height of the nominal atmosphere is roughly 150 km between altitudes of 1,000-2,000 km (above the 1-bar level), derived using the H2 profile of Gladstone et al. (2004).
Within the part of the upper atmosphere heated by auroral processes, the scale height is expected to be higher (Grodent et al., 2001), but for our discussions, we will use the conservative 150 km value. Given a magnetic field strength that varies as 1/R 3 , the pitch angle of an ion that mirrors at one altitude (R mir ) will migrate (in the northern hemisphere) from 90° to 83.6° at the altitude of R mir + 150 km, just one scale height above the mirror point. Using this kind of information, one may estimate the probabilities of emission for various pitch angle ranges. Let us assume that an ion has a 50% chance of surviving its downward plus upward traversal of just one scale height above its mirror point. We also assume that the upward magnetic force on the particle is roughly constant within these low altitude regions of reflection and that the field lines are vertical within the atmosphere. Both of these assumptions are conservative for the points that we are trying to make. Given these conditions, one may use the classical kinematic equation: distance = acceleration × (time 2 )/2, and the exponential falloff of the atmosphere, to estimate the time that the ion spends within each scale height of the atmosphere, and the probability of emission within each of those regions. The estimates are as follows. There is a 25% chance that the ions mirroring at the designated position will emit ENAs within the first scale height with pitch angles between 90° and 83.6° (again, for the northern hemisphere). There is a 7.6% chance that ENAs will be emitted within the second scale height with pitch angles between 80.9° and 83.6°. Continuing on, the probability and pitch angle ranges are: 2.2% for the range 78.9° and 80.9°, and 0.67% for the range 77.2° and 78.9°. For ions that mirror at different altitudes, the absolute emission probabilities will be different, but the relative emission probabilities for different pitch angles will stay the same provided the atmospheric scale height is the same at that new altitude. From this analysis, it is certainly reasonable to expect ENA emission pitch angles between 80° and 90°, and perhaps down to the mid-70°'s. Note that if the field lines lean substantially away from the viewing position, the pitch angles greater than 90° are possible (considering only the northern hemisphere). There is symmetry in the corresponding calculations (assuming that the field line is tilted enough so that the ENA always moves toward greater altitudes) such that the emission probability of an ion at a pitch angle of 90° − delta° is the same as that of one at 90° + delta°. A relaxation of our worst-case assumptions (e.g., scale height within the auroral regions) pushes the pitch angles to even lower values. For scale heights of 150 km (used above), 200 km, and 250 km, the pitch angles at the top of the fourth scale height with still modest emission probabilities, are 77.2°, 75.2°, and 73.5°. Also, if one includes the slight tilt of the magnetic field lines with respect to the vertical, the ions will spend a little more time within each scale height, and the probability of emission will go up slightly according to 1/ Cosine of the tilt angle. Note that only one narrow range of emission pitch angles is visible at any one time for a specific image point, depending on the viewing position of the observation. 
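The kinematic ingredient of the estimate above can be sketched as follows: with a roughly constant upward mirror force and distance = acceleration × time²/2, the time an ion spends between heights (n−1)H and nH above its mirror point scales as √n − √(n−1), so the dwell time, and hence (once the exponential density falloff and the assumed 50% first-scale-height survival are folded in) the conversion probability, drops steeply with each successive scale height. The sketch below reproduces only this kinematic part with a placeholder acceleration; it is not a reproduction of the quoted 25%, 7.6%, 2.2%, and 0.67% probabilities.

```python
# Kinematic ingredient of the emission-probability estimate: the time spent in
# each atmospheric scale height above the mirror point for a constant upward
# mirror force.  The acceleration is a placeholder; only relative times matter.
import math

H_KM = 150.0      # atmospheric scale height used in the text (km)
ACCEL = 1.0       # placeholder upward acceleration (arbitrary units)

def time_to_height(s_km):
    """From distance = a * t**2 / 2, starting with zero parallel speed at the mirror point."""
    return math.sqrt(2.0 * s_km / ACCEL)

t_first = time_to_height(H_KM)
for n in range(1, 5):
    dwell = time_to_height(n * H_KM) - time_to_height((n - 1) * H_KM)
    density_at_base = math.exp(-(n - 1))   # relative to the mirror-point density
    print(f"scale height {n}: relative dwell time {dwell / t_first:.2f}, "
          f"relative density at its base {density_at_base:.2f}")
# Dwell times fall off as sqrt(n) - sqrt(n-1) (1.00, 0.41, 0.32, 0.27) while the
# density drops by a factor e per scale height, so conversions are strongly
# concentrated in the first scale height above the mirror point, as argued above.
```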
ENAs may be copiously emitted from a viewed spot on Jupiter's atmosphere, but if the view direction has an angle with respect to the magnetic field within the upper atmosphere that is not within the "cones of emission" (pitch angles between, say, 75° and 105° for the northern hemisphere), then JEDI will see no emissions. We used these numbers to make the choices in Figures 5b-5d for the sizes of the dots. All of the calculations in this section assume that the ions that result in observable ENAs have single interactions with the upper atmosphere. Multiple interactions (e.g., charge exchange neutralization followed by stripping ionization followed by another charge exchange neutralization, etc.) can certainly happen. It is not obvious whether such multiple interactions will substantially change the calculations performed here. That question will be the subject of future studies. Summary and Closing Remarks We have observed ENAs (>50 keV) emanating from Jupiter's polar regions. They are likely emanating from nearly precipitating ions that mirror within Jupiter's upper atmosphere and are converted there to ENAs through the charge exchange process. Spatial imaging shows that they are likely coming from positions in Jupiter's polar region of auroral acceleration and precipitation processes, and specifically from either the main aurora or the polar cap. The main aurora is favored for several reasons. The main aurora is a region where ions precipitate intensely with a configuration not anticipated from studies of Earth's aurora. Ions do precipitate within the polar cap poleward of the main aurora, but the ion energies observed there (for OS) do not match the energies of the observed OS ENAs for the event studied. It is, however, an open question whether the atmospheric interactions (with multiple charge exchanges and re-ionizations) can degrade the energy of the precipitating ions and result in ENA emissions of observable intensities. Crude estimates of the probability of seeing ENAs suggest that observable polar region ENA emissions are common (36%-62% of the time) but variable. Such emissions will be an important component of ENA imaging from the JUICE mission in diagnosing global dynamics of Jupiter's magnetospheric system. These measurements support the proposal from Mauk et al. (2003, 2004) that ion precipitation into Jupiter's atmosphere caused a central structure in the crude ENA imaging obtained by the Cassini spacecraft flyby of Jupiter. Finally, Jupiter appears to be different from Saturn in this respect. Ion precipitation-generated ENAs were not an identifiable feature in the quality ENA imaging of the Saturn system (Mitchell, Krimigis, et al., 2009). Saturn's magnetosphere is different from Jupiter's in that it contains much more copious densities of neutral gas (from the plumes of the moon Enceladus), as evaluated by Dialynas et al. (2003) using ENA imaging (and references therein). These neutrals are not ionized to the extent that gases are at Jupiter, leading to much larger neutral-to-ion ratios. Also, Vasyliunas (2008) points out that the gas input rate at Saturn, when appropriately normalized to other magnetospheric parameters, loads Saturn substantially more than the loading that occurs at Jupiter. We presume that while ion precipitation may be a competitive or dominant loss process for energetic ions at Jupiter, ion precipitation may be much less competitive at Saturn, where charge exchange losses near the equator likely dominate.
Data Availability Statement The data presented here are available from the Planetary Plasma Interactions Node of NASA's Planetary Data System (https://pds-ppi.igpp.ucla.edu/). Also, ASCII dumps with header documentation have been generated for each panel of the JEDI data displayed in this study and are accessible at Zenodo (https://doi.org/10.5281/zenodo.4244653). The JEDI display software used here is available online and can be accessed by contacting the lead author. A 1-h teleconference tutorial provided by the lead author or his designate is generally sufficient for a user to gain sufficient expertise to proceed.
8,918
2020-12-01T00:00:00.000
[ "Physics" ]
Tunable Nb superconducting resonators based upon a Ne-FIB-fabricated constriction nanoSQUID Hybrid superconducting-spin systems offer the potential to combine highly coherent atomic quantum systems with the scalability of superconducting circuits. To fully exploit this potential requires a high quality-factor microwave resonator, tunable in frequency and able to operate at magnetic fields optimal for the spin system. Such magnetic fields typically rule out conventional Al-based Josephson junction devices that have previously been used for tunable high-Q microwave resonators. The larger critical field of niobium (Nb) allows microwave resonators with large field resilience to be fabricated. Here, we demonstrate how constriction-type weak links, patterned in parallel into the central conductor of a Nb coplanar resonator using a neon focused ion beam (FIB), can be used to implement a frequency-tunable resonator. We study transmission through two such devices and show how they realise high-quality-factor, tunable, field-resilient devices which hold promise for future applications coupling to spin systems. I. INTRODUCTION A large and growing variety of spin systems have been coupled to superconducting resonators, including ensembles of non-interacting spins in silicon [1], diamond [2] and other materials [3], magnons in ferrimagnets [4] and chiral magnetic insulators [5], and individual spins in quantum dot devices [6]. Motivations for such studies include long-range coupling of spin qubits [7], realisation and study of topological systems [8], long-lived microwave quantum memories for superconducting qubits [9,10], and the demonstration of microwave-to-optical conversion at the single-photon level [11]. Planar superconducting circuits provide a robust, well-studied [12] and scalable architecture [13] for such hybrid systems, and superconducting resonators with Q-factors over 1 million have been achieved [14]. However, for the majority of the applications described above, externally applied magnetic fields from ∼10 mT up to several hundred mT, or more, are required to bring the spin systems into a suitable regime of interest. Furthermore, control of the spin-resonator coupling, often on short timescales, is required in applications such as quantum memories, and may be achieved, for example, by frequency-tuning of the resonator. Superconducting quantum interference devices (SQUIDs) act as flux-tunable inductors and have been successfully incorporated into resonators to provide frequency tunability [15][16][17]. The SQUID inductance is tuned from its minimum to maximum by an additional half flux quantum threading the SQUID loop, thus altering the resonator frequency. This means that small local fields provided by on-chip flux lines are able to tune SQUID-embedded resonators on timescales of a few nanoseconds [16]. Technologies which use DC currents to tune the kinetic inductance and hence resonant frequency of resonators have also been developed [18,19]. Previous SQUID-tunable devices have been fabricated from aluminium with shadow-evaporated junctions [16] or used Nb/Al/AlOx/Nb trilayer junctions [15]. Al devices may suffer from the low critical field of Al and are expected to have poor magnetic-field resilience. AlOx tunnel junctions may introduce extra losses to resonators and limit quality factors. An alternative technology is based on nanoSQUIDs formed by superconducting-nanowire-based constriction junctions; these have already been shown to possess exceptional field resilience [20].
NanoSQUIDs are commonly fabricated [21] by a Ga-based focused ion beam (FIB); however, this technique has been shown to induce loss into superconducting resonators [22]. Here, we use a neon FIB (a technique shown to be compatible with high quality superconducting resonators [23]) to create constrictions within the centre conductor of a Nb superconducting λ/4 co-planar resonator. These constrictions have a width of 50 nm and are placed in parallel such that they complete a superconducting loop between the current antinode of the resonator and the ground (Fig. 1). We study the microwave transmission of this device and demonstrate that it realizes a superconducting-nanowire-based frequency-tunable and field-resilient resonator with high quality factor, > 9 × 10 3 at applied fields of up to 0.48 mT, perpendicular to the resonator plane. We determine that flux-focusing increases the local field around the SQUID to approximately 59 mT, which provides evidence that our structures could withstand much higher applied fields with a different ground-plane design. Furthermore, the resilience to in-plane magnetic fields is expected to be much greater, given that Nb resonators with Q ∼ 2.5×10 4 in 160 mT in-plane fields have already been demonstrated [24]. Overall, this suggests that such tunable Nb resonators are an attractive candidate technology for hybrid quantum systems [1][2][3][4][5][6][7] as well as for high-sensitivity electron spin resonance (ESR) [25]. II. EXPERIMENTAL Co-planar resonators were fabricated by etching thin films of superconductor using a similar method to that described in Ref. [23]. Here Nb is used, instead of NbN, because of its longer coherence length of 38 nm [26], which sets the length-scale for the width of constrictions needed to make junctions; Nb thus allows ≈50 nm constrictions to be used, which is easier to achieve than the ≈20 nm constrictions required in NbN. Nb also has a lower kinetic inductance (hence lower impedance and larger zero-point current fluctuations), which could enable stronger magnetic coupling of the Nb resonator to spins. Nb films were deposited on Si substrates by DC magnetron sputtering from a 99.99%-pure elemental Nb target in argon. The pressure before deposition was 6×10 −7 mbar and during deposition was 3.5×10 −3 mbar. The sputter power was 200 W, with the deposition timed to produce a 50-nm-thick film. Quarter-wave (λ/4) resonators (see Fig. 1a) with an embedded superconducting loop at the grounded end of the resonator were patterned by electron beam lithography (EBL) (Fig. 1b) in the same lithography step as a microwave feed line. This pattern was transferred from resist to the film by a reactive ion etch (RIE) process using a 2:1 ratio of SF6 to Ar, at 30 mbar and 30 W for 120 s. The RIE process additionally etches exposed Si to a depth of 500 nm, leaving areas with Nb raised above their surroundings. The superconducting loop has two constrictions (see Fig. 1b and c): broad constrictions are defined in the initial EBL exposure and subsequently narrowed to ≈ 50 nm by Ne FIB milling, in which a beam of Ne ions, accelerated to 15 kV, mills through the Nb. A dose of ≈2 nC µm −2 is used. On the same chip, 21 out of 22 constrictions milled were still intact after narrowing to a dimension approaching the coherence length, suggesting a high yield for this part of the processing; see supplementary materials for details.
Resonators are measured at a temperature T ≈ 300 mK in a 3He cryostat with a heavily attenuated microwave in-line and a cryogenic high-electron-mobility transistor (HEMT) amplifier on the microwave out-line. The S21 transmission of microwaves through the feedline (to which the resonator is capacitively coupled) is measured using a Rohde & Schwarz ZNB8 vector network analyzer. Perpendicular magnetic fields are applied by a superconducting magnet connected to a precision current source (Keithley 2400 SourceMeter). Samples are enclosed in a brass box lined with Eccosorb CR-117, a microwave absorber, to reduce the number of quasiparticles excited by stray IR photons, which we have previously shown can have a significant effect on superconducting constrictions [23]. III. RESULTS A. Tunability Fig. 2 shows the magnitude response of S21 from the λ/4 resonator with no applied magnetic field, measured at T = 307 mK and an applied microwave power P ≈ −106 dBm. Using a traceable fit routine [27], based on fitting a circle to the resonance in the real-imaginary plane, we extract the resonator parameters including the internal quality factor (Q i ≈ 3.0 × 10 4), central frequency (ν 0 = 7.417 GHz) and coupling quality factor (Q c ≈ 2.2 × 10 5). The asymmetry of the resonance, which persists down to single photon powers, is due to impedance mismatch and is fully captured by the fit routine. To examine the field tuning of the resonator, we study its behaviour in a small perpendicular magnetic field (≤ 10 µT). Fig. 3a shows a typical field sweep which reveals the smooth tuning of the resonator towards lower frequencies as the magnetic field amplitude is increased. The full range of frequency tuning is found to be ≈100 MHz, obtained by changing the applied perpendicular field by ≈10 µT (see Fig. 3b) [28]. To analyse this behaviour, we start by assuming a Josephson-like sinusoidal current-phase relationship for the nano-constrictions, treating the superconducting loop as a DC-SQUID with a flux-tunable inductance [16], L SQUID (f) = Φ 0 /(4πI C0 |cos f|) (Eq. 1), where I C0 is the zero-field critical current of the SQUID, f = πΦ/Φ 0 is the frustration of the SQUID and Φ 0 is the flux quantum. The total impedance Z T of a transmission line terminated by a SQUID at frequency ν follows the standard input-impedance expression for a loaded line (Eq. 2), in which d is the length of the transmission line, l 0 the inductance per unit length, Z the characteristic impedance of the transmission line, Z SQUID the impedance of the SQUID, and v = 1/√(l 0 c 0) the speed of light in the transmission line, where c 0 is the capacitance per unit length. Z T is real at resonance; the resonant frequencies ν i therefore follow from the condition that the imaginary part of Z T vanishes (Eq. 3), which may be solved numerically. The fundamental resonant frequency may be expressed approximately [29] in terms of the total inductance L and capacitance C of the distributed resonator as ν 0 ≈ 1/(4√(LC)) with L = L res + L 0 SQUID + ∆L(B) (Eq. 4), where L res is the inductance of the resonator excluding the SQUID and L 0 SQUID is the zero-field inductance of the SQUID. ∆L(B) is the change of inductance of the SQUID with field which, from Eq. 1, is equal to Φ 0 /(4πI C0) × (1/|cos f| − 1). Assuming f ∝ B, an assumption which we examine further below, we can write the tuning curve (Eq. 5) in terms of two fit parameters: A, which relates the inductance of the resonator to that of the SQUID, and K, where f = KB scales field to frustration. The observed ν 0 (B) dependence of the resonator in Fig. 3 fits well with Eq. 5, allowing determination of A and K. We next consider the relation between the field periodicity of the tuning behaviour and the flux quantum. SQUID behaviour is periodic in applied flux.
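A minimal numerical sketch of this tuning model is given below. It combines the ∆L(B) expression quoted above, ∆L = Φ0/(4πI_C0) × (1/|cos f| − 1) with f = KB, with the lumped-element relation ν0 ∝ 1/√(L_tot C), so that ν0(B) = ν0(0)√(L_tot(0)/L_tot(B)). The parameter values (I_C0, L_res, L0_SQUID, K) are placeholders chosen only to illustrate the shape of the tuning curve, not the fitted device values; as discussed later in the text, this simple model over-predicts the tuning close to maximal frustration.

```python
# Sketch of the flux-tuning model: Delta L(B) from the text combined with the
# lumped-element relation nu0 ~ 1/sqrt(L_tot * C).  All parameter values are
# illustrative placeholders, not the fitted device parameters.
import math

PHI0 = 2.067833848e-15          # flux quantum (Wb)

def delta_L(B, I_c0, K):
    """Delta L(B) = Phi0/(4*pi*I_c0) * (1/|cos f| - 1), with f = K * B."""
    f = K * B
    return PHI0 / (4.0 * math.pi * I_c0) * (1.0 / abs(math.cos(f)) - 1.0)

def nu0_of_B(B, nu0_zero, L_res, L_squid0, I_c0, K):
    """nu0(B) = nu0(0) * sqrt(L_tot(0)/L_tot(B)); the capacitance cancels."""
    L0 = L_res + L_squid0
    return nu0_zero * math.sqrt(L0 / (L0 + delta_L(B, I_c0, K)))

# Placeholder parameters of roughly the magnitudes discussed in the text.
nu0_zero, L_res, L_squid0 = 7.42e9, 1.6e-9, 0.17e-9    # Hz, H, H
I_c0 = 10e-6                                           # A (placeholder)
K = math.pi / (2.0 * 10e-6)                            # maps ~10 uT to f ~ pi/2

for B in (0.0, 2e-6, 5e-6, 8e-6, 9.5e-6):              # applied field (T)
    nu = nu0_of_B(B, nu0_zero, L_res, L_squid0, I_c0, K)
    print(f"B = {B * 1e6:4.1f} uT -> nu0 = {nu / 1e9:.3f} GHz")
# The tuning steepens sharply as f approaches pi/2; the measured device tunes
# less there, for the reasons discussed later in the text.
```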
The area of the SQUID loop is A loop = 3.7±0.3 µm 2, such that 10 µT (the field required to maximally detune the resonator) corresponds to a flux BA loop ≈ 0.02Φ 0. Assuming the tuning arises from the SQUID, the field required to maximally tune the resonator implies that the local flux density at the SQUID is much greater than the 10 µT applied by the magnet. This indicates substantial flux focusing due to flux expulsion from the superconducting ground plane surrounding the resonators. We return to the topic of flux focusing later. As the resonator is detuned away from ν 0, Q i is found to drop from its maximum value, 2.8×10 4, to 1.0×10 4 when the resonator is maximally tuned. This phenomenon of decreasing Q i has previously been observed and attributed to increasing thermal noise as the SQUID is detuned [15] or increased dissipation caused by a subgap resistance [16]. An alternative explanation could be that dilute surface spins [30] induce spectral broadening of the resonance lineshape by flux-noise-based frequency jitter in these flux-tunable resonators. However, even for the highest values of flux noise in Ref. 31, which correspond to ∼100 µΦ 0, the corresponding frequency jitter would be too small to create sufficient spectral broadening to explain the drop in Q i observed here. The source of these extra losses is the subject of ongoing work. B. Hysteresis and Premature Switching When tuning the constriction-SQUID resonator over more than one period, as shown in Fig. 3c and d, a hysteretic behaviour is seen, similar to that previously reported in Al constriction SQUID resonators [17]. In addition, frequency jumps are observed at values of detuning less than the maximum value (see for example the region between −5 µT and −20 µT). We attribute jumps at non-maximal detuning to flux trapping as the field is ramped up and down. The oscillations in the internal quality factor (Fig. 3d) follow the frequency detuning of the resonator, showing that the degradation in Q i with magnetic field arises from the state of the SQUID and not from the properties of the resonator in a magnetic field. The hysteretic tuning of the resonant frequency and internal quality factor may be explained by significant self-inductance of the superconducting SQUID loop. The SQUID has a characteristic parameter β L = 2L loop I C /Φ 0 (where L loop is the inductance of the loop and I C is the Josephson critical current). When β L ≳ 1, the SQUID behaviour becomes hysteretic with applied flux, as shown in Fig. 4a. The red path in Fig. 4b maps out the flux within the SQUID as the field is increased (assuming zero temperature, in the absence of fluctuations). At extremal points, the flux threading the SQUID exhibits discontinuous jumps as Φ app is ramped upwards, occurring periodically with a period of 2Φ 0. At finite temperature, thermal fluctuations cause these jumps to occur at a temperature-dependent flux less than that at T = 0. Using this 2Φ 0 periodicity, we are able to calibrate local fields at our device and experimentally quantify flux focusing in these hysteretic devices. Jumps occur every 9.7 µT (averaged over 4 consecutive jumps in Fig. 5), which corresponds to BA loop ≈ 0.016Φ 0. Identifying the jumps as 2Φ 0 -periodic features, we infer a flux-focusing F ≈ 124, where we have defined F by B local = FB. Significant flux focusing from superconducting ground planes has recently been investigated theoretically and experimentally in Ref. 33, where simulations gave F ≈ 27.5.
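The flux-focusing calibration above amounts to the arithmetic sketched below using the quoted jump spacing and loop area. With the central values it gives an applied flux per jump of ≈0.017 Φ0 (close to the ≈0.016 Φ0 quoted) and F ≈ 115, moving to F ≈ 125 at the lower end of the stated loop-area uncertainty, which brackets the quoted F ≈ 124.

```python
# Flux-focusing factor from the hysteretic jump spacing: identifying the jumps
# as 2*Phi0-periodic gives F = 2*Phi0 / (DeltaB_applied * A_loop).
PHI0 = 2.067833848e-15      # flux quantum (Wb)
A_LOOP = 3.7e-12            # m^2, quoted loop area (3.7 +/- 0.3 um^2)
DB_JUMP = 9.7e-6            # T, average applied-field spacing of the jumps

flux_per_jump = DB_JUMP * A_LOOP            # applied flux between jumps (Wb)
print(f"applied flux per jump ~ {flux_per_jump / PHI0:.3f} Phi0")   # ~0.017
print(f"flux focusing F ~ {2.0 * PHI0 / flux_per_jump:.0f}")        # ~115

# Repeating with A_loop at its lower bound (3.4 um^2) gives F ~ 125, so the
# quoted F ~ 124 sits within the stated loop-area uncertainty.
```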
The extent of flux focusing is specific to device geometry; for example, features designed to trap flux can have large effects on F. The change in applied flux required to maximally detune the resonator frequency is determined by the β L value of the SQUID (see Fig. 4c). At finite temperature, the flux in the SQUID jumps before reaching the point of instability shown in Fig. 4, and so the experimentally measured jump positions provide a lower bound on β L. We thus infer that β L > 3.4 for our device. Using Eq. 1 and the fit parameter A (which relates the inductance of the resonator and of the SQUID), we can calculate the expected tuning of the resonator in an applied magnetic field based on a sinusoidal current-phase relation (Fig. 4d). The smooth tuning shown in Fig. 3a (blue line in Fig. 4b,d) and jumps seen in Fig. 3c (red line in Fig. 4b,d) are qualitatively reproduced. The calculation, however, suggests that resonant frequencies should decrease significantly in the vicinity of hysteretic jumps due to the asymptote in 1/cos f at Φ = Φ 0 /2 (f = π/2), whereas in practice these resonators tune by only ∼ 1% of their untuned frequency. Numerical analysis [34] based on Eq. 3 predicts such reduced tuning for resonators where the SQUID inductance or capacitance is a significant fraction of the total inductance or capacitance of the distributed resonator. We determine the untuned inductance of the SQUID to be approximately 10% of the total inductance of the distributed resonator (see supplementary materials for details), which in Ref. [34] is sufficient (even with small SQUID capacitance) to reduce the tuning of the resonator approaching Φ = Φ 0 /2 to only around 30%. Additionally, asymmetry between junctions allows switching at smaller detuning than in perfectly symmetric devices, as the narrower junction becomes maximally biased (and hence switches) before the wider junction becomes maximally biased. C. Magnetic field resilience In Fig. 5, the SQUID-containing resonator (black markers) and an identical resonator with no embedded SQUID (red markers) are measured as the applied perpendicular magnetic field is increased from zero to ≈0.5 mT, which corresponds to a local field at the SQUID of ≈60 mT based on the flux focusing factors calculated above. Even at these relatively large perpendicular magnetic fields, the internal quality factor of the resonators is still Q i = 9.1×10 3 and the resonator remains tunable, demonstrating the field resilience of these devices. The internal quality factors of the two resonators at zero applied field are similar: 5.0×10 4 for the SQUID-embedded resonator vs. 6.5×10 4 for the resonator without a SQUID [35]. Embedding a SQUID in our case therefore causes at most a minor degradation in the resonator Q, though its effect may be more profound on resonators with larger internal Q (≫ 10 4). We also note that film inhomogeneity can lead to small differences in zero-field quality factors for different resonators on the same chip, and therefore more statistics would be required to make quantitative statements regarding the impact of the SQUID on the resonator Q-factor at zero field. For the resonator without an embedded SQUID, ν 0 and Q i tune weakly and smoothly as the applied magnetic field is increased up to about 0.21 mT, with the exception of abrupt drops in both ν 0 and Q i at four field values up to 0.2 mT (see Fig. 5). This step-like response of Q i vs B is consistent with the formation of vortices on the superconducting resonator's central conductor [36].
For the nanoSQUID resonator, Q i modulates by a factor of about 5 with field as the resonator tunes (as previously shown in Fig. 3). In addition to the modulation with tuning, the maximum Q i also drops with applied field. The magnitude and field scale of this drop is similar to that of the bare resonator. The respective Q i (B) dependences for the SQUID-embedded and bare resonators are consistent with the maximum of the field-modulated Q i in the SQUID-embedded resonators being limited, as B increases, by the same physics which causes the step-like reductions in Q i for the bare resonators. The similarity is even more clearly seen in ν 0 (B), where the untuned resonant frequency of the nanoSQUID resonator jumps up at an applied field of 0.21 mT just as in the bare resonator. This suggests that the resonator's field resilience is the limiting factor in the nanoSQUID device field performance, a conclusion that is also consistent with measurements of nanoSQUIDs successfully operating in fields up to 1 T [20]. Our results are therefore promising for future generations of devices where SQUIDs are embedded within field-resilient resonators. Importantly, these SQUID-embedded resonators operate at local fields comfortably above 30 mT, the first clock transition of bismuth spins. In the literature, resonator tuning is typically given in units of flux quanta. This means that there are no comparable results on field resilience of tunable resonators to which this device can be compared. Therefore, we believe that this study is the first report addressing the field resilience of a tunable resonator. IV. CONCLUSIONS In conclusion, we have embedded constriction nanoSQUIDs in a Nb resonator, and demonstrated a tunability of >100 MHz with Q i ≈ 3.0 × 10 4 . The SQUID resonator is resilient to magnetic fields and maintains a Q i of 9.1×10 3 at an applied perpendicular field up 0.5 mT. A number of modifications to the device and measurement setups may straightforwardly be made to improve field resilience, specifically: operating the device in parallel fields (which tune spins to the clock transition without applying large fields to the resonator), modifying the device design by the addition of anti-dots [37], patterning the whole ground-plane [24] and/or the use of resonator designs inherently more robust to external field [24,37,38]. These resonators hold great promise for future hybrid quantum system applications, in particular for applications which require magnetic field, such as for operating Bi impurities in Si at the clock transition. VI. SUPPLEMENTAL In the main text of the article, we report measurements on two different resonators fabricated on the same chip. Six resonators were patterned on this chip by EBL. One device has no further constrictions embedded and is used as a reference (see Fig. 5 of the main text, red symbols). The remaining five resonators have constrictions embedded. Three resonators, including the resonator featured in the main text, have a single nanoSQUID embedded. The two other resonators have a chain of four SQUIDs is embedded in series. In total, 11 nanoSQUIDs were fabricated on this sample, resulting in a total of 22 constrictions being milled by Ne FIB. Of these 21 were successfully fabricated and one constriction was destroyed during fabrication -a yield of 95%. There is some distribution in the widths of constrictions which were fabricated; this is shown in Fig. 6. This distribution could likely be narrowed in future devices simply due to improved fabrication. 
The resonators with chains of SQUIDs were fabricated because simple considerations suggest that this could increase the tunability of the resonator: for identical SQUIDs, the inductance of each SQUID should tune by the same amount, ∆L, implying that the total inductance of the resonator tunes by N∆L, where N is the number of SQUIDs in series. Such behaviour, however, depends on the SQUIDs being identical: if the SQUIDs are not identical, switching in each individual SQUID will occur at different field values, meaning that it will not be possible to simultaneously maximally detune each SQUID at a single field value. This suggests that the total tuning will likely not increase significantly (and tuning will be much harder to control as there will be N SQUIDs, each of which may undergo a hysteretic jump in the flux threading it). To demonstrate that behaviour similar to that presented in the main text is observed in other SQUID-embedded resonators, we present in Fig. 7 characterisation of one of the other resonators embedded with a single nanoSQUID. This resonator has an untuned resonant frequency of 3.825 GHz and Q i ≈ 3×10 4. The S21 amplitude response shows hysteretic jumps just as in Fig. 3(a,b). The resonator may be tuned by >25 MHz. This resonator has a lower resonant frequency (higher inductance) than the resonator featured in the main text, so the same tunable inductor would be expected to tune the resonant frequency less. Analysis analogous to that presented in the main text gives β L ≈ 6 for this device. [Figure 7 caption: The resonator has an untuned resonant frequency of 3.825 GHz and Q i ≈ 3×10 4; hysteretic jumps are found, as in Fig. 3(a,b).] The two resonators compared in Fig. 5 (main text) have similar length by design, are designed to have 50 Ω characteristic impedance, and have the same capacitance and inductance per unit length. More refined calculations accounting for the width of the gap and central conductor, using ε r = 11.7 for Si at cryogenic temperatures, give an impedance of 51.1 Ω. Using the impedance of the bare resonator, its resonant frequency (7.84 GHz) and the design length (3.552 mm) of the two resonators, we can calculate the total inductance L and total capacitance C of the resonator without the embedded SQUID from ν = 1/(4√(LC)) and Z = √(L/C), obtaining L = 1.63 nH and C = 0.62 pF. By assuming the same capacitance and inductance per unit length in the SQUID-embedded resonator, we may calculate the inductance and capacitance coming from the resonator. At zero field, the inductance is given by L tot = L res + L 0 SQUID, so by taking the untuned frequency (7.42 GHz) we calculate the zero-field SQUID inductance L 0 SQUID ≈ 0.168 nH. We assume that this inductance comes from the loop inductance of the SQUID when untuned. We note that, when embedded within a resonator, the two arms of the SQUID are in parallel. This means that the inductances add inversely, 1/L tot = 2/L arm. When considering the β L value of the SQUID, we must consider the total inductance in the loop, which means that the arms of the SQUID are in series. Therefore, the series inductance is four times larger than the parallel inductance. Using the minimum bound on β L from the main text of 3.4, we can place a minimum bound on the critical current I C > 5 µA.
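The supplemental estimates above can be reproduced numerically as sketched below. The total L and C of the bare resonator follow from ν = 1/(4√(LC)) and Z = √(L/C); the quoted zero-field SQUID inductance of 0.168 nH is then taken as an input (rather than re-derived, since the exact length accounting is not reproduced here) to obtain the β_L-based lower bound on the critical current.

```python
# Reproducing the supplemental estimates: total L and C of the bare quarter-wave
# resonator from nu = 1/(4*sqrt(L*C)) and Z = sqrt(L/C), then a lower bound on
# the constriction critical current from beta_L = 2 * L_loop * I_C / Phi0.
import math

PHI0 = 2.067833848e-15     # flux quantum (Wb)
Z = 51.1                   # ohm, calculated characteristic impedance
NU_BARE = 7.84e9           # Hz, bare (no-SQUID) resonator frequency

sqrt_LC = 1.0 / (4.0 * NU_BARE)        # seconds
L = Z * sqrt_LC                        # total inductance  -> ~1.63 nH
C = sqrt_LC / Z                        # total capacitance -> ~0.62 pF
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")

# Zero-field SQUID inductance quoted in the text (parallel combination of the
# two arms); the series loop inductance is four times larger.
L_SQUID_PARALLEL = 0.168e-9            # H
L_LOOP = 4.0 * L_SQUID_PARALLEL        # H
BETA_L_MIN = 3.4
I_C_MIN = BETA_L_MIN * PHI0 / (2.0 * L_LOOP)
print(f"I_C > {I_C_MIN * 1e6:.1f} uA")   # -> ~5.2 uA, matching the quoted > 5 uA
```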
5,602.8
2018-07-02T00:00:00.000
[ "Physics", "Engineering" ]
Microbiota and Metabolomic Patterns in the Breast Milk of Subjects with Celiac Disease on a Gluten-Free Diet The intestinal microbiome may trigger celiac disease (CD) in individuals with a genetic disposition when exposed to dietary gluten. Research demonstrates that nutrition during infancy is crucial to the intestinal microbiome engraftment. Very few studies to date have focused on the breast milk composition of subjects with a history of CD on a gluten-free diet. Here, we utilize a multi-omics approach with shotgun metagenomics to analyze the breast milk microbiome integrated with metabolome profiling of 36 subjects, 20 with CD on a gluten-free diet and 16 healthy controls. These analyses identified significant differences in bacterial and viral species/strains and functional pathways but no difference in metabolite abundance. Specifically, three bacterial strains with increased abundance were identified in subjects with CD on a gluten-free diet of which one (Rothia mucilaginosa) has been previously linked to autoimmune conditions. We also identified five pathways with increased abundance in subjects with CD on a gluten-free diet. We additionally found four bacterial and two viral species/strains with increased abundance in healthy controls. Overall, the differences observed in bacterial and viral species/strains and in functional pathways observed in our analysis may influence microbiome engraftment in neonates, which may impact their future clinical outcomes. Introduction Celiac disease (CD) is an autoimmune enteropathy triggered by ingestion of gluten, a protein found in wheat, rye, and barley [1]. This disease occurs only in individuals with specific human leukocyte antigen (HLA) DQ haplotypes (DQ2, DQ8) [1]. However, while 30% of the population carries these compatible genetics, only 2-3% of these individuals ultimately develop CD [2]. This implies that genetic predisposition and exposure to dietary gluten are necessary but not sufficient to develop CD. Multiple environmental factors [3][4][5][6][7] including infant feeding type have been evaluated independently in case-control studies [3,4] and meta-analyses [8,9]. While early studies suggest that formula feeding during infancy is associated with an increased risk of developing CD [8], subsequent prospective studies have not found that breastfeeding was protective against developing CD [3,4]. Mounting evidence supports the role of the gut microbiota as one of the environmental factors involved in the pathogenesis of chronic medical conditions, including food allergy [5] and inflammatory bowel disease [6]. Many factors are known to alter the composition of the intestinal microbiota, including exposure to antibiotics [10], exposure to other medications [11,12], and dietary patterns [13]. Additionally, research suggests that there are significant differences in the intestinal microbiota of infants who are breastfed when compared with those who are formula fed, both in the general population [14,15] and in infants at risk for developing CD [16][17][18]. While human breast milk was once considered a sterile fluid, it is now recognized to contain a unique microbiota that likely plays a role in engraftment of the infant intestinal microbiota. The source of the breast milk microbiota is not fully understood, though is thought to be multifactorial and include the maternal gastrointestinal tract, the maternal skin, and the infant oropharynx [19]. 
Multiple studies have been conducted evaluating the microbial characteristics of breast milk in healthy subjects, and have identified the presence of bacterial species such as Bifidobacterium [20][21][22] (including B. breve, B. adolescentis, and B. bifidum [20]), Lactobacillus [22] (including, L. rhamnosus [23], L. gasseri [24,25], and L. lactis [23]), Staphylococcus [22], and Streptococcus [22]. However, few studies have been performed in mothers with CD. One case-control study utilized polymerase chain reaction (PCR)-based techniques and found a reduced abundance of Bifidobacterium species in mothers with CD when compared with mothers without CD at one month after delivery [26]. Another study compared the breast milk microbiota of subject's whose children later did or did not develop CD at 9 months after delivery using 16S rRNA sequencing [27]. While they did not identify significant differences between the breast milk microbial composition in subjects with or without CD, they found that the breast milk of subjects whose children later developed CD had an increased abundance of Methylobacterium komagatae, Methylocapsa palsarum, and Bacteroides vulgatus [27]. Though these studies provide an important foundation for our understanding of the composition of the breast milk microbiota in CD, they do not focus on the initial period of breast feeding immediately following birth. Furthermore, previous studies utilize techniques such as PCR and 16S rRNA sequencing which are limited to identifying bacteria at the species level and do not allow for the identification of additional microorganisms (including viruses, protists, archaea, and fungi), nor can they directly provide information about the functional characterization of the microbiome. To our knowledge, metagenomic sequencing has not been utilized to investigate the breast milk microbiota of subjects with and without CD, which is of clinical interest given implications of the breast milk composition on the engraftment of the infant intestinal microbiota. Here, we analyze the breast milk microbiota and metabolome of subjects with CD on a gluten-free diet and healthy controls as part of an ongoing prospective cohort study called the Celiac Disease Genomic, Environmental, Microbiome, and Metabolomic study (CDGEMM) [28], which follows over 500 infants at high risk of developing CD. We perform metagenomic and metabolomic analysis to compare samples collected one week after parturition in order to investigate whether there are differences in the breast milk of subjects with CD on a gluten-free diet compared with healthy control subjects who ingest gluten. Materials and Methods The CDGEMM cohort consists of 500 infants from the United States, Italy, and Spain with a first-degree relative with CD, who have been followed prospectively since birth [28]. As part of this study, we collect perinatal and maternal health information, as well as maternal breast milk and maternal pre-and post-natal stool, in addition to infant health and dietary information and infant samples (blood, stool). Thirty-six subjects from the United States were chosen for our analysis; 20 subjects with CD on a gluten-free diet and 16 healthy control subjects who ingest gluten. Parents of the infants included in the study pro-vided written informed consent per the standards outlined and approved by the Partners Human Research Committee Institutional Review Board. 
Parents completed a detailed questionnaire at enrollment, providing information about pregnancy, delivery, maternal medical history, maternal antibiotic and probiotic usage, and infant antibiotic usage. All subjects included in our analysis provided a breast milk sample at 7-14 days after parturition. Maternal breast milk samples were collected using a pump at home, poured into the provided tube (10 mL), and immediately frozen for shipment. After overnight shipment, samples were placed in a −80 °C freezer for long-term storage. At the time of analysis, breast milk samples were thawed on ice and aliquoted. Maternal breast milk DNA was isolated using a modified INSPIRE protocol [29]. Given the naturally low biomass of microbial DNA in breast milk and the need to focus on sterile and validated processing techniques, we chose this well-known protocol previously utilized [30,31]. Briefly, samples (2 mL) were centrifuged (13,000× g) for 10 min at 4 °C. The fat layer was removed using a sterile swab, and the supernatant was discarded. The cell pellet was then resuspended in 500 µL TE50 enzyme dilution buffer. For enzymatic lysis, each sample was mixed with 100 µL of lytic enzyme cocktail mix. The lytic enzyme cocktail mix was composed of 50 µL lysozyme (lysozyme, 10 mg/mL, ~400-500 KU/mL in molecular grade water; cat# L6876-10G, Sigma-Aldrich, Milwaukee, WI, USA), 6 µL mutanolysin (mutanolysin, 25 KU/mL in molecular grade water; cat# M9901-50KU, Sigma-Aldrich, Milwaukee, WI, USA), 3 µL lysostaphin (lysostaphin, 4000 U/mL in 20 mM sodium acetate; cat# L9043-5MG, Sigma-Aldrich, Milwaukee, WI, USA), and 41 µL TE50 enzyme dilution buffer. After resuspension in the lytic enzyme cocktail mix, the lysate mixture was incubated on a dry heat block at 37 °C for one hour. A modified protocol for the QIAamp DNA Mini Kit was then utilized for DNA extraction [29]. The CosmosID (CosmosID Inc., Rockville, MD, USA) commercial metagenomic analysis platform (formerly known as GENIUS, https://app.cosmosid.com/, accessed on 5 May 2020) [32,33] was used to identify the composition of breast milk microbiota up to a strain-level resolution, as detailed in our previous work (Supplementary File S4) [18]. Functional profiling of metagenomic reads was also conducted using the same platform, which works as follows: initial quality control, adapter trimming, and preprocessing of metagenomic sequencing reads are done using BBduk. The quality-controlled reads are then subjected to a translated search using Diamond BLASTX against a comprehensive and non-redundant protein sequence database, UniRef90. The mapping of metagenomic reads to gene sequences is weighted by mapping quality, coverage, and gene sequence length to estimate community-wide weighted gene family abundances, as described by Franzosa et al. [34]. Gene families are then annotated to MetaCyc reactions (metabolic enzymes) to reconstruct and quantify MetaCyc metabolic pathways in the community, as described in [34]. Furthermore, the UniRef90 gene families are regrouped to GO terms in order to obtain an overview of GO functions in the community. Lastly, to facilitate comparisons across multiple samples with different sequencing depths, the abundance values are normalized using Total-Sum Scaling (TSS) normalization to produce "copies per million" (analogous to TPMs in RNA-Seq) units.
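As a concrete illustration of the last step, the snippet below shows what Total-Sum Scaling to "copies per million" amounts to for a table of per-sample feature abundances. This is a generic sketch of TSS normalization, not the CosmosID implementation, and the example counts are invented.

import numpy as np

# Rows = features (gene families or pathways), columns = samples; values are raw abundances.
counts = np.array([[120.0,  30.0],
                   [ 45.0,  10.0],
                   [  5.0,  60.0]])

# Total-Sum Scaling: divide by each sample's column total, then scale to "copies per million".
cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6
print(cpm)   # each column now sums to 1,000,000, making samples of different depths comparable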
All human breast milk samples for metabolomics were processed using the Metabo-Prep GC kit (Theoreo, Montecorvino Pugliano, Italy) according to the manufacturer's instructions for metabolome extraction, purification, and derivatization in preparation for gas chromatography-mass spectrometry (GC-MS) analysis, in accordance with previous work (Supplementary File S4) [18]. A maximum tolerance of 50 for the linear index was used in this study. The Chao1 richness estimator and Shannon diversity index for alpha diversity analysis were calculated using the estimateR and diversity functions of the vegan R package [35], respectively. Bray-Curtis beta diversity analysis was conducted using the vegdist and pco functions of the ecodist R package [36]. The Mann-Whitney U (Wilcoxon rank-sum) test (using the wilcox.test function of R) was used to identify microbes, pathways, and metabolites whose abundance is significantly different between subjects with a history of CD on a gluten-free diet and healthy controls. Significant results were reported for a p-value of <0.05 without adjustment for multiple testing. The corresponding adjusted p-values based on the Benjamini-Hochberg method (using the p.adjust function of R with 'fdr' as its method argument) for these results are provided in Supplementary File S3.
Results
Thirty-six subjects with transitional breast milk samples available at 7-14 days after parturition were selected for analysis. Of those selected, 20 subjects had CD and were on a gluten-free diet; the other 16 were healthy control subjects ingesting gluten. Detailed information about pregnancy/delivery, maternal medical history, and maternal antibiotic and probiotic use (both during pregnancy and after delivery while breastfeeding) was collected (Table 1; Supplementary File S2). Taxonomic profiling of the metagenomes was performed at both species- and strain-level resolution for bacteria, fungi, protists, and viruses (Supplementary File S1 and Figure S3). Functional profiling was also done to identify functional (MetaCyc) pathways encoded by each metagenome, and metabolomics analysis was conducted to profile the metabolites present in each sample (Supplementary Figures S1 and S2). No significant change in the abundance of fungi, protists, or metabolites was found in our analysis.
Bacterial Composition
We performed a cross-sectional analysis between subjects to evaluate differences in microbiota composition between our groups. We did not identify any differences in alpha or beta diversity at the species or strain level (p-value < 0.05; Supplementary Figures S1 and S2). We did, however, identify three bacterial strains with increased abundance in the breast milk of subjects with CD on a gluten-free diet: Acinetobacter ursingii DSM 16,037 = CIP 107286, Rothia mucilaginosa ATCC 25296, and Acinetobacter sp. 479375_u_t (Figure 1A; p-value < 0.05). In addition to identifying an increased abundance in the corresponding species for these three strains, we also identified an increase in abundance of one other species, Bacillus cereus (Figure 1B; p-value < 0.05). We also identified four bacterial strains that were significantly increased in abundance in the breast milk of our healthy control subjects: Bacteroides_u_t, Faecalibacterium prausnitzii, Clostridiales_u_t, and Gemella_u_t (Figure 1A; p-value < 0.05). We also observed an increase in the abundance of species corresponding to these four strains (Figure 1B; p-value < 0.05). The "_u_t" and "_u_s" mean unspecified species and strains, respectively.
This means that these taxa could not be resolved at the species or strain levels. Figure 1. (A) Cross-sectional analysis of bacterial strains in the breast milk of subjects with CD on a gluten-free diet compared with healthy controls: bacterial strains with a statistically significant difference in abundance between the breast milk of subjects with CD on a gluten-free diet and healthy controls according to the Mann-Whitney U test (Wilcoxon rank-sum test) (p-value < 0.05); (B) cross-sectional analysis of bacterial species in the breast milk of subjects with CD on a gluten-free diet compared with healthy controls: bacterial species with a statistically significant difference in abundance between the breast milk of subjects with CD on a gluten-free diet and healthy controls according to the Mann-Whitney U test (Wilcoxon rank-sum test) (p-value < 0.05).
Virome
We identified a statistically significant difference in the alpha diversity between viral species (Figure 2; p-value < 0.05) using the Chao1 estimator to evaluate differences between groups. There was no identified difference in beta diversity (Supplementary Figure S2). Two viral species were found in increased abundance in healthy control subjects: Dill cryptic virus 2 and Rosellinia necatrix partitivirus 2 (Figure 3; p-value < 0.05). Figure 3. Cross-sectional analysis of viral species in the breast milk of subjects with CD on a gluten-free diet compared with healthy controls: viral species with a statistically significant difference in abundance between the breast milk of subjects with CD on a gluten-free diet and healthy controls according to the Mann-Whitney U test (Wilcoxon rank-sum test) (p-value < 0.05).
Pathways
Our cross-sectional analysis identified five pathways with increased abundance in subjects with CD on a gluten-free diet.
These pathways included: superpathway of fatty acid biosynthesis initiation (E. coli), purine ribonucleosides degradation, heterolactic fermentation, phosphatidylcholine acyl editing, and mevalonate pathway I (Figure 4; p-value < 0.05). Figure 4. Cross-sectional analysis of MetaCyc pathways in the breast milk of subjects with CD on a gluten-free diet compared with healthy controls: pathways with a statistically significant difference in abundance between the breast milk of subjects with CD on a gluten-free diet and healthy controls according to the Mann-Whitney U test (Wilcoxon rank-sum test) (p-value < 0.05).
Discussion
To our knowledge, metagenomic sequencing has not been employed to explore the differences in the microbial and metabolomic composition of breast milk in subjects with CD. Our analysis provides novel insights into differences at the species and strain level for bacteria and viruses. We found that the breast milk composition of subjects with CD on a gluten-free diet appears to be quite similar to the breast milk composition of healthy control subjects at 7-14 days post-partum. There was no difference in diversity for bacteria; however, we did identify a difference in alpha diversity for viruses. We also identified differences between the breast milk of subjects with CD on a gluten-free diet and healthy controls at both the strain and the species level for bacteria and viruses. Given that the majority of available microbiome and virome literature at this time is at the species level, here we focus the discussion of our results at the species level.
While a previous study which utilized 16S sequencing did not identify differences in the breast milk microbiota of subjects with CD compared with healthy controls [27], we identified statistically significant differences in eight bacterial species and two viral species between the two groups. The eight bacteria isolated in our study have previously been isolated from the breast milk of healthy subjects [37][38][39][40][41][42][43][44]. We did not identify breast milk literature for these bacteria in CD or other treated or untreated diseases; however, we did identify literature related to the intestinal microbiota for some of these bacteria. For example, Rothia mucilaginosa, which was noted to have an increased abundance in the breast milk of subjects with CD, is also reported to have an increased abundance in the gut microbiota of subjects with autoimmune inflammatory conditions such as primary sclerosing cholangitis [45]. Additionally, we identified an increased abundance of Faecalibacterium prausnitzii in the breast milk of healthy control subjects, which has been found to be in decreased abundance in the intestinal microbiome of subjects with active inflammatory bowel disease [46]. Our virome analysis found two species, Dill cryptic virus 2 and Rosellinia necatrix partitivirus 2, increased in abundance in the breast milk of healthy control subjects. To our knowledge, there is no prior published data of the breast milk virome in disease processes for comparison. We did identify five pathways with increased abundance in subjects with CD on a gluten-free diet. Heterolactic fermentation was previously linked to gastrointestinal disease with an increased abundance in esophageal brushings of patients with Barrett's esophagus when compared with healthy controls [47]. Otherwise, there is no prior research reporting these pathways, and they may be CD specific, though more research is required to further elucidate these relationships. While we were able to identify statistically significant differences between subjects with CD on a gluten-free diet and healthy control subjects, our study is limited by a relatively small sample size. This implies that a number of differentially abundant features reported in this study based on unadjusted p-values for multiple testing have a high chance of being false positives (see Supplementary File S3 for a list of features with a high false discovery rate). This small sample size was chosen for our pilot analysis and can be expanded upon in the future due to ongoing enrollment in this study. Furthermore, as our subjects with CD were on a gluten-free diet and possibly in remission, it would be valuable also to compare the results of our breast milk microbiome and metabolome analyses with individuals with active CD or healthy individuals on a gluten-free diet for reasons other than CD, to see whether additional or distinct differences might be identified. These comparisons would provide insight into whether the differences noted in our analysis were secondary to disease state (CD on a gluten-free diet and possibly in remission) or diet (gluten-free). Finally, our study is a case-control model, which allows for an initial snapshot of the state of the breast milk microenvironment at a single point in time. It will be important in future prospective studies to consider longitudinal data to identify trends throughout the stages of breastfeeding. 
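The multiple-testing caveat above can be made concrete with a small sketch. The snippet below applies the same two steps described in the Methods, a two-sided Mann-Whitney U test per feature followed by Benjamini-Hochberg adjustment, to an invented abundance table; it is illustrative only and uses Python (numpy/scipy) rather than the R functions used in the study.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Invented example: abundances of 5 features in 20 CD and 16 control samples.
cd = rng.lognormal(mean=0.0, sigma=1.0, size=(5, 20))
ctrl = rng.lognormal(mean=0.0, sigma=1.0, size=(5, 16))

# Two-sided Mann-Whitney U (Wilcoxon rank-sum) test for each feature.
pvals = np.array([mannwhitneyu(cd[i], ctrl[i], alternative="two-sided").pvalue
                  for i in range(cd.shape[0])])

# Benjamini-Hochberg adjustment: q_(i) = min over j >= i of p_(j) * m / j.
order = np.argsort(pvals)
m = len(pvals)
q = pvals[order] * m / np.arange(1, m + 1)
q = np.minimum.accumulate(q[::-1])[::-1]          # enforce monotonicity from the top
q_adj = np.empty_like(q)
q_adj[order] = np.clip(q, 0, 1)

print("raw p-values:     ", np.round(pvals, 3))
print("BH-adjusted (FDR):", np.round(q_adj, 3))

With only a handful of samples per group, several features can cross an unadjusted p < 0.05 threshold while their BH-adjusted values remain well above it, which is exactly the false-discovery concern raised in the Discussion.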
Conclusions In this manuscript, we utilized a comprehensive metagenomic approach to evaluate the composition of transitional breast milk in subjects with and without CD, and we identified multiple bacterial strains, species, and viral species that differ between these two groups. To the best of our knowledge, there is little other literature evidence demonstrating similar trends, thus supporting the need for further breast milk and intestinal microbiome research, both in CD and in other disease processes. Additionally, further work is required to investigate whether the differences we noted impact the intestinal engraftment in the offspring, and thus potentially impact the long-term health of the offspring. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Publicly available datasets were analyzed in this study. This data can be found here: SRA SUB9917188.
5,224.8
2021-06-29T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Gold nanoparticles-induced cytotoxicity in triple negative breast cancer involves different epigenetic alterations depending upon the surface charge Gold nanoparticles (AuNPs) are used enormously in different cancers but very little is known regarding their molecular mechanism and surface charge role in the process of cell death. Here, we elucidate the molecular mechanism by which differentially charged AuNPs induce cytotoxicity in triple negative breast cancer (TNBC) cells. Cytotoxicity assay revealed that both negatively charged (citrate-capped) and positively charged (cysteamine-capped) AuNPs induced cell-death in a dose-dependent manner. We provide first evidence that AuNPs-induced oxidative stress alters Wnt signalling pathway in MDA-MB-231 and MDA-MB-468 cells. Although both differentially charged AuNPs induced cell death, the rate and mechanism involved in the process of cell death were different. Negatively charged AuNPs increased the expression of MKP-1, dephosphorylated and deacetylated histone H3 at Ser10 and K9/K14 residues respectively whereas, positively charged AuNPs decreased the expression of MKP-1, phosphorylated and acetylated histone H3 at Ser 10 and K9/K14 residues respectively. High-resolution transmission electron microscopy (HRTEM) studies revealed that AuNPs were localised in cytoplasm and mitochondria of MDA-MB-231 cells. Interestingly, AuNPs treatment makes MDA-MB-231 cells sensitive to 5-fluorouracil (5-FU) by decreasing the expression of thymidylate synthetase enzyme. This study highlights the role of surface charge (independent of size) in the mechanisms of toxicity and cell death. 1) Experimental procedures a) Synthesis of citrate-capped AuNPs Citrate-capped AuNPs were prepared by the reduction of Gold (III) chloride trihydrate (HAuCl4.3H2O) with trisodium citrate according to the Turkevich and Frens method 1,2 . Briefly, the reaction was carried out in a 250 mL round bottomed flask with the centre neck attached to a reflux condenser. First, 250 mL of 0.25 mM HAuCl4.3H2O solution was heated to boiling. Then, 4 mL of aqueous solution of 1% trisodium citrate was added to it under vigorous stirring and the boiling was continued for another 15 min until the solution turns to a deep red colour. During the reaction, the colour of the solution changed initially from yellow to colourless and finally to wine red. It was then stirred until it reached room temperature to control the particle size and thus achieving a narrow particle size distribution. b) Synthesis of cysteamine-capped AuNPs Cysteamine-capped gold nanoparticles were prepared by the reduction of gold (III) chloride trihydrate with sodium borohydride in the presence of cysteamine 3 . Briefly, 400 μL of 213 mM cysteamine hydrochloride was added to 40 mL of 1.42 mM gold (III) chloride trihydrate in a conical flask and then subjected to stirring for 20 min at room temperature. Sodium borohydride was dissolved in cold distilled water immediately before use. After 20 min, 10 μL of 10 mM NaBH4 was added quickly into the mixture solution. Vigorous stirring was maintained from the addition of NaBH4 to another 30 min. The colour of mixture solution during the reaction process was changed from yellow to brownish. After further mild stirring, the gold nanoparticle solution was stored in the dark condition at 4 C. 
c) Zeta size, Zeta potential and TEM
Dynamic light scattering for characterization of the hydrodynamic size of AuNPs dispersed in water was performed on a Nano-ZS (Malvern Instruments, Malvern, UK), taking the average of 5 measurements. Zeta potential was also measured to determine the amount of aggregation of the AuNPs.
d) Transmission electron microscopy (TEM) of AuNPs treated cells
Ultra-thin sections of cells were analyzed using TEM to see the distribution of AuNPs according to the modified method as described 5 . Briefly, cells were treated with AuNPs (100 μg/mL, 250 μg/mL and 500 μg/mL) for 24 h. At the end of incubation, the cells were washed with PBS to remove any unbound AuNPs. Cells were then fixed in 2.5% glutaraldehyde for 30 min at 4 °C. Fixed cells were scraped and washed 5 times with PBS. Before dehydration with increasing concentrations of alcohol, cells were further treated with 1% osmium tetroxide for 2 h at 4 °C. Alcohol and Spurr's low viscosity resin were used in the ratios 2:1, 1:1, 1:2 and 1:3. Finally, 100% Spurr resin was added and the beam capsule was incubated for 18 h at 70 °C. An RMC ultra-microtome was used for cutting ultra-thin sections of 60 nm thickness. Sections were stained with 0.5% uranyl acetate and analyzed under a FEI TF-20 TEM at 120 kV.
e) Isolation of total proteins and western blotting
After AuNPs treatment for 24 h, isolation of total proteins and histones, using 0.25 M HCl, was done by the methods as described 6 . Protein estimation was performed by Lowry's method, and the proteins were then reduced using 1X Laemmli's sample buffer. Equal amounts were run on SDS-PAGE and electrophoretically transferred onto a PVDF membrane using a semi-dry transfer apparatus (Bio-Rad). Immunoblot analysis was performed using anti-phospho-p38 antibody; the membranes were then stripped in stripping buffer and re-probed with another antibody. The immunoblots were quantified by densitometry scanning using NIH ImageJ software.
f) Total RNA isolation
Briefly, total RNA was extracted from cells using TRIzol reagent (Invitrogen, CA, USA) and was purified according to the manufacturer's protocol using an RNeasy kit (AuPrep RNeasy Mini Kit; Life Technologies Pvt. Ltd., India) as described 7 . The RNA quality and integrity of each sample were assessed using a NanoDrop spectrophotometer (ND-1000) by the A260/280 absorbance ratio and agarose gel electrophoresis, respectively.
g) Reverse transcription and RT-PCR
For checking the mRNA levels of genes, cDNA synthesis from RNA was carried out using the Verso cDNA synthesis kit (Thermo Fisher Scientific, USA) according to the manufacturer's instructions as described 7 , and quantitative real-time PCR was then carried out using a LightCycler 2.0 (Roche Diagnostics) according to the method as described 6 .
1,309
2018-08-16T00:00:00.000
[ "Medicine", "Chemistry" ]
On the Form of the Optimal Measurement for the Probability of Detection
We consider the problem of maximizing the probability of detection for an infinite number of mixed states. We show that for linearly independent states there exists a unique simple optimal measurement, thus generalizing a result obtained in finite dimension by Y. Eldar (Phys. Rev. A 68, 052303:1-052303:4, 2003).
Introduction
Let ρ_1, ρ_2, . . . be (finitely or infinitely many) quantum states (density matrices) on B(H), the bounded linear operators on a Hilbert space H of arbitrary dimension, which can occur with some a priori probabilities π = (π_1, π_2, . . .). We want to find, in an optimal way, the state in which the system really is. To this end we perform a measurement (also called a strategy) M, by which is meant a sequence (M_1, M_2, . . .) of positive operators from B(H) such that Σ_i M_i = 1, where the series is convergent in the weak operator topology on B(H). A measurement M = (M_1, M_2, . . .) for which all M_i's are pairwise orthogonal projections is called simple or sharp (see [1]). If we receive outcome M_i, we choose state ρ_i. The probability that the true state is ρ_i when the measurement gives result M_j is given by tr(ρ_i M_j). Thus tr(ρ_i M_i) is the probability of guessing state ρ_i correctly. If our guess is ρ_j while the true one is ρ_i, then we pay the penalty L(i, j). The function L is called a loss function. The risk function is defined by the formula The expectation of the risk function is called the Bayes risk, and denoted by r(M, π), i.e. Consider the concrete loss function of the form Then we have In this case, minimizing the Bayes risk is equivalent to maximizing the expression The above expression is the probability of a correct guess while performing measurement M, called the probability of detection. We shall denote this probability by P_D(M). We want to find a measurement which maximizes the probability of detection. The existence of an optimal measurement is discussed in [7] in a general setup. In our case, the following result from [7] is sufficient. Theorem 1 There exists a measurement maximizing the probability of detection. For two states the solution can be achieved by taking the simple measurement made of the projections onto the supports of the positive and negative parts of the Hermitian operator π_1 ρ_1 − π_2 ρ_2. Kaniowski [4] did a deeper analysis for two finite-dimensional projections with arbitrary a priori probability. Each state ρ_i has the spectral decomposition where λ_i^j > 0 and m_i ∈ {1, 2, . . . , ∞}. In our further considerations we assume that the vectors {ϕ_n^m} span the Hilbert space H. For arbitrary states it is hard to say anything about the optimal measurement. In the case of a finite-dimensional Hilbert space and a finite number of states it is natural to assume linear independence of the vectors {ϕ_n^m}. Then we say that the states are linearly independent. For dim H = ∞ we have a stronger assumption. We say that states are strongly linearly independent if the vectors {ϕ_i^j} are strongly linearly independent, i.e. for each i, j we have ϕ_i^j ∉ Lin{ϕ_n^m : (n, m) ≠ (i, j)}. A state ρ is called pure if it has the form |ϕ⟩⟨ϕ| for some unit vector ϕ ∈ H; otherwise a state is called mixed. For dim H < ∞ and pure states Kennedy [5,6] obtained the following result. It turns out that this result holds also for dim H = ∞.
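Several display equations in the passage above were lost in extraction. Under the standard Bayesian detection setup described by the surrounding text, they presumably read as follows (a reconstruction from the definitions in the text, not a quotation of the original):

\[
\sum_i M_i = \mathbf{1}, \qquad
R(i, M) = \sum_j L(i,j)\,\mathrm{tr}(\rho_i M_j), \qquad
r(M, \pi) = \sum_i \pi_i\, R(i, M),
\]
\[
L(i,j) = 1 - \delta_{ij}
\quad\Longrightarrow\quad
r(M,\pi) = 1 - \sum_i \pi_i\,\mathrm{tr}(\rho_i M_i),
\]
\[
P_D(M) = \sum_i \pi_i\,\mathrm{tr}(\rho_i M_i),
\qquad
\rho_i = \sum_{j=1}^{m_i} \lambda_i^j\, |\varphi_i^j\rangle\langle \varphi_i^j|, \quad \lambda_i^j > 0,
\]

so that minimizing the Bayes risk with this loss is the same as maximizing the probability of detection P_D(M).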
Theorem (Łuczak [7], 2009) Let pure states ρ 1 , ρ 2 , . . . be strongly linearly independent. Then there exists a unique measurement maximizing the probability of detection and this measurement is simple. For dim H < ∞ and arbitrary states Eldar [2] obtained the following result. Theorem (Eldar [2], 2003) Let the states ρ 1 , ρ 2 , . . . , ρ n be linearly independent. Then there exists a unique measurement maximizing the probability of detection and this measurement is simple. A natural question is whether the Eldar result can be generalized to infinite dimension. In this paper we show that the answer is positive. Proof We use the method from the proof of Lemma 4 in [7]. Assume that we have e.g., = π 1 tr(ρ 1 (1 − Q)) + π 1 tr Optimal Measurement From the above and the optimality of the measurement M we obtain that tr(ρ 1 (1−Q)) = 0. Therefore tr(ρ 1 Q) = 1. This gives This contradicts the relation Before the main theorem we show an interesting result. is an optimal measurement, then M k is a nonzero uniquely determined projection. Proof From the Holevo condition we have i =k Therefore for all ξ ∈ H we obtain From the above and assumption (2) N = (N 1 , N 2 , . . .) are two distinct optimal measurements such that M k = N k . From the above M k and N k are projections. Of course 1 2 M + 1 2 N is also an optimal measurement and 1 2 M k + 1 2 N k is a projection. Then a contradiction. Consequently, M k is uniquely determined. Let the states ρ 1 , ρ 2 , . . . of the form (1) be strongly linearly independent. Our main theorem is Therefore M is the unique simple measurement with the nonzero outcomes. Suppose now that ρ 1 , ρ 2 , . . . are arbitrary states linearly independent or not. The next theorem shows a relation between the ranges of elements of an optimal measurement and the ranges of the states in question. Consequently, condition (4) is of the form This implies dimRangeM i ≤ dimRangeρ i . As a corollary we obtain the main result of [7]. Corollary 1 Let pure states ρ 1 , ρ 2 , . . . be strongly linearly independent. Then there exists a unique measurement maximizing the probability of detection and this measurement is simple. The outcomes of this measurement are rank one operators. Proof The first part of the corollary is a consequence of Theorem 3. From Theorem 4 the outcomes of the optimal measurement are zero or rank one operators but Lemma 1 implies that the outcomes can't be zero operators.
1,603
2015-04-28T00:00:00.000
[ "Mathematics" ]
An Introduction of a Modular Framework for Securing 5G Networks and Beyond : Fifth Generation Mobile Network (5G) is a heterogeneous network in nature, made up of multiple systems and supported by different technologies. It will be supported by network services such as device-to-device (D2D) communications. This will enable the new use cases to provide access to other services within the network and from third-party service providers (SPs). End-users with their user equipment (UE) will be able to access services ubiquitously from multiple SPs that might share infrastructure and security management, whereby implementing security from one domain to another will be a challenge. This highlights a need for a new and effective security approach to address the security of such a complex system. This article proposes a network service security (NSS) modular framework for 5G and beyond that consists of different security levels of the network. It reviews the security issues of D2D communications in 5G, and it is used to address security issues that affect the users and SPs in an integrated and heterogeneous network such as the 5G enabled D2D communications network. The conceptual framework consists of a physical layer, network access, service and D2D security levels. Finally, it recommends security mechanisms to address the security issues at each level of the 5G-enabled D2D communications network. Introduction New use cases will be created and vertical industries supported in the Fifth Generation Mobile Network (5G). End-users will be able to access services at the edge from different service providers (SPs). The mobile network operator (MNO) and SP will also share infrastructure and security management while providing the service to the users. The Fifth Generation Mobile Network (5G) will also use network services such as device-to-device (D2D) communications as a underlay technology to push content to the edge [1,2]. The general security approach in solving security issues is using cryptographic techniques to achieve most security objectives. These cryptographic techniques should increase the reliability of security and privacy mechanisms in D2D communication, in form of anonymity, unlinkability, privacy, confidentiality, integrity, and authentication. These mechanisms should be lightweight due to mobile device computation and energy consumption constraints. In the past, D2D security was applied at the application layer, however, recently, network layer and physical layer security are new ways to achieve security objectives. For example, communication secrecy can be achieved at the lower layers without depending on higher layer encryption. With cryptographic methods such as the symmetric and asymmetric methods deployed at these layers, D2D communication security requirements can be achieved. Additionally, physical layer security can also be achieved by analysing and implementing the physical properties of wireless channels connecting D2D devices to provide secrecy capacity, channel-based key agreement, physical-layer authentication, and privacy-preserving anonymity [3]. 
• The security of 5G and D2D communications is explored by investigating the UE authentication and authorisation procedures;
• A security framework is proposed that addresses security at different levels of the network for D2D communications in 5G and beyond;
• The security model that applies different security mechanisms for network service delivery in 5G is explained;
• An integrated security solution for securing network services for 5G-enabled D2D communications, incorporating verified and evaluated security protocols in the proposed security framework, is explored.
The rest of this article is structured as follows: an overview of D2D security in the 5G network is presented in Section 2. While Section 3 introduces the design and modelling of the security framework, an integrated security solution for D2D in 5G is presented in Section 4. This article is concluded in Section 5, with some recommendations.
Related Work
Security challenges in 5G-enabled D2D communication can be addressed using infrastructural and information-centric security mechanisms to protect devices, the communication channel and the network services. The New Generation NodeB (gNB) assists in establishing the connectivity of D2D devices and is involved in the distribution of security information such as keys and certificates, which extends the decentralised security-centric methods into the D2D architecture; however, the gNB acts as the trust authority.
Security Mechanisms
To achieve the main security objectives, which are authentication, authorisation and secure data sharing in a 5G-enabled D2D communication network, multiple security techniques are deployed. Mobile networks adopted a service authorisation model that provides default services to every subscribed user, whereby implicit access authorisation is given to registered user equipment (UE) upon successful primary authentication. Service authorisation in legacy mobile networks, such as the Fourth Generation (4G), was based on the static subscription of a user. Moreover, each UE's authorisation matrix is kept in the home network (HN) and then downloaded to the serving network (SN) [6]. The SN then utilises the received permission matrix to grant the authenticated UE access to the services provided by the SP. The standardisation and adoption of a static SP-based authorisation model have proven beneficial from an interoperability standpoint when applied to a market with a limited set of services supplied via wireless networks managed by one or two MNOs. In 5G, the UE will be authenticated to access the HN and authorised to access services in the HN and the data network (DN) to support multiple stakeholders. For new services, the authentication mechanism was decoupled from authorisation, and new authorisation processes were established. Network slicing provided by Network Function Virtualisation (NFV) and Software-Defined Networking (SDN) technologies is used in 5G to provide a diverse collection of services. Because of the projected large range of service products and connected devices, a service authorisation architecture is desirable that allows the delivery of services from several infrastructure providers using an SP-based authorisation mechanism, enabling existing implicit service authorisation while also protecting SPs from unauthorised service access [7].
The authentication and authorisation mechanisms that were used in this research adopted Authentication Key Agreement (AKA) and access control methods to address security in 5G-enabled D2D communications networks. These protocols' security properties are derived from the security requirements of the system model which are secrecy, authentication, integrity, confidentiality and privacy [2]. Authentication There are two authentication procedures specified in the 5G standard [6], i.e., primary authentication with two methods, namely 5G-AKA and Extensible Authentication Protocol (EAP)-AKA' and secondary authentication based on the EAP framework, which is an important step for 5G to become an open network platform. The UE and network authentication methods in 5G are classified as primary authentication. It is comparable to that used in the legacy systems, however, in 5G, the HN has been given more control during the authentication procedure. This procedure has an in-built home control, which allows the HN to be notified when the UE is authenticated in an SN and to make the final decision on mutual authentication with the UE, whether it agrees with the message exchange and verification process [6]. This applies to the authorisation process for non-3GPP technologies such as IEEE 802.11, due to it being independent of radio access technology. Secondary authentication provides secure communication between UE and DN outside the mobile operator domain. EAP-based authentication techniques and related credentials can be utilised for this. The UE can be authenticated with DN and obtain authorisation on establishing a data path from the operator network to DN, assisted by the HN Session Management Function (SMF). In this case, the DN could be a third-party SP. The DN might be providing data services such as operator services, Internet access or content services. The DN function has been mapped onto the third party domain in 5G architecture because of secondary authentication provided by DN Authentication Authorisation Accounting (AAA) servers [8]. In another applicable scenario, the HN might provide infrastructure services via network slices to other MNOs or SPs, even though they are in the same network domain; however, the service and security provision are handled by another party, therefore secondary authentication could be applied to internal DN [9]. The primary and secondary authentications are discussed in detail in [10][11][12][13], respectively. Mutual authentication is achieved when both parties confirm each other's identities and agree on a session key. The access security for the New Generation Radio Access Network (ngRAN) and 5G Core Network (5GC) involves mutual authentication between the HN and UE, key derivation for authentication, access network, non-access stratum, radio resource control security and non-3GPP access [6,10]. It provides integrity, ciphering and replay protection of signalling within the 5G network. The UE and 5G network mutual authentication rely on primary and secondary authentication procedures for accessing services in 5GC and from third-party SP/external DN, respectively. The 5G system supports mutual AKA between the UE and SN authorised by the HN, enabling the UE to securely access the HN via SN. The 5G-AKA or EAP-AKA' methods are mandatory for the 5G primary authentication procedure and the only authentication methods supported by UE and SN, for private networks' EAP framework, should be used as specified in [6] and as shown in Figure 1. 
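As a rough illustration of the challenge-response principle that underlies AKA-style primary authentication, the sketch below shows a network issuing a random challenge and both sides deriving a session key from a pre-shared long-term key. This is not the 3GPP-specified 5G-AKA or EAP-AKA' message flow or key hierarchy (which involves the SEAF/AUSF/ARPF, sequence numbers and anonymity keys); it is only a minimal, generic analogue using Python's standard library.

import hmac, hashlib, os

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Long-term key K, provisioned in the USIM and in the home network's credential store (ARPF role).
K = os.urandom(32)

# Network side: issue a random challenge (analogue of RAND) and compute the expected response.
rand = os.urandom(16)
expected_res = hmac_sha256(K, b"RES" + rand)

# UE side: compute the response and a session key from the same inputs.
ue_res = hmac_sha256(K, b"RES" + rand)
ue_session_key = hmac_sha256(K, b"KEY" + rand)

# Network side: verify the response in constant time, then derive the same session key.
if hmac.compare_digest(ue_res, expected_res):
    nw_session_key = hmac_sha256(K, b"KEY" + rand)
    assert nw_session_key == ue_session_key
    print("UE authenticated; shared session key established")
else:
    print("authentication failed")

In the real procedure the network also proves itself to the UE via an authentication token (AUTN), and the derived keying material forms a hierarchy of keys rather than a single session key.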
The 5G-AKA and EAP-AKA' are discussed in [6,10,11,14,15].
Authorisation
Mobile networks implicitly authorise service access after authentication. Generally, for authorisation, access control can be used to implement permissions and access rights by protecting access to an object. When a subject wants to access an object, the subject's name is checked against a list; if it is on the list, then access is granted [16]. Conventional access control approaches to provide service access authorisation to the system have been proposed in related work and include Role-Based Access Control (RBAC) [17], Discretionary Access Control (DAC) [18] and Attribute-Based Access Control (ABAC) [19]. Such access control mechanisms sometimes require additional techniques, such as Encryption-Based Access Control (EBAC), to provide robust and efficient authorisation for complex systems, including heterogeneous networks (HetNets). However, due to the complex characteristics of 5G, they are unable to provide a controllable and efficient mechanism to meet the criteria of 5G network service authorisation [12]. RBAC is a framework for specifying user access authorisation to resources, roles and responsibilities, and it follows principles such as the separation of duties, least privilege and administrative activity segmentation. In contrast, in ABAC, access control policies are developed by directly linking attributes with subjects. To achieve fine-grained access control, an efficient ABAC authorisation technique is employed based on user attributes, and the access control authority grants the access rights. Approaches based on Capability-Based Access Control (CBAC) have been suggested as a possible solution for the 5G network. CBAC uses an unforgeable token that designates access to a resource in the form of abilities according to a set of rights [20]. Capabilities are a two-pronged approach to access control, in which each subject is assigned a capability list that specifies each object and the actions that the subject is authorised to perform on it. The access matrix is stored by row in the metadata of the object [16]. The subject presents a capability to the service server (SS) to obtain access to an object, and the capability is transferable and non-forgeable. Local SPs could perform the CBAC, capability token validation and access right authorisation processes. This can be accomplished by locally implementing permission processes on distributed edge devices, making it feasible for D2D communications. Many access control systems for mobile network applications have adopted capability-based methods, but this has raised a few issues such as capability propagation and revocation [21]. With in-network caching, content objects may not always arrive from their original producer, such as the SP, and content security cannot be considered in the traditional mobile network model based on secure wireless or point-to-point channels [22]. This implies that content must be encrypted using EBAC to prevent unauthorised access, invalid disclosure or modification by unauthorised parties. By offering a framework for delivering access permissions to services, the existing access control mechanisms reflect a good conceptualisation of authorisations. All these access control policies can be implemented independently or as an integrated access control solution. The authorisation mechanism described in [6] uses the OAuth 2.0 framework as defined in RFC 6749 [23].
It states that client credentials should be used as grants and access tokens shall be in JavaScript Object Notation (JSON) web token format, which can be protected with JSON web signature in the form of a digital signature or message authentication code (MAC) built on JSON web signature [24]. Access Rights Delegation Users can be assigned access permissions in the form of delegation, which is the process of assigning access rights to a user by either an administrative user or another user. The administrator user does not need to be able to use the access right, but a user delegation must be able to use the access right [25]. For authorisation and capability revocation management, a federated delegation method can be used in the capability development and propagation workflow. This could overcome issues in the access control strategy processing of a hybrid security mechanism by combining ABAC and CBAC with federated identity (FId) in a content-aware mobile network such as 5G [12]. Moreover, delegating some authentication and authorisation activities to other security domains facilitates 5G security policies and ubiquitous services access in different domains from multiple SPs. Processing capability validation in the HN and third-party SPs offers a D2D communications access control mechanism that is flexible, elastic, context-aware and fine-grained [26]. This inter-domain delegation and access authorisation enable 5G-enabled D2D communications security to be implemented beyond static authorisation. In addition, the authors in [27] introduced a framework that proposed a self-delegation protocol for device authentication and proactive handover authentication using a delegated credential for unified network-and service-level authentication for wireless access. Two authentication and key agreement protocols were introduced as part of a security framework to secure transactions at the network and service levels in [28]. In a heterogeneous system such as 5G for multi-server collaboration, privacy protection is crucial, as presented in [29], so the authors used blockchain to develop heterogeneous multi-access edge computing (MEC) systems to offer privacy topology protection. The authors in [30] developed a privacy-preserved, incentive-compatible and spectrum-efficient framework based on blockchain that considers human-to-human spectrum utilisation and machine-to-machine communication. A framework for the Internet of Vehicles (IoV) architecture model and an authentication-based protocol for smart vehicular communication using 5G are both suggested in [31]. The comparison between some related work is shown in Table 1 in order to highlight the key differences between the other pre-existing security frameworks for heterogeneous networks security and the proposed conceptual framework in this research. It outlines the authors, their descriptions, and the variations among a number of criteria, including key hierarchy, protocols interface, privacy preservation, authentication, authorisation, single sign-on (SSO), formal verification and evaluation. As discussed in the related work, some security features such as authentication, authorisation and permission delegation have been used in different D2D communications or 5G independently. However, there has been a lack of a framework that considered a multilayered security solution for a 5G mobile network including the D2D communication as a layer of the network. 
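To make the token-based authorisation discussed above more concrete, the sketch below builds and verifies a compact, MAC-protected capability token in the spirit of a JSON web signature with HMAC-SHA256, carrying a subject, a resource and a set of delegated rights. It is an illustrative toy and not the OAuth 2.0 / JSON web token profile mandated in [6]; the field names and helper functions are invented for the example (expiry checking is omitted).

import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(key: bytes, claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(key: bytes, token: str):
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None                      # tampered or forged capability
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))

sp_key = b"shared-secret-between-SP-and-service-server"
token = issue_token(sp_key, {"sub": "ue-001", "res": "content/video42",
                             "rights": ["read", "share"], "exp": 1735689600})
print(verify_token(sp_key, token))       # the service server (SS) validates the capability locally

Because validation needs only the shared key, such a token can be checked by a local SP or edge node without contacting the issuing network, which is the property that makes capability-style authorisation attractive for D2D service sharing.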
With 5G's unique characteristics, the promise of integration with other networks and pushing services to the edge, the proposed framework intends to provide an integrated security solution for D2D communications in 5G and beyond that is interoperable, verified and evaluated. Table 1. Comparison of related security frameworks with the proposed framework against the criteria of key hierarchy, protocols interface, privacy preservation, authentication, authorisation, SSO, formal verification and evaluation: [27] proposed a framework with self-delegation and handover protocols; [28] (2014) introduced a framework with two AKA security protocols; [29] (2020) presented a blockchain for privacy preservation in MEC systems; [30] (2020) presented a framework that provides privacy preservation and is spectrum-efficient; [31] (2020) proposed a framework for the IoV architecture model and an authentication protocol; and the framework proposed in this article.
Proposed Network Service Security (NSS) Framework
This section presents the proposed security solution and explains how protocols coexist and interact with each other in the context of the framework. This article adopts a network service abstraction concept from [32], the system architecture from [33] and the security architecture from [6]. The UE registers with the HN and starts receiving roaming services from a visiting network (VN). The network services consist of services that rely on other services, such as D2D communications and information-centric networking (ICN). This would require integrating 5G with other network architectures such as ICN, content delivery networks (CDN) and cloud computing [2,34]. The UE would register with the MNO or SP and subscribe to such services, allowing it to connect to the network and access services from the HN and other SPs in different domains. After being authenticated to access the network, the UE may need to execute a secondary authentication with the SP, which authorises the UE to access its services as well as authorising the UE to perform other activities such as data caching and sharing with other UEs. D2D communications can be deployed to support different use cases such as traffic offloading, location-based services and vehicle-to-everything (V2X) communications. In addition, D2D direct communication could be established for content sharing and gaming; therefore, the communication must be established using a secure and efficient method with minimal involvement of the gNB [2,26]. The MNO/SP controls the service subscription, access and content retrieval authorisation as well as enabling the normal service operation of the cellular network. However, the SP in 5G could be the MNO, a third party, or another SP that uses the MNO's infrastructure as a tenant via network slicing [26]. The gNB controls the UE in cellular coverage and the communications between two devices, whilst the D2D devices control the UE in the out-of-coverage scenario. Moreover, the MNO is in charge of the user's network access, connection setup, resource allocation and security management. The MNO/SP may block the UE from accessing the services or hide their visibility.
To deliver inter-operator D2D services, the various networks should sign an inter-operator agreement. The communication channels between the UE and the networks, as well as between the D2D devices, are all susceptible to attacks, as the HN and VN may also be interested in eavesdropping on D2D communications [35]. It should be mentioned that the content access and retrieval process, which includes content discovery and distribution, was also considered in this study. In this article, an approach that integrates both infrastructure-centric and information-centric security services is proposed. The hybrid security framework will focus on: • Information-centric security services: providing data confidentiality, integrity and availability; • Infrastructure-centric security services: providing entity authentication and access control of the user to the network and services. Network Service Security Architecture To address the security threats in 5G-enabled D2D communication and to provide the secure delivery of network services, the proposed security framework assumes that network access security has been achieved. The main concern is the secure access of services by the UE from the SP. The UE should be able to obtain authorisation to access and share the data with another UE, hence achieving service security. The verification of the authenticity, integrity and provenance of the named data object against the producer must be performed before the UE is granted access. Another concern is whether the right data are being published and whether access can be restricted even when out of coverage. The UE also should be able to share data without involving the HN even during coverage, which addresses one of the D2D security problems. After a successful authentication procedure with the network, the UE can request access to services of the SP via the HN, and the SP verifies the UE and grants it access. Security is implemented by various security mechanisms, which should be interoperable with each other. Before addressing security at the different levels, these levels must be defined and the security model function of each level has to be specified. To address the security issues of accessing the network and services, a unified modular architecture is required, as shown in Figure 2, to support the proposed security framework. The generic architecture consists of the following security entities, which are modified according to the various security models: • UE: the end-user's device that is trying to access the services; • HN AAA servers: consisting of the security anchor function (SEAF), acting as the SN-side authenticator, and the Authentication Server Function (AUSF) for authentication by the HN if the service is in a home-controlled environment. From a network perspective, the Authentication Credential Repository and Processing Function (ARPF) maintains keys and other security contexts that are utilised for primary authentication [6]; • External AAA servers: for service authorisation and secondary authentication in local and external DNs; they authenticate and authorise the UE to access the service and handle permission delegation so that it can share with other UEs; • SS: a server storing the content/services that the UE is trying to access, which could be controlled either by the HN as an internal SP or by an external SP. Security Modelling This subsection discusses security modelling and a multilayered approach in a modular framework. The secure services model allows the UE to securely access network services from the network and SP.
The wireless network is under the control of the 5G network, whose root certificate is kept in the Universal Subscriber Identity Module (USIM) [6]. The 3GPP standard specifies that the UE and the ARPF share a long-term key K; through a challenge and response, this shared key, together with other information including identification information, is used by the ARPF to generate session keys in collaboration with the SEAF and AUSF, as explained in [10]. The AUSF is in charge of the mutual authentication of the network and the UE; after initial authentication, the communication link with the HN gNB is left open until it is disconnected. If the UE moves to a different gNB, it will require re-authentication, and the generated keys and security information might be reused during the handover, which is handled by the ARPF in both the HN and VN; this is out of the scope of this work. In this case, the UE obtains access to the services after connecting to the wireless network. 5G is based on a security architecture that is host-centric, whereas the CCN is information-centric; hence the hybrid approach of the security framework. Normally, the security of the content relies on the encryption of the content object, as the producer must register the content object with the database owned by the SP, achieving the validation and authentication of the named data object. Multiple methods are used to address the complexity of this system model. The security modelling in Figure 3 leverages the security principles defined by 3GPP [6] by applying the authentication and authorisation methods that grant the UE service access and permission to engage with other UEs. A multilevel framework is proposed to align with the network service abstraction and the 5G protocol stack [32]. It comprises Network Access Security (NAC), Service Level Security (SLS) and D2D Security (DDS) levels, which are parallel with the network service abstraction. It also explores physical layer security (PLS) to highlight the need for security at each level and for integrated solutions. This security framework's novelty lies mainly in the SLS and DDS levels, as PLS and the NAC have been extensively studied in related work. The PLS and NAC provide security for physical and network access; the SLS provides security for service access; and the DDS provides security for D2D service sharing. The framework considers the security links between all the security levels and, in addition to the security entities, is represented as a unified security model. This security framework intends to provide the UE with secure service access and the sharing of these services with a UE from another network without losing its initial network access. The security framework also intends to provide secure communication and sharing between UEs without the need for the HN as the central authority. Physical Layer Security This level of security is concerned with PLS aspects such as the spectrum, resource allocation, interference and signal. Even though PLS is out of scope, this article gives a brief overview of PLS solutions that rely on wireless channel characteristics such as interference, signal and fading, whilst the quality of the attacker's signal can be degraded through keyless secure communication using signal engineering and processing techniques. Different studies have explored PLS in 5G, and detailed reviews of different PLS techniques are presented in [36][37][38].
For example, artificial noise injection is used to degrade the eavesdropper's channel quality, and anti-eavesdropping signal approaches are used to align multiple users' signals at the eavesdroppers. Secure beamforming, on the other hand, improves the spatial distribution properties of the transmitted signal, resulting in a greater difference in channel quality between legitimate users and eavesdroppers. Network Access Security The NAC was well defined by 3GPP in the 5G security standards and has been extensively studied in various related works [10,14,39-44]. The 3GPP specifies that, for the UE to access the network, it requires a primary authentication process, which addresses the NAC. This includes the AKA protocol, which enables the UE and HN to authenticate each other. Mobile subscribers will be able to access network services through the ngRAN using their UEs, taking advantage of a variety of wireless communication technologies. As a result, secure access is critical to the 5G design principles, and 3GPP has defined the security requirements in [6] as well as the system architecture in [45] to support its objectives. The UE's connections should be secured by 3GPP's standardised security mechanisms. Both subscribers and MNOs require these mechanisms to provide security guarantees, such as the authentication and trust of involved parties, as well as the confidentiality and integrity of the user's data. For the UE to access the network, the UE and the network must mutually authenticate, and then the UE must further authenticate using the security context to access services provided by the SP, which might be the same network provider or a third party reached via the DN function. Through layered access and security, the UE can gain access to network services. The AKA protocol mutually authenticates the UE and the HN and establishes a session key for the UE and SN with which they can communicate securely over a wireless channel, providing network access security. As mentioned earlier, the 5G standard recommends the 5G-AKA and EAP-AKA' protocols as the preferred methods for primary authentication to address the most significant security requirements in 5G [6]. The network-level authentication is responsible for verifying that the UE has access to the right network when it connects to it [6]. In 5G, the messages between the UE and the access network over the radio interface are to be encrypted. The security at the network level was extensively investigated in [39,42]. Furthermore, under the 3GPP definition, the AKA messages were implemented according to a security standard which outlines certain security properties that must be met. The security context obtained from the authentication at this level can be utilised for further authentication when the UE wants access to other services. The security at this level is concerned with authenticating devices in the network and with the mobility of the UE. In addition, handover authentication requires the UE and the new AN to re-authenticate each other. To access the network services provided by the MNO, the UE and the network must mutually authenticate each other to establish a secure channel of communication, trust and the authenticity of the device in the network. After this level of security assurance, the UE can request access to other services from the servers in the CN or from a third-party service provider.
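As a rough illustration of the challenge-and-response shape of this primary authentication, the sketch below mutually authenticates two parties over a pre-shared long-term key and derives a session key. It is a simplified stand-in, not the standardised 5G-AKA or EAP-AKA' message flow: the token constructions, field names and use of HMAC-SHA-256 are assumptions made to keep the example short.

```python
import hmac
import hashlib
import os

def mac(key: bytes, *parts: bytes) -> bytes:
    """Toy message authentication code over a list of fields."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Long-term key shared by the UE's USIM and the home network (illustrative value).
K = os.urandom(32)

# HN side: issue a challenge, a network authentication token and the expected response.
rand = os.urandom(16)
autn = mac(K, b"AUTN", rand)   # lets the UE check that the network is genuine
xres = mac(K, b"RES", rand)    # expected UE response

# UE side: verify the network, then answer the challenge.
assert hmac.compare_digest(autn, mac(K, b"AUTN", rand)), "network not authentic"
res = mac(K, b"RES", rand)

# HN side: verify the UE's response; both ends can now derive the same session key.
assert hmac.compare_digest(res, xres), "UE not authentic"
k_session = mac(K, b"SESSION", rand)
print("mutual authentication complete, session key:", k_session.hex()[:16], "...")
```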
Service Level Security The user must be verified to use the services at this level, and it must be verified that the user is attempting to access the services with the right permissions. As previously stated, this research focuses on SLS due to the newly emerging services being promised in 5G, in addition to PLS and the NAC having been extensively investigated. With 5G extending the mobile network's potential through the use of additional resources, dense connectivity, and the enabling of vertical industries, service provision is becoming more challenging, especially from a security standpoint. However, the NAC is revisited in the discussion of SLS, as both the UE and the network must be mutually authenticated for the UE to join the network. Security at the service level is concerned with authentication and authorisation between the UE, MNO and SP, which gives the UE access to the services and the SP the ability to provide the services securely [26]. Since 5G is a large-scale HetNet in nature, some studies on service security are still relevant to this research. The service level protocols for the mobile network in [27,28] addressed security concerns when the UE is accessing services provided by the SP; these were based on IP-based and future networks. In [46], the authors proposed an open-architecture-based service level protocol for mutual authentication between the UE and SP. After establishing connectivity to the network, the UE must be authenticated and authorised to use the services, which is addressed by SLS [12]. SLS also requires mutual authentication between the UE and the SP to establish a secure communication channel, with a focus on zero trust [11]. Device-to-Device Security D2D communications' security was addressed to some extent in 4G and has to be explored further in 5G, as 5G uses D2D communications as an underlay technology critical to its functionality and to attaining its key objectives [1,3,32,47,48]. This study is concerned with service security and how the existing D2D security can be improved. How will the UE deal with the data accessed after being granted access to the network and service? To date, D2D authentication and authorisation have depended on various security procedures, requiring the UE to be authenticated and authorised each time it disconnects from the network. Furthermore, in out-of-coverage conditions, the UE is currently not able to share restricted data with another UE. What happens to data on the UE, and how it can be securely exchanged with or without network support, is the subject addressed in this study. As a result, the DDS security level will attempt to address these concerns [2,26]. The Protocols The security protocols defined in the framework to address security on the different levels of the system model, as shown in Figure 4, were verified for security guarantees and evaluated for performance effectiveness. The framework incorporates various security protocols to provide an integrated security solution for a 5G-enabled D2D communication network. The protocols are formally analysed in related work, and they are as follows: Network Access Security NAC protocols and related work are explored in [10,15]. • 5G-AKA Protocol: enables the UE and the HN to establish mutual authentication and anchor keys [10]. • EAP-AKA' Protocol: enables the UE and the HN to establish mutual authentication and anchor keys [15]. • Secondary Authentication Protocol (SAP)-AKA Protocol: enables the UE and the SP to establish mutual authentication and anchor keys [11].
• Network Service-Federated Identity (NS-FId) Protocol: enables the UE and the SP to achieve mutual federated authentication and authorisation [12]. • Data Caching and Sharing Security (DCSS) Protocol: allows the UE to cache and share data accessed from the SS [13]. Device-to-Device Security DDS protocols and related work are presented in [26]. • Device-to-Device Service Security (DDSec) Protocol: provides authentication and authorisation to share the cached data between two UEs in proximity, with network assistance. • Device-to-Device Attribute and Capability (DDACap) Protocol: provides authentication and authorisation to share the cached data between two UEs in proximity without network assistance. Formal Verifications and Performance Evaluation of the Security Protocols These protocols are formally verified for security guarantees that align with the security requirements of the unified modular architecture in Figure 2, as demonstrated in [10-13,26]. Formal Verification Approach Formal methods and automated verification have previously been applied to security protocols such as AKA, but often provide weak assurances due to the use of strong abstractions, protocol simplifications and limitations in the interpretation of the properties. To give solid guarantees, formal approaches were already used to examine security protocols in [12,14,15,43]. Most verification approaches and tools struggle with security protocol features such as those employed in the proposed framework. This is due to the use of cryptographic primitives such as the sequence number (SQN) and exclusive-OR (XOR), which have algebraic features that make symbolic reasoning difficult [14]. As a result, some tools are incompatible with manual proof checks. Many automated verification tools can be used for this security analysis, including Automated Validation of Internet Security Protocols and Applications (AVISPA) [49], Tamarin [50], and ProVerif [51]. Performance Evaluation Based on Analytical and Simulation Approaches To check the effectiveness of these protocols, performance evaluation was carried out using the analytical and simulation methods presented in [52]. The analytical model associates an enhanced label with each communication and each decryption, based on the ProVerif and applied pi-calculus processes used in the verification of the protocol. The performance parameters and metrics are displayed in Tables 2 and 3, and the outcomes for each level are represented by the applicable protocols in Table 4 for the analytical model. The simulation model is built on the NS-3 5G mmWave module [53,54] to replicate the current non-standalone deployment of 5G using 5G radio technologies and the LTE network. By evaluating efficiency, throughput and computational cost, the communication and processing costs associated with the protocols are taken into account. The performance parameters and metrics are displayed in Table 5 for computational cost, Tables 6 and 7 for communication cost and Tables 8 and 9 for the simulation model results at each level with a specific protocol. Furthermore, the formal security verification and performance evaluation approaches for this framework's underlying protocols were extensively explored in [10-13,26] and [52], respectively. The behaviour and cost of an algorithm are affected by the cryptographic scheme, the formally analysed security properties and the system model.
An example of this is the employment of symmetric or asymmetric cryptography in a mobile network with a variety of stakeholders from various security realms. As a result, utilising this method with this conceptual framework makes the important cost aspects obvious, helps in the development of security protocols and aids in the selection of cost-effective solutions. Other communications systems, in addition to mobile networks, can also use these techniques. An Integrated Security Solution The inclusion of security features in the suggested security framework is covered in this section. The proposed NSS framework covers security for network services in 5G-enabled D2D communications from the point at which a UE requests access to the network via wireless access to the point at which it is allowed to share the service with another UE. The goal of the NSS framework is to defend against the threats described in [2] by protecting the entities participating in communication and the data being communicated over communication channels in various security domains and scenarios. The NAC, SLS and DDS levels of the solution are as follows: • NAC provides primary authentication and is concerned with the security of the 5G access network. It safeguards the entities, the wireless data connection and the correspondence between the UE, SN and HN. • SLS offers secondary authentication and authorisation and is concerned with UE service authorisation. It safeguards information, entities and communication between the UE, HN and SP in several domains. • DDS addresses the security of D2D communication, enabling data sharing in both network-assisted and non-network-assisted communication, and offers authentication and authorisation between two UEs that are close to one another. Data, entities and communication between two UEs and across networks are all protected. An integrated security framework can be created by incorporating existing solutions into this secure framework. Each level of the security model is made up of security protocols from the following related work. In order to provide security at the NAC level, 3GPP standardised 5G-AKA and EAP-AKA' [6,10]. The authors in [11][12][13] proposed protocols that deal with security at the service level of 5G networks. In [26], the authors investigated D2D communication security and proposed two security protocols that cover many D2D scenarios. This security framework's underlying protocols address security at the network, service and D2D levels of communication. As illustrated in Figure 5, these security levels are connected by the protocol interfaces, and the protocols are encapsulated while addressing the security requirements from one level to another. These protocols employ or share some security contexts in order to provide an integrated security solution; nevertheless, this should not compromise security on another level or domain. Additionally, Figure 6 shows the hierarchy of keys and explains how keys are generated and shared in various contexts. Connection of Different Levels of the NSS Model in a 5G-Enabled D2D System When a user requests network access using a UE, the closest SEAF will start the primary authentication procedure. The procedure for the UE to access the network and services is covered in [32].
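To make the idea of encapsulated protocols passing security context from one level to the next more tangible, the following is a minimal sketch of how a context established at the NAC level might seed the SLS and DDS levels. The derivation function, labels and key names are assumptions chosen for illustration; they do not reproduce the key hierarchy of Figure 6 or any 3GPP-defined derivation.

```python
import hmac
import hashlib
import os

def derive(parent_key: bytes, label: bytes) -> bytes:
    """Toy derivation of a child security context from a parent one (not a 3GPP KDF)."""
    return hmac.new(parent_key, label, hashlib.sha256).digest()

# NAC level: primary authentication leaves the UE and network with an anchor key.
k_anchor = os.urandom(32)   # stands in for the key agreed via 5G-AKA or EAP-AKA'

# SLS level: the anchor context seeds secondary authentication/authorisation with the SP.
k_service = derive(k_anchor, b"SLS|sp-aaa|service-authz")

# DDS level: the service context seeds caching/sharing and D2D authorisation, so D2D
# security does not have to start from scratch, yet each level holds its own key.
k_d2d = derive(k_service, b"DDS|ue-a|ue-b|share")

print(k_service.hex()[:16], k_d2d.hex()[:16])
```

The point of the chain is that compromising a lower-level key does not have to expose the keys derived for other levels, which matches the requirement that sharing context between levels should not compromise security in another level or domain.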
According to the following descriptions, the security model aims to provide three secure connections, between the UE and HN, the UE and SP, and UE and UE, at various phases: • In phase 1, a primary authentication protocol is initiated when the UE presents a network access request. According to [6,10], the 5G-AKA or EAP-AKA' protocol can be selected as the AKA procedure between the UE and HN. • In phase 2, after primary authentication, the UE submits a service request that initiates a secondary authentication or authorisation process, which is handled by the SMF in the HN network and the SP AAA in the SP network. Depending on the registration status and the security guidelines outlined in [11,12], the SAP-AKA or NS-FId protocols may be utilised at this point. • In phase 3, the UE sends a request for data caching and sharing authorisation after getting access to the services, using the DCSS protocol described in [13]. • In phase 4, once the UE has been given permission to cache and share data, it can publish that data by broadcasting the data name to other nearby UEs, in this example UE A and UE B. The DDSec and DDACap protocols are invoked by another UE after an interested UE sends it the request, as defined in [26]. Federated Security in 5G This section covers the integration of federated security into 5G as well as how the UE performs SSO. The benefits of adopting FId in mobile communications and how it eliminates the need for the UE to continually authenticate and authorise services, including in roaming scenarios, were presented in [55], which also discussed federated identity management in 5G. When the suggested solution in this article is implemented in a 5G-enabled D2D communications network, SSO reduces the tokens and cached data of the security processes when the UE needs to re-authenticate to the network or perform handover authentication while roaming. The following steps show how federated security is used in 5G communication: • Step 1: Following network authentication, the UE requests service authorisation from the SP; • Step 2: The UE is forwarded by the SP through the SMF to the identity provider (IdP), which creates and assigns the FId to the UE; • Step 3: Using the UE's identity token, the IdP and UE carry out federated authentication operations; • Step 4: The UE uses the identity token to ask the SP for an access token, which the SP-AAA then issues along with a refresh token. SSO has now been accomplished; • Step 5: If the access token is legitimate, the SS will grant the UE's request for access to the service; • Step 6: Using the cached access token, the UE requests caching and sharing authorisation with other UEs after getting access to the service. An integrated security solution addressing security risks at various levels of the system model is provided by the interface between the underlying security protocols of the security framework using the supported security context, as shown in Figure 7. Conclusions A D2D communications network with 5G capability is made up of several systems backed by different technologies. This has prompted the creation of several security solutions that deal with security for particular applications or layers. In addition, new use cases for accessing other services from SPs in various domains are made possible by 5G network services, posing additional security difficulties. Consequently, a comprehensive security solution that handles these problems is required.
End-users will frequently access services from several SPs using their UE, which will present new security risks. Additionally, infrastructure and security management may be shared amongst SPs. This article examined security in D2D communications, 5G, and beyond. It described the system's levels, including those that have been addressed in related works and those which still require attention. A security framework was proposed that outlined the security levels and the entities involved in the authentication and authorisation processes. It established the underlying security protocols for the proposed security framework, which were formally verified for security guarantees and appraised for efficiency. As part of an integrated security solution, the underlying security protocols designed using the suggested framework were tailored to specific system levels and addressed various security challenges. The protocols, however, can also be used as standalone solutions, sharing some security context to comply with 5G security standards while permitting interoperability with third-party solutions without compromising security at any level. Current studies have focused on developing security solutions that apply to one layer without considering the security of the layer below or above. The future direction of the research in this article is to extend the framework's application to the next-generation mobile network and other systems such as the Internet of Things (IoT) and autonomous vehicles. This framework could be used to develop security mechanisms and evaluate their effectiveness and interoperability as an integrated solution. Conflicts of Interest: The authors declare no conflict of interest.
9,774.8
2022-07-22T00:00:00.000
[ "Computer Science" ]
Modular Synthesis and Biological Investigation of 5-Hydroxymethyl Dibenzyl Butyrolactones and Related Lignans Dibenzyl butyrolactone lignans are well known for their excellent biological properties, particularly for their notable anti-proliferative activities. Herein we report a novel, efficient, convergent synthesis of dibenzyl butyrolactone lignans utilizing the acyl-Claisen rearrangement to stereoselectively prepare a key intermediate. The reported synthetic route enables the modification of these lignans to give rise to their 5-hydroxymethyl derivatives. The biological activities of these analogues were assessed, with derivatives showing an excellent cytotoxic profile which resulted in programmed cell death of Jurkat T-leukaemia cells with less than 2% of the incubated cells entering a necrotic cell death pathway. Owing to their anti-cancer properties and their classification as drug-like compounds [13], extensive work has gone into the study of these compounds and their related analogues to explore and establish structure-activity relationships and the possible use of these lignans as lead compounds for therapeutics. Whilst previous work has explored the synthesis of these lignans and analogues thereof [14][15][16], mainly focusing on changing the substituents on the aryl rings [17], one area that has not been extensively investigated is the synthesis of C-5 substituted analogues of these butyrolactone lignans, represented by 4. We have previously shown that the acyl-Claisen rearrangement can be used to prepare disubstituted morpholine pentenamides 5 with high diastereoselectivity at the C-3 and C-4 positions, which correspond to the benzyl groups in the lactone scaffold (Figure 2) [18][19][20][21][22]. Furthermore, in our efforts to prepare a number of different lignan scaffolds [18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36], we have used amides such as 5 to prepare compounds including tetrahydrofuran lignans (e.g., galbelgin 6), aryltetralins (e.g., ovafolinin 7) and aryl dihydronaphthalene lignans (e.g., (-)-pycananthuligene B 8). We wished to explore the use of this methodology to synthesise butyrolactone lignans, as well as to probe the effect of adding a substituent at the C-5 position on the biological activity. The route would be convergent and modular, allowing for simple modification of the aromatic groups and resulting in the synthesis of a number of analogues. Results and Discussion In order to utilise the acyl-Claisen rearrangement to prepare the desired lactones, the corresponding allylic morpholines and acid chlorides first needed to be synthesised. Allylic morpholines 9a and 9b were synthesised in five steps from 4-allyl-1,2-dimethoxybenzene 10 and safrole 11 (Scheme 1), respectively.
Firstly, allylic benzenes 10 and 11 were dihydroxylated using catalytic osmium tetroxide, giving 12 and 13, followed by periodate cleavage to give aldehydes 14 and 15. Aldehydes 14 and 15 were immediately used in a Wittig reaction with (carbethoxymethylene)triphenylphosphorane to exclusively give the E-isomers of α,β-unsaturated esters 16 and 17, in 55% and 56% yields, respectively, over three steps. The esters 16 and 17 were then reduced to allylic alcohols 18 and 19 using di-iso-butyl aluminium hydride (DIBAL-H) in excellent yields. Alcohols 18 and 19 were then converted to the corresponding allylic morpholines 9a and 9b by first generating a mesylate in situ, which then underwent substitution to give the allylic morpholines. The required acid chlorides were then synthesised in four or five steps from the commercially available benzaldehydes piperonal 20, 3,4,5-trimethoxybenzaldehyde 21 and vanillin 22 (Scheme 2). Acyl-Claisen rearrangements were undertaken using the two allylic morpholines 9a and 9b, which were reacted individually with the four acid chlorides 34a-d, using TiCl4·2THF as the Lewis acid, providing eight morpholine amides 35aa-bd in 42-95% yields. All amides 35aa-bd were obtained as single diastereomers with a syn-configuration between the C-2 and C-3 substituents (Scheme 3). All amides 35aa-bd then underwent dihydroxylation using osmium tetroxide and N-methylmorpholine N-oxide (NMO) to give cyclised 5-hydroxymethyl lactones 4aa-bd. In all cases it was observed that only the 3,4-trans-4,5-trans-lactone was obtained. This configuration was confirmed through NOESY NMR analysis, depicted in Figure 3 with 4bb. We propose that only this isomer was obtained due to the preferential cyclisation of the 3,4-anti diol 36, leaving the polar uncyclised 3,4-syn diols 37, which were difficult to isolate. Upon dihydroxylation of amide 35bb at a larger scale and following isolation of lactone 4bb by column chromatography, a small sample of the corresponding uncyclised diol 37 was able to be isolated. This diol 37 was subsequently cyclised using 2 M H2SO4 in methanol to give the corresponding C-5 epimer, epi-4bb, confirming this hypothesis (Scheme 4). Finally, to deprotect the benzyl-protected lactones 4ad and 4bd to their respective alcohols, they were subjected to hydrogenolysis to give 4ae and 4be in excellent yields.
Transformation of the C-5 hydroxymethyl analogues 4 into dibenzyl butyrolactone lignans 1 was achieved via reduction using LiAlH4 to the corresponding triols 38aa-bd, followed by periodate cleavage, forming lactols 39aa-bd. These lactols 39aa-bd were then oxidised using Fétizon's reagent [37,38] to give racemic samples of dibenzyl butyrolactone lignans 1aa-bd, including the known natural products arcitin 1aa, bursehernin 1ab, (3R*,4R*)-3-(3′′,4′′-dimethoxybenzyl)-4-(3′,4′,5′-trimethoxybenzyl)dihydrofuran-2(3H)-one 1ac, kusunokinin 1ba, hinokinin 1bb, and isoyatein 1bc. Additionally, the phenolic lignans buplerol 1ae and haplomyrfolin 1be were produced by the debenzylation of 1ad and 1bd, respectively. Several of the synthesised compounds were then tested for their anti-microbial and cytotoxic activities. All tested compounds were found to be inactive against Staphylococcus aureus and Escherichia coli, showing little to no antimicrobial activity, while the compounds were shown to exhibit antiproliferative effects against Jurkat T-leukaemia cells, as well as effects on cell cycle progression (Figure 4). While the synthesised naturally occurring dibenzyl butyrolactones, arcitin 1aa, bursehernin 1ab, and (3R*,4R*)-3-(3′′,4′′-dimethoxybenzyl)-4-(3′,4′,5′-trimethoxybenzyl)dihydrofuran-2(3H)-one 1ac, boasted the best activities, the 5-hydroxymethyl analogue 4bb had similar potency. Compound 4bb was shown to have the best activity of all of the 5-hydroxymethyl analogues tested, inducing apoptosis, as evidenced by the presence of cells in the early and, predominantly, the late apoptotic stages (Figure 4). Additionally, the compounds demonstrated an effect on cell cycle progression. A significantly greater number of 4N cells was present following treatment, with compound 4bb in particular causing a significant increase in 4N cells (Figure 4D,E). During the cell cycle, DNA is replicated in the S-phase, going from 2N in G1 to 4N by the end of this phase. The DNA content in cells then remains at 4N during the G2 and M phases, before cytokinesis at the end of the M-phase. The observation that there was an increase in 4N cells indicates that it is likely these cells have arrested in G2/M and will not re-enter the next G1-phase after this mitotic slippage. This is in line with published cell cycle data following treatment with other lignans [39,40]. Furthermore, our compounds showed minimal levels of necrosis, less than 2% (except 4ba with 7%), suggesting that the cells are in fact entering programmed cell death pathways, which is considered the most effective and non-inflammatory mechanism of cancer-cell death. In conclusion, the synthesis of dibenzyl butyrolactone lignans utilising the acyl-Claisen rearrangement has been accomplished and represents a new, modular, and convergent method towards the synthesis of this class of natural products. Furthermore, this route gives rise to the previously unexplored 5-hydroxymethyl derivatives 4 of these natural products. The biological activities of this new set of derivatives were assessed, with one derivative in particular, 4bb, showing a superior cytotoxic profile and resulting in cell cycle arrest and programmed cell death of Jurkat T-leukaemia cells, with less than 2% of the incubated cells entering a necrotic cell death pathway.
General Methods All reactions were carried out with oven-dried glassware and under a nitrogen atmosphere in dry, freshly distilled solvents unless otherwise noted. Diisopropylethylamine was distilled from CaH2 and stored over activated 4 Å molecular sieves. All melting points for solid compounds, given in degrees Celsius (°C), were measured using a Reichert-Kofler block and are uncorrected. Infrared (IR) spectra were recorded using a Perkin Elmer Spectrum 1000 FT-IR spectrometer. The NMR spectra were recorded on a 400 MHz spectrometer. Chemical shifts are reported relative to the solvent peak of chloroform (δ 7.26 for 1H and δ 77.16 ± 0.06 for 13C). The 1H-NMR data are reported as position (δ), relative integral, multiplicity (s, singlet; d, doublet; dd, doublet of doublets; ddd, doublet of doublet of doublets; dt, doublet of triplets; dq, doublet of quartets; t, triplet; td, triplet of doublets; q, quartet; m, multiplet), coupling constant (J, Hz), and the assignment of the atom. The 13C-NMR data are reported as position (δ) and assignment of the atom. The NMR assignments were performed using COSY, HSQC and HMBC experiments.
High-resolution mass spectroscopy (HRMS) was carried out by electrospray ionization (ESI) on a MicroTOF-Q mass spectrometer. Fétizon's reagent was prepared following a literature procedure [41]. Unless noted, chemical reagents were used as purchased. General Procedure A: Acyl-Claisen To a stirred suspension of TiCl4·2THF (1 mmol) in CH2Cl2 (5 mL), under an atmosphere of nitrogen, was added a solution of allylic morpholine (1 mmol) in CH2Cl2 (2.5 mL) followed by dropwise addition of iPr2NEt (1.5 mmol). After stirring for 10 min, a solution of acid chloride (1.2 mmol) in CH2Cl2 (2.5 mL) was added dropwise and the resultant mixture stirred for the specified time. The reaction mixture was quenched with aqueous NaOH (12 mL, 1 M) and the aqueous phase extracted with CH2Cl2 (3 × 10 mL). The combined organic extracts were washed with brine (6 mL), dried (MgSO4), the solvent removed in vacuo and the crude product purified by column chromatography. General Procedure B: Dihydroxylation To a stirred solution of morpholine pentenamide (1 mmol) in tBuOH/H2O (1:1, 20 mL) or tBuOH/H2O/THF (1:1:1, 30 mL) was added NMO (3 mmol). A solution of OsO4 (0.08 mmol, 2.5% w/v in tBuOH) was then added dropwise and the resultant mixture stirred for the specified time. The mixture was quenched with saturated aqueous Na2SO3 (30 mL) and stirred for a further 1 h. The aqueous phase was extracted with ethyl acetate (3 × 20 mL), the combined organic extracts washed with aqueous KOH (5 mL, 1 M), dried (MgSO4), the solvent removed in vacuo and the crude product purified by column chromatography. General Procedure C: Lithium Aluminium Hydride Reduction To a stirred suspension of LiAlH4 (1.4 mmol) in THF (10 mL), under an atmosphere of nitrogen at 0 °C, was added a solution of lactone (1 mmol) in THF (10 mL) and the mixture stirred for the specified time. After warming to room temperature, the mixture was quenched with the addition of water (30 mL) and the aqueous phase extracted with ethyl acetate (3 × 40 mL). The combined organic extracts were washed with brine (25 mL), dried (MgSO4), and the solvent removed in vacuo. General Procedure D: Periodate Cleavage To a stirred solution of triol (1 mmol) in MeOH/H2O (3:1, 50 mL) was added NaIO4 (1.2 mmol) and the resultant mixture stirred for the specified time. The reaction mixture was quenched with brine (40 mL) and extracted with ethyl acetate (3 × 80 mL). The organic layers were combined, washed with water (2 × 40 mL), dried (MgSO4), and the solvent removed in vacuo to give the crude product, which was purified by column chromatography if necessary. General Procedure E: Fétizon's Oxidation To a stirred solution of lactol (1 mmol) in toluene (60 mL), under an atmosphere of nitrogen, was added Fétizon's reagent (2 mmol) and the mixture heated at reflux for the specified time. The reaction mixture was allowed to cool and filtered, the solvent removed in vacuo and the crude product purified by column chromatography. General Procedure F: Benzyl Deprotection To a stirred solution of benzyl ether (1 mmol) in MeOH (30 mL) was added 10% palladium on carbon (20% w/w) and the resultant mixture stirred under an atmosphere of hydrogen for the specified time. The reaction mixture was filtered through celite, washed with methanol (3 × 20 mL), the solvent removed in vacuo and the crude product purified by column chromatography if necessary (the 1H and 13C-NMR spectra of compounds are provided in the Supplementary Materials).
To a stirred solution of unsaturated ester 25 (4.13 g, 18.6 mmol) in ethyl acetate (30 mL) was added 10% palladium on activated carbon (0.4 g, 10% w/w). The solution was flushed with an atmosphere of hydrogen and stirred for 2 h. The reaction mixture was then filtered through a plug of celite and washed with ethyl acetate; the solvent was then removed in vacuo to give saturated ester 28 (3.9 g, 94%) as a yellow oil which was then used without further purification. To a stirred solution of phenol 28 (3.75 g, 16.7 mmol) in acetonitrile (40 mL), under an atmosphere of nitrogen, was added K2CO3 (6.9 g, 50.0 mmol) and stirred for 10 min. Benzyl bromide (6.0 mL, 50.0 mmol) was then added and the resulting mixture allowed to stir for 65 h. The reaction mixture was then quenched with addition of water (50 mL) and extracted with CH2Cl2 (3 × 30 mL). The organic phases were combined, washed with water (2 × 10 mL) and dried (MgSO4). The solvent was then removed in vacuo and the crude product purified by column chromatography (9:1 hexanes, ethyl acetate) to give benzyl ether 29 (4.38 g, 83%) as a colourless oil which was used immediately. To a stirred solution of ester 29 (4.3 g, 13.7 mmol) in methanol (30 mL) was added aqueous NaOH (55 mL, 1 M, 4 eq.) and stirred for 2. 3-(3′,4′-Methylenedioxyphenyl)propanoyl chloride (34b). To a stirred solution of carboxylic acid 30 (0.22 g, 1.2 mmol) in CH2Cl2 (3 mL), under an atmosphere of nitrogen, was added oxalyl chloride (0.2 mL, 2.3 mmol) dropwise and the mixture stirred for 4 h. The solvent was removed in vacuo to give the title compound 34b (0.24 g, quant.) as a green oil, which was placed under nitrogen and used without further purification. 3-(3′,4′-Dimethoxyphenyl)propanoyl chloride (34a). To a stirred solution of carboxylic acid 33 (0.24 g, 1.2 mmol) in CH2Cl2 (5 mL), under an atmosphere of nitrogen, was added oxalyl chloride (0.2 mL, 2.3 mmol) dropwise and the mixture stirred for 2.5 h. The solvent was removed in vacuo to give the title compound 34a (0.26 g, quant.) as a yellow oil, which was placed under nitrogen and used without further purification. Annexin V/PI Assay Following treatments at a cell density of 1 × 10^5 cells/well, the samples were centrifuged at 500 g for 5 min and the supernatant was removed. Cells were washed in 500 µL DPBS before addition of 100 µL of 1 × Annexin V binding buffer (BD Biosciences). A 5 µL volume of FITC-conjugated Annexin V (BD Biosciences) and 10 µL Propidium Iodide (BD Biosciences) was added and the cells were incubated in the dark for 20 min. Samples were diluted by addition of 400 µL 1 × Annexin V binding buffer before immediate analysis on an Accuri C6 Flow Cytometer (Becton Dickinson, Oxford, UK). Cell Cycle Analysis Following treatments at a cell density of 5 × 10^6 /well, cells were centrifuged at 500 g for 5 min and the supernatant was removed. The remaining cell pellet was vortexed while simultaneously adding 500 µL of 70% ethanol dropwise, fixing the cells and minimising clumping. The samples were incubated at 4 °C for 30 min, and then centrifuged at 1000 g for 5 min. The supernatant was discarded, and the pellet was re-suspended in 500 µL DPBS. The samples were centrifuged again at 1000 g for 5 min, and the supernatant was removed a final time. The pellet was resuspended in 50 µL RNase A (100 µg/mL stock; Roche, UK) and 200 µL PI (50 µg/mL stock; Sigma, UK).
The samples were analyzed on an Accuri C6 flow cytometer (Becton Dickinson) and data was modelled and interpreted using ModFit Analysis Software, version 5.0 (Verity Software House).
4,621.4
2018-11-22T00:00:00.000
[ "Chemistry", "Biology" ]
Design of a Portable Potentiostat with Dual-microprocessors for Electrochemical Biosensors In this paper, we design and implement a portable potentiostat using dual microprocessors for the signal processing of electrochemical biosensors. In our design approach, one of the microprocessors is used to implement the programmable waveform generator, and the other microprocessor is used to measure the current of biosensors. The proposed potentiostat can perform general electrochemical analysis functions, including cyclic voltammetry, linear sweep voltammetry, differential pulse voltammetry, amperometry, and potentiometry. In the experiment, we adopt a commercial screen-printed electrode immersed in potassium ferricyanide solution to test the performance of the proposed potentiostat and compare the proposed potentiostat's measured results with a commercial potentiostat's (CH Instrument Model: CHI1221) under the same test condition. The experimental results show that the proposed potentiostat has the merits of good accuracy, low cost, low power consumption, and high portability. Introduction Electrochemical biosensors, which can detect a particular target biomolecule and produce a corresponding output change in the form of potential or current according to the biomolecule concentration, are commonly applied in DNA identification [1] and pH variation detection [2]. In electrochemical sensors, the potentiostat [3] is an important and indispensable device to maintain electrochemical stability. Up to the present, two sorts of potentiostats have been frequently used: one is the commercial potentiostat designed for research in the laboratory, and the other is the specific potentiostat developed for a particular sensor. The former is sophisticated, expensive, and non-portable, and the latter is simple, low-cost, and portable. In the light of low cost and portability, the latter seems to be more popular for researchers to study. In the past, many researchers devoted themselves to developing single-chip potentiostats for reducing chip size and cost. Turner presented a basic CMOS integrated potentiostat [4], Kakerow presented a monolithic potentiostat [5], Bandyopadhyay proposed a multi-channel potentiostat designed as a chip to reduce the size and the cost [6], and Huang proposed an integrated circuit (IC) for monolithic implementation of a voltammetry potentiostat with a large dynamic current range (5 nA to 1.2 mA) and short conversion time (10 ms) [7]. In system realization, Huang proposed two portable potentiostats to mimic the commercial potentiostat's function at reduced cost, using an SoC-based microprocessor and off-the-shelf circuit components to implement the potentiostats on a PCB [8][9]. In this paper, a portable potentiostat designed using dual microprocessors for the signal processing of electrochemical sensors is proposed to improve the accuracy of measurement. In Huang's designs [8][9], since a single microprocessor is used to generate the waveform and measure the sensor's current at the same time, a small timing mismatch will exist in the waveform generator. To avoid this problem, we use two microprocessors to design the potentiostat. One is used to implement the programmable waveform generator, and the other is used to measure the current of biosensors. The two microprocessors communicate through one memory card to obtain experimental parameters and then work independently.
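As an illustration of this parameter handoff, the sketch below shows one way the experiment parameters could be written by the PC-facing microprocessor and read back by the waveform microprocessor through a file on the memory card. It is written in Python for readability even though the actual firmware targets the C8051F005, and the file name and field layout are assumptions rather than the format used in the reported design.

```python
# Hypothetical parameter record exchanged between MCU2 (PC-facing writer) and MCU1
# (waveform-generating reader) through a text file on the SD card.
PARAM_FILE = "PARAMS.TXT"  # assumed file name, not taken from the paper

def write_params(path, initial_mv, final_mv, scan_rate_mv_s, cycles):
    """MCU2 side: store the experiment parameters received from the PC over USB."""
    with open(path, "w") as f:
        f.write(f"initial_mv={initial_mv}\n")
        f.write(f"final_mv={final_mv}\n")
        f.write(f"scan_rate_mv_s={scan_rate_mv_s}\n")
        f.write(f"cycles={cycles}\n")

def read_params(path):
    """MCU1 side: load the parameters that drive the waveform generator."""
    params = {}
    with open(path) as f:
        for line in f:
            key, value = line.strip().split("=")
            params[key] = int(value)
    return params

write_params(PARAM_FILE, initial_mv=-1000, final_mv=1000, scan_rate_mv_s=100, cycles=1)
print(read_params(PARAM_FILE))
```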
This design approach can avoid the mutual interference between current measurement and waveform generation and thereby improve measurement accuracy. The proposed portable potentiostat can be used to perform electrochemical measurements of biosensors outside the laboratory. Principle of a Potentiostat's Operation An electrochemical biosensor is composed of a working electrode (W), a reference electrode (R), and a counter electrode (C). The working electrode, with the biomolecular probes attached, can generate the reaction with the target. In an electrochemical biosensor, a potentiostat is an electronic device used to maintain electrochemical stability and to convert the biosensor's output into an analog signal [10]. Fig. 1(a) is a potentiostat's operation diagram, which can be implemented by the electronic components shown in Fig. 1(b). The operational amplifier on the right establishes the control loop to accomplish the potentiostatic control, while the integrator on the left converts the current flowing through the W to a voltage for digitization and readout. Fig. 2 shows the circuit diagram of a portable potentiostat designed with dual microprocessors. The proposed potentiostat is mainly constructed from two mixed-signal microprocessors (MCU1 and MCU2), a voltage amplifier, an auto-range current-to-voltage converter, a voltage follower, an operational amplifier, a DC-DC power converter, and an SD memory card. The microprocessor MCU2 is used to communicate with the PC through a USB interface and to obtain the experimental parameters of the potentiostat from the PC; these parameters are then stored in an SD memory card. The other microprocessor, MCU1, is used to generate the required signal waveform with a 12-bit digital-to-analog converter according to the experimental parameters. The signal in the range of 0 ~ 2.4 V will be amplified to 0 ~ 3.2 V by a voltage amplifier to extend the scan range of the potentiostat. The two microprocessors communicate through the memory card interface to obtain the desired experimental parameters. After the experimental parameters are set up, the two microprocessors can operate independently. MCU1 is in charge of signal waveform generation, and MCU2 is in charge of the biosensor's current measurement. During the experiment, the potentiostat is controlled by the microcontrollers. The power supply can be provided by a USB charger, without using an AC power supply. This design approach can increase the portability and the ability of standalone operation. Design of a Portable Potentiostat In the proposed potentiostat, an auto-range current-to-voltage converter, which contains a transimpedance amplifier and a set of resistors (10 MΩ, 1 MΩ, 100 kΩ, 10 kΩ, 1 kΩ, and 100 Ω), is used to measure the current flowing through the solution between the working and the counter electrodes. It can automatically cover sensor currents from ±16 mA down to ±160 nA, with the microprocessor selecting the suitable resistor. The C8051F005 chip is an 8051-based CPU with 12-bit digital-to-analog converters (DACs), 12-bit analog-to-digital converters (ADCs), and digital peripherals. According to the specification of the C8051F005 chip, the output range of the DAC is between 0 and 2.4 V; hence, the DAC can generate only positive potentials and cannot generate negative potentials. In order to realize the function of cyclic voltammetry, the potentiostat needs to generate a sweep potential in both forward and reverse directions.
Thus, two 12-bit DACs (DAC1 and DAC0) are used to generate a scan potential between 0 and 2.4 V and a 1.6 V offset potential, respectively. The potential of the working electrode (VW) is set to 1.6 V by the voltage on the operational amplifier's virtual "ground" node. Based on the preset scan potential, the microprocessor generates a triangular-type signal, and DAC1 converts it into a triangular voltage between 0 and 2.4 V, which is amplified to between 0 and 3.2 V and acts as the setting voltage (Vset). In the operational amplifier, a negative feedback loop controls the potential of the reference electrode (VR) to be equal to the setting voltage (Vset). According to this approach, the scan voltage VWR is obtained in the range of -1.6 V to 1.6 V. The design specifications of the proposed cyclic voltammetry potentiostat are described as follows: the range of the programmable scan potentials (VWR) is between -1.6 V and 1.6 V with a resolution of 1 mV, the minimum programmable scan rate is 5 mV/s, and the auto-range of the measured current (IWC) is from ±16 mA with a resolution of 8 µA to ±160 nA with a resolution of 80 pA. Auto-range Current-to-voltage Converter In the potentiostat, the auto-range current-to-voltage converter is used to measure the biosensor's current. In the auto-range current-to-voltage converter, the resistor selection strategy is based on a sequential search algorithm to find a suitable resistor for current-to-voltage conversion. At first the 100 Ω resistor is selected, and then the converted voltage is measured by the ch-4 ADC of MCU2. The measured voltage is then checked against condition (1). If the converted voltage satisfies condition (1), which means that the present resistance is too small to be used, then the resistance is enlarged ten times. The foregoing procedure is continued until the converted voltage satisfies condition (2) or until the resistance reaches 10 MΩ. After a suitable resistance is found, the resistance is recorded and used for the next current-to-voltage conversion cycle. In the new conversion cycle, depending on the variation of the biosensor's current, the previous converting resistor may not suit the present current conversion. So, the converting resistor is changed to a larger resistance if condition (1) is satisfied. Otherwise, the converting resistor is changed to a smaller resistance if condition (3) is satisfied. Personal User Interface The experiment is controlled by a PC running LabVIEW software, which communicates with the microprocessor via a USB interface. The personal user interface is shown in Fig. 3, which contains cyclic voltammetry, linear sweep voltammetry, differential pulse voltammetry, amperometry, and potentiometry. In the cyclic voltammetry mode, we can see the experimental parameter setting table, the real-time cell potential, the real-time sensor current, and the voltammogram diagram. Before the experiment, the user has to set up some experimental parameters, such as the scan range, which can be set by the "initial" and "final" settings, the scan rate, and the scan cycles. The scan rate is determined by the number of increments in the output voltage (called "steps") and the number of seconds fixed in each step.
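Returning to the auto-range current-to-voltage converter described above, the resistor-selection strategy can be summarised by the short sketch below. Python is used for clarity, and the numeric thresholds standing in for conditions (1)-(3) are placeholders chosen for illustration, since the exact limits are defined by the equations referenced in the text rather than reproduced here.

```python
# Feedback resistors available in the auto-range current-to-voltage converter (ohms).
RESISTORS = [100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000]

# Placeholder thresholds standing in for conditions (1)-(3) referenced in the text.
V_TOO_SMALL = 0.05   # |V| below this: resistance too small, step up     (condition (1))
V_TOO_LARGE = 2.0    # |V| above this: resistance too large, step down   (condition (3))
                     # otherwise the converted voltage is usable          (condition (2))

def select_resistor(current_a, start_index=0):
    """Return the index of a feedback resistor giving a measurable output voltage."""
    i = start_index
    while True:
        v = abs(current_a) * RESISTORS[i]          # transimpedance output |V| = |I| * R
        if v < V_TOO_SMALL and i < len(RESISTORS) - 1:
            i += 1                                 # enlarge the resistance ten times
        elif v > V_TOO_LARGE and i > 0:
            i -= 1                                 # shrink the resistance ten times
        else:
            return i                               # suitable range found (or at a limit)

idx = select_resistor(2.5e-6)                      # e.g. a 2.5 uA sensor current
print(RESISTORS[idx], "ohm selected")
```

Because the resistors are spaced by decades and the assumed thresholds are more than a decade apart, the search settles on a single resistor rather than oscillating between two neighbouring values.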
During the experiment, the LabVIEW program controls the experimental procedure and collects the measured data from the potentiostat. Before the LabVIEW program plots the cyclic voltammogram, a digital 'weighted averaging filter' [11] is used to smooth the measured current signal. The weighted averaging filter is defined in terms of X(t), the value of the new input sample; Y(t-1), the value of the previous output; Y(t), the value of the present output; and L, the weighting factor of the filter. SD Memory Card The SD memory card is used to store the experimental parameters and the measured results of the potentiostat. The SD memory card supports two alternative communication protocols: SD and SPI. To communicate between the memory card and the microcontroller, the SPI protocol, which is one of the built-in communication protocols of the C8051F005, is used to keep the design-in effort to a minimum. Through the signals CS, CLK, DataIn, and DataOut, the SPI interface of the C8051F005 is connected to the SD memory card. All communications between the microcontroller and the memory card are controlled by the microcontroller. The FAT16 format is adopted to store data on the SD card. The measured data of each experiment are stored in one text file and can be retrieved on a PC to analyze and plot the experimental diagrams. Experimental Results In the experiment, we adopt a commercial screen-printed electrode as a blank biosensor, immerse it in potassium ferricyanide solution to test the performance of the proposed potentiostat, and compare the measured results with those of a commercial potentiostat (CHI1221) under the same test conditions. In other words, the same biosensor and ferricyanide solution are tested by the proposed potentiostat and the commercial potentiostat alternately. Fig. 5 shows a comparison diagram of the cyclic voltammetry measurements performed by the proposed potentiostat and the commercial potentiostat. The scanned voltage is between -1 V and +1 V at a scan rate of 100 mV/s. Fig. 6 (a) and (b) are the cyclic voltammograms of the blank biosensor immersed in the standard test solution at different concentrations, measured by the proposed potentiostat and the commercial one, respectively. The cyclic voltammetry measurements for the blank biosensor are performed in 1 mM, 3 mM, and 5 mM potassium ferricyanide solutions at a scan rate of 100 mV/s, with the scanned voltage between -0.6 V and +0.8 V. Fig. 7 exhibits the relationship between the target concentration and the anodic peak currents, re-plotted from the data in Fig. 6. The current response of the blank biosensor increases monotonically from 8 to 31 µA when the concentration of the potassium ferricyanide solution is increased from 1 mM to 5 mM. The measured results of the proposed potentiostat are almost the same as the commercial potentiostat's. In addition to demonstrating the function of cyclic voltammetry, Fig. 8 shows the comparison results of differential pulse voltammetry, where the scan voltage is from -0.4 V to 1 V, and the pulse amplitude, pulse width, and pulse period are 50 mV, 50 ms, and 200 ms, respectively. The increased potential of each pulse is 5 mV, and the sampling time is 17 ms. According to the experimental results, we can observe that the proposed potentiostat achieves accuracy as good as the commercial one's.
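The exact recursive formula of the weighted averaging filter is not reproduced above, so the sketch below assumes one common first-order form that is consistent with the variables named in the text (X(t), Y(t−1), Y(t), and L); the paper's actual equation may differ. It is shown only to illustrate how such smoothing could be applied to the measured current before plotting.

```python
def weighted_average_filter(samples, L=4.0):
    """First-order recursive smoothing: Y(t) = ((L - 1) * Y(t-1) + X(t)) / L.

    This is one common realization consistent with the variables named in the
    text (X(t): new sample, Y(t-1): previous output, L: weighting factor);
    the paper's exact formula may differ.
    """
    output = []
    y_prev = samples[0]           # seed the filter with the first raw sample
    for x in samples:
        y = ((L - 1.0) * y_prev + x) / L
        output.append(y)
        y_prev = y
    return output

# Example: smooth a noisy current trace (values in microamperes)
raw = [8.1, 8.4, 7.9, 8.6, 8.2, 8.0, 8.5]
print(weighted_average_filter(raw, L=4.0))
```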
Conclusions In this paper, we propose a portable potentiostat designed with dual microprocessors for the signal processing of electrochemical biosensors. The proposed potentiostat can perform general electrochemical analyses, such as cyclic voltammetry, linear sweep voltammetry, differential pulse voltammetry, amperometry, and potentiometry. According to the experimental results, the proposed potentiostat performs as well as the commercial potentiostat. In addition, the proposed potentiostat has the merits of low cost, low power consumption, and high portability. In the future, the proposed potentiostat can be combined with real biosensors for application in home-care systems.
3,134.6
2015-01-01T00:00:00.000
[ "Materials Science" ]
Hydrogenated Amorphous TiO2−x and Its High Visible Light Photoactivity Hydrogenated crystalline TiO2 with oxygen vacancy (OV) defects has been broadly investigated in recent years. Different from crystalline TiO2, hydrogenated amorphous TiO2−x for advanced photocatalytic applications is scarcely reported. In this work, we prepared hydrogenated amorphous TiO2−x (HA-TiO2−x) using a unique liquid plasma hydrogenation strategy and demonstrated its high visible-light photoactivity. Density functional theory combined with comprehensive analyses was used to gain a fundamental understanding of the correlation among the OV concentration, electronic band structure, photon capture, reactive oxygen species (ROS) generation, and photocatalytic activity. One important finding was that the narrower the bandgap HA-TiO2−x possessed, the higher the photocatalytic efficiency it exhibited. Given the narrow bandgap and extraordinary visible-light absorption, HA-TiO2−x showed excellent visible-light photodegradation of rhodamine B (98.7%), methylene blue (99.85%), and theophylline (99.87%) within two hours, as well as long-term stability. The total organic carbon (TOC) removal rates of rhodamine B, methylene blue, and theophylline were measured to be 55%, 61.8%, and 50.7%, respectively, which indicated that HA-TiO2−x exhibited high wastewater purification performance. This study provided a direct and effective hydrogenation method to produce reduced amorphous TiO2−x, which has great potential in practical environmental remediation. Introduction Hydrogenated crystalline TiO 2−x (C-TiO 2−x ) has been extensively investigated owing to its full-spectrum absorption and effective solar energy conversion, deriving from the self-doped states created by O V and Ti 3+ species [1][2][3]. Apart from crystalline TiO 2 , amorphous TiO 2 , a common type of titanium oxide, has scarcely been explored for photocatalytic use after hydrogenation. Unlike C-TiO 2−x , amorphous TiO 2 typically possesses many special properties, including a characteristic long-range disordered structure, high specific surface area, and narrow bandgap [4][5][6][7][8]. To tailor an even narrower bandgap and utilize more photons, amorphous TiO 2 should be considered an ideal candidate for hydrogenation treatments, which could largely boost the photoactivity and bring about original and significant physicochemical observations. Unfortunately, owing to its poor solar energy conversion and ineffective charge separation, amorphous TiO 2 has been ignored as an advanced photocatalyst [9][10][11]. More importantly, hydrogenation of amorphous TiO 2 inevitably calls for annealing or thermal hydrogenation, which induces crystallization and converts it entirely into hydrogenated anatase/rutile TiO 2−x . Hence, it is a considerable challenge to synthesize hydrogenated amorphous TiO 2−x (HA-TiO 2−x ) and, particularly, to equip it with an oxygen-vacant disordered surface. In addition, there is a controversy over the origins of low-energy photon absorption and the band structure regulation theory in hydrogenated TiO 2−x nanomaterials [12][13][14]. Some viewpoints hold that the disordered surface layer, rather than the crystalline core, is responsible for the low-energy photon absorption [15,16]. Others suggest that the disordered surface and crystalline core, as well as their interface, play a synergistic role in the capability of capturing visible to infrared light [17,18].
To clarify the contributions of the disordered surface and the crystalline core, constructing a distinctive model of a disordered surface with an amorphous core can exclude the influence of a crystalline core and could lead to a more explicit investigation of the photoactivity mechanism. Considering these aspects, the aim of this work is to produce HA-TiO 2−x with a disordered-surface/amorphous-core configuration, which may reveal unusual electrical and optical properties deviating from those of traditional hydrogenated TiO 2−x . In this work, we conducted an in-situ synthesis of HA-TiO 2−x using a synergistic method involving anodization and liquid-plasma-induced hydrogenation. As shown in Figure 1A, the synthesis setup was composed of one titanium mesh anode, two titanium rod cathodes, and an electrolytic cell. Once high-voltage pulses were applied, the anodization reaction occurred and resulted in the generation of nanopores on the surface of the Ti mesh. Meanwhile, two bright liquid plasmas were generated on the surfaces of the cathodic titanium rods. The optical emission spectrum of the liquid plasma, shown in Figure 1B, clearly exhibited various emission peaks including Ti I (neutral), Ti II (singly charged ions), hydroxyl radicals, hydrogen, and atomic oxygen. A distinct hydrogen atom peak at 656 nm confirmed the production of abundant hydrogen atoms, associated with the reducing hydrogen environment in the electrolyte [19,20]. As described above, the Ti mesh thus experienced in-situ synergistic treatments comprising anodization and liquid-plasma-induced hydrogenation. As shown in Figure 1C, after 1 h of synergistic treatment, the color of the Ti mesh changed from silvery white to dark gray, along with numerous nanopores emerging on the surface, as shown in Figure 1D. We hereafter dub each sample according to its treatment time; for example, AT-60 refers to the HA-TiO 2−x obtained by applying a treatment time of 60 min. To the best of our knowledge, this is the first report on the synthesis of hydrogenated amorphous TiO 2−x . Through systematic optical and electrical investigations, HA-TiO 2−x was found to exhibit superior visible light photoactivity as well as long-term stability. Based on comprehensive experimental and theoretical analyses, including electron paramagnetic resonance, X-ray photoelectron spectroscopy, positron annihilation spectrometry, and density functional theory, the correlation among O V concentration, electronic structure, optical properties, and photoactivity was clarified. Shallow states below the conduction band and above the valence band were formed in HA-TiO 2−x , originating from the oxygen-vacant disordered surface. The extended shallow states within the bandgap were responsible for the extraordinary visible-light absorption. With the benefits of excellent visible-light absorption and robust surface O V , HA-TiO 2−x exhibited superior and stable visible-light photodegradation but, in contrast, poor photoactivity in the UV region. After analysing the reactive oxygen species (ROS) and the corresponding scavenging experiments, an uncommon behavior was observed: electrons were hardly transferred from the valence band to the O V -induced shallow states or the conduction band, which prevented the generation of h + radicals and reduced the UV-responsive photoactivity.
The synthesis of HA-TiO 2−x avoided the annealing and crystallization processes, simplified its preparation procedures, saved the costs, and reduced the consumption, which most likely lead to breakthroughs in nano-architecture of novel amorphous photocatalysts for practical and industrial applications. Reagents and Materials Ti mesh (purity 99%) was purchased from Hebei Borui Metal Materials Co., Ltd. (Handan, China). All chemicals with analytical grade and no further purification were purchased from Shanghai Aladdin Biochemical Technology Co., Ltd. (Shanghai, China). Preparation of HA-TiO 2−x One anodic titanium mesh (20 × 20 × 1 mm 3 , purity 99%) and two cathodic titanium rods (4 mm diameter, purity 99%) sealed into a corundum tube were placed in a cell filled with 300 mL nitric acid electrolyte (HNO 3 ) as shown in Figure 1A. Two cathodes were used to generate glow discharges and avoid unbalanced flow and temperature gradients in the HNO 3 electrolyte. Pulsed voltages were applied between anode and cathodes to produce intense plasma on the cathode surfaces. Liquid plasma was produced with an appropriate pulse voltage power (600 V, 1 kHz). To prevent the electrolyte evaporation, a water chiller was used to maintain the electrolyte temperature at 80 • C. After the synergy treatments, HA-TiO 2−x was washed with ultrasonic wave and dried with oven at 50 • C for 12 h. All samples were treated with the same output power of 420 W (600 V, 0.7 A, 1 kHz). Hereafter, we dub the sample according to the treatment time, for example AT-60 refers to the HA-TiO 2−x obtained by applying a treatment time with 60 min. The amorphous TiO 2 nanopowder was synthesized by hydrolysis of tetrabutyl titanate under ambient conditions. Firstly, 2 mL tetrabutyl titanate and 5 mL deionized water were mixed for 1 h, and then the mixture was dried with oven under 30 • C for 10 h. The white amorphous TiO 2 nanopowder was then prepared. Characterization The phase and crystallinity for all samples were tested by X-ray powder diffraction (XRD) using a Rigaku Smartlab (Rigaku) machine equipped with Cu Ka irradiation (λ = 1.54056 Å). The morphology was characterized by scanning electron microscope (SEM) using a ZEISS MERLIN instrument operated at an acceleration voltage of 200 kV. High-resolution transmission electron microscopy (HRTEM) images were acquired using a JEOL JEM-2100F. UV-Vis diffused reflectance spectra (DRS) were measured by Shimadzu UV-2700 spectrophotometer at a wavelength range of 200-800 nm at room temperature. The X-ray photoelectron spectra (XPS) were recorded with thermos Escalab 250Xi. The existence of defects doped in the a-TiO 2−x nanoparticles was confirmed by the X-band electron paramagnetic resonance (EPR) spectra recorded at room temperature. The surface wettability of as-prepared sample was tested by angle-of-contact method using KINO SL200KB contact angle meter. The total organic carbon (TOC) of the reaction solution was determined using a Shimadzu TOC-L TOC analyzer. The positron annihilation lifetime spectrum (PAS) used a 22 Na positron emission source with an activity of about 2 × 10 6 Bq. When it underwent β + decay, it mainly produced positrons with kinetic energy of 0-540 keV and almost simultaneously emitted γ photons with energy of 1.28 MeV. Therefore, the appearance of this gamma photon can be regarded as the time starting point for the generation of positrons, and the appearance of 0.511 MeV annihilation gamma photons was the end of the positron annihilation event. 
This interval can be regarded as the lifetime of the positron. The radioactive source was sandwiched between the sample to form a sandwich structure with a total of 2 million counts, and the positron annihilation lifetime spectrum of the sample thus can be obtained. The time resolution of the system was about 190 picosecond (ps) and the track width was 12.5 ps. Photodegradation Performances of HA-TiO 2−x The evaluation of visible-light photoactivity was tested with three typical wastewater pollutants, including rhodamine B (RhB), methyl blue (MB) and theophylline. We used a 300 W Xenon lamp with a 420 nm cut-off filter as the visible light source. The concentrations of RhB, MB and theophylline were all 10 mg/L. Black tea water pollutant was produced by 20 mg dry tea leaves that were put into 100 mL boiling water and cooled to room temperature, and the brown water was obtained when removal of tea leaves. Firstly, one HA-TiO 2−x @Ti mesh (20 × 20 × 1 mm 3 ) and 50 mL pollutant solution were put in a 500 mL beaker. Before illumination, the solution was placed in dark environment for 30 min with magnetic stirring for adsorption-desorption equilibrium. During photodegradation, we took 1 mL solution from the beaker with a fixed interval to analyse the time-dependent concentration of pollutant solution at specific wavelength by UV-2700 spectrophotometer, where RhB, MB, and theophylline were located at 554, 661, and 271.6 nm, respectively. Photodegradation experiments for all tested samples were carried out under the same conditions. In addition, the concentration of all sacrifice agents including ammonium oxalate (AO, h + scavenger), Fe(II)-EDTA (H 2 O 2 scavenger), potassium iodide (KI, OH ads and electron scavenger), p-benzoquinone (BQ, O 2 − scavenger), and isopropanol (IPA, scavenger for OH in the bulk solution) were 0.2 mM/mL. The EPR signals of radical spin-trapped by 5,5-dimethyl-1-pyrrolin-Noxide (DMPO) were recorded with visible-light illumination. In UV light mediated photodegradation, we used a 300 W Xenon lamp with a 365 nm cut-off filter as the UV light source (200-365 nm), and other experimental conditions including concentration of dyes and area of Ti mesh were unchanged. Theoretical Calculation Methods We employed the Vienna Ab Initio Package (VASP) to perform all the density functional theory (DFT) calculations within the generalized gradient approximation (GGA) using the Perdew-Burke-Ernzerhof (PBE) formulation [21][22][23]. We selected the projected augmented wave (PAW) potentials to describe the ionic cores and take valence electrons into account using a plane wave basis set with a kinetic energy cutoff of 450 eV [24][25][26]. Partial occupancies of the Kohn-Sham orbitals were allowed using the Gaussian smearing method and a width of 0.05 eV. The electronic energy was considered self-consistent when the energy change was smaller than 10 −4 eV. A geometry optimization was considered convergent when the force change was smaller than 0.05 eV/Å. Grimme's DFT-D3 methodology was used to describe the dispersion interactions. We calculated the hydrogenation process on amorphous TiO 2 surface by ab initio first-principles calculations with 10 ps. Characterizations of HA-TiO 2−x The crystal structures for all samples were detected by XRD analysis as shown in Figure [27]. Obviously, the intensity of main peak at (101) facet decreased gradually with treatment time from 40 min to 120 min (i.e., from AT-40 to AT-120). 
Generally, the weakened intensity in diffraction peaks can be explained by long-range lattice disorder, which verified that amorphous TiO 2 was generated on the Ti mesh surface, and in particular its concentration increased with the treatment time. The DRS spectrum was used to evaluate the light absorption performances of all samples. As displayed in Figure 2B, Ti mesh showed almost no light absorption while all as-prepared samples exhibited a wide-range absorption from ultraviolet to visible and even infrared regions. Compared with silver color of Ti mesh, AT-60 showed dark grey color as displayed in the inset of Figure 2B. All treated samples followed the plots of (αhν) 1/2 versus hν by using the Kubelka-Munk function [28], and from which the calculated bandgap of AT-40, AT-60, AT-80 and AT-120 were 2.57, 2.35, 2.52, and 2.66 eV, respectively, shown in Figure 2C. The decreased bandgap in our case implied some localized states caused by surface lattice defects could be created, for instance O V and/or Ti 3+ species [29]. To prove this assumption, AT-60 was heated at 400 • C for 3 h in the atmosphere and its XRD pattern in Figure 2D confirmed that anatase TiO 2 @Ti mesh was obtained. The color as expected changed from grey to white shown in the inset, indicating surface lattice defects transferred from surface to bulk or were oxidized by air. In order to characterize the surface morphology in HA-TiO 2−x , SEM examinations are illustrated in Figure 3. As shown in low-magnification SEM pictures from Figure 3A-E), all samples have rough surface compared with Ti mesh, and the surface corrosion increased with treatment time. As exhibited in high-magnification SEM pictures from Figure 3F-J, large quantities of nanopores with around 20 nm diameter arose on the Ti mesh surface as seen in AT-40 and AT-60. Observably, owing to the rough surface in AT-60 sample, some protuberances were formed on the nanopores surface where the thickness was about 560 nm as seen in Figure 3K,L. As seen in AT-80 and AT-120, these nanopores gradually vanished with the increase of the treatment time, which suggested that an appropriate anodization treatment time was necessary for fabrication of nanopores structures. As a result, amorphous TiO 2 with massive nanopores structures was manufactured on the Ti mesh surface. On the other hand, these nano-structures also explained the excellent visible light absorption, because incident light can be diffused by the nanopores array which largely weakened the reflected light intensity [30]. The HRTEM images of AT-60 sample are shown in Figure S1. Formation of O V in HA-TiO 2−x Electron paramagnetic resonance (EPR) was conducted to confirm the existence of surface defects in HA-TiO 2−x as shown in Figure 4A. The EPR spectrum displayed apparent signals of g = 2.002, g = 2.008, and g = 2.02 which was performed at room temperature without light irradiation. The g-value of 2.002 is attributed to surface oxygen vacancies due to unpaired electrons trapped at the oxygen vacancies on TiO 2 [31]. The signal of g = 2.008 is related to oxygen vacancies with one electron located in the sub-surface or bulk regions of TiO 2 [32]. The signal of g = 2.02 is ascribed to O 2 − , which was generated from the reduction of adsorbed O 2 by surface Ti 3+ and thus confirmed the existence of surface Ti 3+ [33]. Generally, the formation of O V is always connected with the generation of Ti 3+ species, and O V has a strong ability to preserve the surface Ti 3+ [34]. 
However, surface Ti 3+ species are unstable due to the ambient oxidation and, therefore, the stability of surface O V should be explained. Positron annihilation spectrometry as a useful technology can characterize the size, type, and relative concentration of vacancies in the surface region of nanomaterial. As shown in Figure 4B, three kinds of positron lifetime components in AT-60 are referred to as τ 1 , τ 2 , τ 3 , with relative intensities noted as I 1 , I 2 , I 3 , respectively. The longest component (τ 3 ) is generally considered as the annihilation of ortho-positronium atoms generated in large voids [35]. The smaller lifetime component (τ 2 ) can be attributed to larger size defects for instance O V clusters or surface defects [36]. The shortest component (τ 1 ) resulted from free annihilation of the positrons in the lattice and at small O V sites [37]. Herein, these results confirmed that the O V defects not only distributed in the surface and subsurface, but also in interior regions of HA-TiO 2−x , instead of merely accumulating on its surface. Compared with surface O V , subsurface or interior O V can prevent gradual oxidation by air and water, and herein are quite stable. We further investigated surface chemical compositions and valence states for all samples by using XPS spectra as shown in Figure 5. All XPS spectra were calibrated with reference to C 1 s at 284.8 eV. No apparent differences among these samples are shown in full XPS spectra ( Figure S2). From Figure 5A, the binding energies of Ti mesh located at 458.2 and 463.9 eV are ascribed to Ti 2p 3/2 and Ti 2p 1/2 peaks of Ti 4+ , respectively [38]. In comparison, AT-60 displayed an apparent negative shift in binding energy, which suggested Ti 3+ species existed in the surface of HA-TiO 2−x . We thus subtracted the Ti 2p spectra of AT-60 with that of Ti mesh as seen in the bottom of Figure 5A, exhibiting two distinguished peaks located at 457.1 and 462.8 eV that can be ascribed to Ti 2p 3/2 and Ti 2p 1/2 peaks of Ti 3+ , respectively [39]. The deconvoluted Ti 2p 3/2 spectra in Figure 5B presented the Ti 3+ and Ti 4+ content variations along with the treatment time, and detailed data are listed in Table 1. Figure 5C demonstrates that binding energy shift of Ti 2p 3/2 peak for all samples and AT-60 received the largest blue shift in binding energy with 457.8 eV. As displayed in Figure 5D, the deconvoluted peaks of O 1s of Ti mesh located at 529.3 and 531.1 eV can be attributed to the lattice oxygen (Ti-O) and surface absorbed oxygen (Ti-OH), respectively [40]. By comparison, the peak intensity of Ti-OH in AT-60 was much stronger than that of Ti mesh, indicating a higher concentration of hydroxyl groups in HA-TiO 2−x . Moreover, a clear negative shift in binding energy of Ti-OH peak was due to the rich amount of surface O V , which can accumulate sufficient electrons [41]. Figure 5E exhibited the differences of deconvoluted O 1s spectra, in which AT-60 showed the strongest intensity of the Ti-OH peak. Figure 5F illustrated the shift of valence band spectrum with the increase of treatment time, and the valence band maximum (VBM) of AT-60 possessed the largest blueshift about 2.03 eV. The band tail of AT-60 was estimated about 1.27 eV as shown in inset of Figure 5F. The whole parameters including bandgap, VBM, and relative ratio of Ti 3+ /Ti 4+ were shown in Table 1. 
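As a rough illustration of how the bandgaps reported in Table 1 could be obtained from the DRS data, the sketch below applies the Kubelka-Munk function and an indirect-gap Tauc plot of (αhν)^1/2 versus hν, extrapolating an assumed linear region to the photon-energy axis. The fitting window, the synthetic reflectance data, and the function names are illustrative assumptions, not the authors' processing pipeline.

```python
import numpy as np

def tauc_bandgap(wavelength_nm, reflectance, fit_window_eV=(2.6, 3.2)):
    """Estimate an indirect optical bandgap from diffuse reflectance data.

    Kubelka-Munk: F(R) = (1 - R)^2 / (2R), used as a proxy for the absorption
    coefficient. Tauc form for an indirect gap: (F(R) * hv)^(1/2) vs hv, with
    the linear region extrapolated to zero to read off Eg.
    """
    R = np.clip(np.asarray(reflectance, dtype=float), 1e-6, None)
    hv = 1239.84 / np.asarray(wavelength_nm, dtype=float)      # photon energy in eV
    fr = (1.0 - R) ** 2 / (2.0 * R)                            # Kubelka-Munk function
    y = np.sqrt(fr * hv)                                       # (F(R)*hv)^(1/2)
    lo, hi = fit_window_eV
    mask = (hv >= lo) & (hv <= hi)                             # assumed linear region
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope                                  # x-intercept = Eg (eV)

# Example with synthetic reflectance constructed so that Eg is exactly 2.35 eV
wl = np.linspace(250, 800, 300)
hv = 1239.84 / wl
Eg_true = 2.35
F = np.clip(hv - Eg_true, 0, None) ** 2 / hv                   # makes sqrt(F*hv) linear above Eg
R_synth = (1 + F) - np.sqrt((1 + F) ** 2 - 1)                  # invert the Kubelka-Munk relation
print(f"Estimated Eg ~ {tauc_bandgap(wl, R_synth):.2f} eV")
```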
With the increase of Ti 3+ content (AT-40 to AT-60), both bandgap and VBM decreased, but when the decrease of Ti 3+ content (AT-60 to AT-120), both bandgap and VBM increased. Herein, these results revealed Ti 3+ content can directly engineer the electrical bandgap structure in HA-TiO 2−x . The regulated mechanism of Ti 3+ content should be scrutinized later. Electronic Structures of HA-TiO 2−x DFT calculation was employed to explore the surface O V generation and the theoretical bandgap in HA-TiO 2−x . As shown in Figure 6A-D, simulated time-resolved hydrogenation process of HA-TiO 2−x was illustrated with picosecond-scale frame. Originally, hydrogen atoms moved to amorphous TiO 2 surface but without interface interaction shown in Figure 6A,E, and this moment was set as 0 ps. As time increased to 10 ps, several hydrogen atoms started bonding with surface oxygen atoms to generate Ti-OH bonds as shown Figure 6B,F. As time proceeded to 20 ps shown in Figure 6C,G, massive hydrogen atoms bonded with surface oxygen atoms, facilitating breaking up of Ti-O bonds on the surfaces of amorphous TiO 2 , which was represented as (~Ti-O~) + H 2 → (~Ti-H) + (~O-H) [42]. Therefore, until now, surface O V had been created, leading to the disordered surface simultaneously. When time reached to 30 ps, hydrogen atoms moved to the inner atomic layer to break the surrounding Ti-O bonds, resulting in the formation of subsurface O V in HA-TiO 2−x . Eventually, a stabilized disordered surface marked with blue dashed square was established after hydrogenation of amorphous TiO 2 . Herein, unique configuration of disordered surface@amorphous core in HA-TiO 2−x was generated. The partial density of states (PDOS) for amorphous TiO 2 and HA-TiO 2−x are shown in Figure 6I,J, respectively. The bandgap of amorphous TiO 2 was estimated at 3.68 eV, with narrow shallow states (marked with blue rectangle) near the valence band edge and several deep midgap trap states (marked with yellow rectangle) shown in Figure 6I. Observably, after hydrogenation, the number of midgap trap states was decreased, and the shallow states near the valence band edge and conduction band edge emerged in HA-TiO 2−x displayed in Figure 6J. The simulated bandgap of HA-TiO 2−x was around 2.45 eV which was consistent with the experimental value. To clearly demonstrate the bandgap structures, the schematic illustration was presented in Figure 7. The bandgap of amorphous TiO 2 was measured from Figure S3 which was 3.21 eV and well accorded with theoretical value. As shown in Figure 7 (left), two continuous deep midgap trap states filled in the bandgap, which could have originated from large bulk voids or long-range lattice disorder in amorphous TiO 2 [43]. As mentioned in previous literatures, these midgap trap states served as e-h recombination centers can inhibit the transition of electrons from valence band to conduction band (VB-CB for UV response), as well as from valence band to defect states (VB-defect for UV and visible light response) [13]. As for defect-CB (visible light response), the potential of superoxide radicals O 2 − was lower than the conduction band maximum (CBM) of amorphous TiO 2 . Thus, photoinduced holes and electrons were wasted regardless of UV or longer wavelength light, which could explain the non-photoactive of amorphous TiO 2 . By contrast, unique configuration of disordered surface with an amorphous core in HA-TiO 2−x revealed distinct electrical bandgap framework compared with amorphous TiO 2 . 
Both band tail states (shallow states) near valence band and conduction band were generated, and the bandgap between VBM and CBM was estimated to 1.26 eV, resulting in extraordinary visible light absorption proved by DRS data. Therefore, to gain efficient light absorption, it is indispensable to broaden shallow states and narrow deep ones. Afterwards, in retrospect, the controversy with the origins of low-energy photons absorption and bandgap structure regulation theory in hydrogenated TiO 2−x nanomaterials should be discussed. First of all, our results proved that O V disordered surface induced a narrow bandgap by introduced shallow states which was responsible for the low-energy photon absorption. Regarding hydrogenated crystalline TiO 2−x (disordered surface@crystalline core), we thus inferred that surface O V could take the major role in visible light absorption (VB-Defect and/or Defect-CB), whereas the untreated inner-bulk crystalline region should be in charge of UV photons capturing (VB-CB). The annealed amorphous TiO 2 , i.e., crystalized anatase TiO 2 shown in Figure 2D, was confirmed without surface O V and associated visible-light photoactivity but has UVresponded photoactivity (Figures S4 and S5). However, the effect of crystalline core cannot be ignored yet as it can also modify bandgap by constructing the disorder/crystalline interface [17,18]. Moreover, the electric potentials of governing radicals in photoactivity including superoxide radical (−0.18 eV) and hydroxyl radical (1.99 eV) were posited in the bandgap of HA-TiO 2−x . Nevertheless, the electric potential of h + (2.7 eV) was much higher than that of VBM, as well as, the deep states below CBM acted as e-h pairs recombination centers, which could reduce the UV photoactivity. Photocatalytic Performance Examinations The photodegradation of rhodamine B under visible light illumination (λ > 420 nm) is shown in Figure 8A. AT-60 exhibited the best visible-light photoactivity in all samples. To evaluate its stability, repeated experiments for five times under the same conditions showed no obvious difference between each cycle in Figure 8D. The visible-light degradations of methyl blue and theophylline are shown in Figure 8B,C. Almost complete photodegradation of MB (99.85%) was obtained after 1 h visible light irradiation. Recycle experiments were also tested five times and it showed a stable performance in Figure 8E. Theophylline as a typical pharmaceuticals and personal care products (PPCPs) that has been universally applied in pharmaceuticals, food additives, and personal care products [44]. Nevertheless, insufficient decomposition of PPCPs in wastewater treatments enables the remains to disrupt human endocrines. As shown in Figure 8C, the intense peak located at 271.6 nm was regarded as the characteristic peak of theophylline. After a 2 h reaction, all peaks in ultraviolet range hugely attenuated and the photodegradation rate reached at 99.87%. Repeatability test was also conducted and exhibited a steady performance from Figure 8F. Finally, some representative studies about TiO 2 nano-structures with high performance in dye photodegradation were provided for comparison with as-prepared HA-TiO 2−x as shown in Table S1, showing a much higher visible light photoactivity of HA-TiO 2−x . In general, according to the above results, it can be concluded that visible-light photocatalytic efficiency was mainly affected by the optical bandgap of HA-TiO 2−x . 
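The removal percentages quoted above follow from the time-resolved absorbance at each pollutant's characteristic wavelength. The sketch below shows one way such data could be reduced to a degradation efficiency and an apparent pseudo-first-order rate constant; the kinetic model and the synthetic decay trace are assumptions for illustration only, since the paper reports removal percentages rather than rate constants.

```python
import numpy as np

def degradation_summary(time_min, absorbance):
    """Degradation efficiency and apparent first-order rate from absorbance decay.

    Assumes Beer-Lambert proportionality, so C/C0 ~ A/A0 at the pollutant's
    characteristic wavelength. The pseudo-first-order model ln(C0/C) = k*t is
    an assumption for illustration.
    """
    t = np.asarray(time_min, dtype=float)
    a = np.asarray(absorbance, dtype=float)
    c_ratio = a / a[0]
    efficiency = (1.0 - c_ratio[-1]) * 100.0                   # percent removed at final time
    k, _ = np.polyfit(t, np.log(1.0 / c_ratio), 1)             # slope of ln(C0/C) vs t
    return efficiency, k

# Example: a synthetic MB-like decay sampled every 10 min over 1 h (fabricated values)
t = np.arange(0, 70, 10)
a = 1.0 * np.exp(-0.11 * t)
eff, k = degradation_summary(t, a)
print(f"removal = {eff:.1f}%, apparent k = {k:.3f} min^-1")
```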
To investigate the photocatalytic performance in actual pollution water, black tea water was applied in visible-light photodegradation experiment. Being rich in tea polyphenols, theaflavins, thearubigins, amino acids, and especially theophylline, black tea possesses many benefits for instance anti-cancer, anti-oxidant, anti-obesity, and atherosclerosis prevention [45]. From Figure 9A, the observed DRS curves of black tea water were similar with that of theophylline, which suggested that the main ingredient in black tea water was theophylline. Owing to the high concentration of black tea water, it showed several extremely sharp peaks in 200-220 nm range. During visible-light degradation, two pieces of AT-60 meshes with same size (2 × 2 × 1 cm 3 ) were stacked in polyethylene plastic bottle filled with 50 mL black tea water. With the increase of irradiation time, the intensities of all peaks in ultraviolet region gradually decrease. Clearly, the color was changed from brown to almost transparent, responding to the decline of DRS curves in visible regions (400-600 nm) shown in the inset of Figure 9A. The photodegradation efficiency reached up to 85% over 2 h calculated by the peak intensity located at 271.6 nm. On the other hand, the super-hydrophilic surface of AT-60 was observed in Figure 9B, which led to a uniform brown color of black tea covered on AT-60 as seen in Figure 9C. Observably, the surface color varied from brown to grey again along with irradiation time in air, indicating the self-cleaning performance of HA-TiO 2−x surface. To investigate the visible-light photodegradation pathway of theophylline, the GC/MS system was used to analyse the intermediates as shown in Figure 10. Initially, only theophylline m/z = 181 was observed. After 1 h photodegradation, nine kinds of intermediates were detected and the detailed information was displayed in Table S2. The main compounds were theophylline m/z = 181, and 8-Hydroxy-1/3-methyl-3,7,8,9-tetrahydro-1Hpurine-2,6-dione m/z = 185. After 2 h reaction, three kinds of intermediates were still observed, except theophylline. The relative content with treatment time is shown in Figure S6, which confirms the nearly total photodegradation of theophylline into CO 2 and H 2 O. Reactive Species Tests and Photodegradation Mechanism The total organic carbon (TOC) as an important evaluation for polluted water purification is shown in Figure 11A. The TOC removal rates after 2 h reaction of MB, RhB, and theophylline were measured to 61.8%, 55%, and 50.7%, respectively, which indicated that HA-TiO 2−x exhibited high visible-light photodegradation performance for wastewater purification. During the photocatalytic reaction process, different kinds of reactive species including OH, photoinduced holes (h + ), O 2 − and H 2 O 2 are involved in degradation. To clarify the contribution, reactive species trapping experiments were carried out in the presence of AT-60 under visible-light irradiation. Five kinds of scavenger including ammonium oxalate (AO, h + scavenger), Fe(II)-EDTA (H 2 O 2 scavenger), potassium iodide (KI, OH ads and electron scavenger), p-benzoquinone (BQ, O 2 − scavenger), and isopropanol (IPA, scavenger for OH in the bulk solution) were applied in photodegradation. As shown in Figure 11B, the visible-light photodegradation rate of AT-60 without scavenger was 88.67%, while in the presence of AO, Fe(II)-EDTA, KI, BQ, and IPA were 88.06%, 25.63%, 16.22%, 21.24%, and 81.95%, respectively. 
Therefore, Fe(II)-EDTA, KI, and BQ can heavily hinder theophylline photodegradation performance but AO and IPA had no influence. Evidently, O 2 − , H 2 O 2 , and OH ads (adsorbed OH radicals on catalyst surface) were the dominant reactive species contributing to high visible-light photoactivity in HA-TiO 2−x . Moreover, the similar situations were observed with RhB and MB in presence of above scavengers using AT-60, as shown Figure S7. Moreover, EPR signals of O 2 − and OH were verified by applying in-situ trap the spin-reactive species as shown in Figure 11C,D. Moreover, the amount of O 2 − was 2.7 times as much as that of OH as listed in Table S3, suggesting O 2 − principally contributed to the high visible light photodegradation. In conclusion, solid evidence confirmed that O 2 − and OH accounted for visible light photodegradation, however, h + did not participate in the visible photoactivity. To further confirm the effect of h + , UV-light photodegradation of HA-TiO 2−x was carried out. As shown in Figure S8, the poor photoactivities using the AT-60 sample indicated UV light responded transitions of VB-CB and/or VB-defect state were almost invalid. On the basis of the above experimental results and theoretical analyses, schematic diagram of the photodegradation of HA-TiO 2−x are illustrated in Figure 12. Verification of Long-Term Stability of Surface O V In order to confirm the stability of surface O V in HA-TiO 2−x , the AT-60 sample after 12 months storage was used to conduct XPS and EPR examinations. As in Figure 13A, there was a little positive shift in binding energy after photodegradation usage, but it still showed an obvious Ti 3+ peaks according to XPS results, suggesting the existence of surface O V and Ti 3+ species. The EPR spectrum of AT-60 after 12 months storage showed two kinds of signal of subsurface O V (g = 2.008) and bulk Ti 3+ species (g = 1.997) [46]. Unfortunately, the surface Ti 3+ species (g = 2.02) and surface O V (g = 2.002) disappeared as shown in Figure 13B. In expectation, O V and Ti 3+ species at the surface layer of HA-TiO 2−x were all recovered, but unexpectedly, interior defects were still preserved, leading to high visible photodegradation again ( Figure S9). Anyhow, mere interior defect structure cannot explain the stubborn subsurface O V , especially in the presence of strong liquid plasma oxidation. Herein, the stability of subsurface O V in HA-TiO 2−x should be further clarified. The Formation Mechanism of HA-TiO 2−x Given the above results and analyses, the formation mechanism of disordered surface with the amorphous core structure should be discussed. The essence of liquid plasmainduced hydrogenation in our case is considered identical with thermal hydrogenation. According to current experience in thermal hydrogenation, the longer hydrogenation proceeds, the higher the concentration of surface Ti 3+ species, and the more visible-light photons hydrogenated TiO 2−x absorbs [3]. Nevertheless, it was confusing that visible-light absorption in HA-TiO 2−x represented a completely opposite trend that weakened the visible-light response achieved if the treatment time was prolonged (according to DRS results). Actually, apart from the hydrogenation reaction, liquid plasma also generated many kinds of active substances including electrons, hydroxyl radical, hydrogen peroxide, and ultraviolet radiation, which enables strong oxygenation with as-prepared HA-TiO 2−x . 
As a result, two intense reactions coexisted, associated with liquid plasma-induced hydrogenation and oxidation. In the primary stage, hydrogenation played the major role, producing the O V -rich disordered surface as the treatment time increased (AT-40 and AT-60). Oxidation had only a minor effect owing to the low concentration of active substances during the initial stage of liquid plasma generation [47]. When the treatment time increased and reached a threshold, the oxidation effect dominated the whole reaction and began to heal the surface O V . In addition, anodization of the Ti mesh anode can accelerate surface corrosion and surface amorphization, resulting in oxidation of the surface O V (AT-80 and AT-120). The bandgap of HA-TiO 2−x can therefore be engineered by controlling the synergistic effects of hydrogenation and oxidation in the liquid plasma, and especially by regulating the synergistic treatment time. Finally, some analysis should be undertaken with respect to the stability of the subsurface O V . On the one hand, the surface amorphization obtained by anodization produces a top amorphous layer wrapped around HA-TiO 2−x , hindering further oxidation. On the other hand, the O V and Ti 3+ species at the disordered surface could form Ti 3+ -O V -Ti 3+ point defect structures, which were verified to be rather stable because of the electrostatic balance [48]. Accordingly, both the surface amorphization and the inner-bulk defect structure prevented the liquid plasma from healing the interior O V , which explains the long-term stability in this harsh environment. The schematic representation of the formation mechanism is shown in Figure S10. We also tried various discharge times such as 20, 40, 60, 80, 100, 120, and 150 min, and the results for all samples, including XRD, DRS, and visible-light photodegradation, are shown in Figure S11 and Table S4. Conclusions In summary, hydrogenated amorphous TiO 2−x (HA-TiO 2−x ) with stable surface O V has been successfully prepared. The highlights and novelties of this work are as follows. 1. Hydrogenated amorphous TiO 2−x was reported for the first time. First-principles calculations revealed a unique bandgap structure in which band tail states were generated near both the valence band and the conduction band, leading to extraordinary visible-light absorption. 2. The distinct liquid plasma hydrogenation strategy can effectively produce abundant surface O V on amorphous TiO 2 . 3. A special photodegradation mechanism was identified. In visible-light photodegradation, O 2 − and OH accounted for polluted water decomposition, whereas h + contributed almost nothing to the visible photoactivity. 4. The concentration of O V heavily affected the photocatalytic efficiency. The higher the O V concentration HA-TiO 2−x possessed, the narrower the bandgap it obtained, and the higher the photocatalytic efficiency it exhibited. 5. Excellent visible-light photodegradation and stability were achieved. HA-TiO 2−x exhibited superior visible-light photodegradation of RhB (98.7%), MB (99.85%), and theophylline (99.87%). Moreover, surface O V in HA-TiO 2−x was rather stable and could be preserved in an ambient atmosphere for over 12 months. This study provided a novel type of hydrogenated TiO 2−x photocatalyst, which could inspire a series of visible light-driven amorphous photocatalysts for practical solar light conversion.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/nano11112801/s1, Figure S1: (A) and (B) are the high resolution transmission electron microscopy (HRTEM) images of the AT-60 sample, and (C) is the energy dispersive X-ray spectrometry (EDS) spectrum of the green rectangle marked region in Figure S1(B). Figure S2: The full XPS spectra for all HA-TiO 2−x samples. Figure S3: The DRS and valence band spectra of amorphous TiO 2 nanopowder; the band tail state is posited at 2.8 eV (red line). Figure S4: The EPR spectra of anatase TiO 2 @Ti mesh and AT-60. Figure S5: The photoactivity of anatase TiO 2 @Ti mesh under UV light irradiation for 1 h. Figure S6: The relative content of intermediate products with irradiation time during visible-light photodegradation of theophylline using AT-60. Figure S7: The reactive oxidant scavenging experiments of (A) MB and (B) RhB using AT-60. Figure S8: The UV photodegradation experiments of RhB, MB, and theophylline using AT-60. Figure S9: The recycle test of AT-60 after 12 months storage under visible light illumination for 2 h. Figure S10: The schematic representation of the formation mechanism of HA-TiO 2−x . (A) is the interaction process between hydrogen atoms and the amorphous TiO 2 surface, and (B) is the detailed formation process of HA-TiO 2−x nanopores. Figure S11: The XRD, DRS, and visible-light photodegradation results of the samples prepared with various discharge times. Table S1: Representative studies about TiO 2 nanostructures with high performance in dye photodegradation for comparison with the HA-TiO 2−x @Ti mesh photocatalyst. Table S2: Main products during visible light degradation of theophylline. Table S3: The amount of ·O 2 − and ·OH determined by employing 5,5-dimethyl-1-pyrroline-N-oxide (DMPO) to in situ trap the spin-reactive species. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the reason that the data also form part of an ongoing study. Conflicts of Interest: The authors declare no conflict of interest.
8,730.4
2021-10-22T00:00:00.000
[ "Chemistry", "Environmental Science", "Materials Science" ]
Aging characteristics of asphalt binders modified with waste tire and plastic pyrolytic chars Globally, the growing volume of waste tires and plastics has posed significant concerns about their sustainable and economical disposal. Pyrolysis provides a way for effective treatment and management of these wastes, enabling recovery of energy and produces solid pyrolytic char as a by-product. The use of pyrolytic chars in asphalt binder modification has recently gained significant interest among researchers. As asphalt binder aging influences the cracking, rutting, and moisture damage performance of asphalt binder and the mixtures, evaluation of aging characteristics of char modified asphalt binders is quite important. The main objective of this study is the investigation of the aging characteristics of asphalt binders modified with waste tire pyrolytic char (TPC) and waste plastic pyrolytic char (PPC) through rheological and spectroscopic evaluations. To imitate short-term and long-term aging conditions, the asphalt binders were first treated in a rolling thin film oven (RTFO) and then in a pressure aging vessel (PAV). The aging characteristics were determined using four rheological aging indices based on complex modulus (G*), phase angle (δ), zero shear viscosity (ZSV), and non-recoverable creep compliance (Jnr) from multiple stress creep and recovery (MSCR) test. The fatigue cracking potential was then measured through binder yield energy test (BYET). These parameters were measured through a dynamic shear rheometer. Fourier transform infrared (FTIR) and proton nuclear magnetic resonance (1H-NMR) spectroscopy analyses were then used to investigate changes in chemical composition due to aging in the char modified binders. Both TPC and PPC improved the high-temperature deformation resistance properties of asphalt binder. The TPC-modified binder showed better aging resistance than the control and PPC-modified binders, based on the different rheological and spectroscopic indices. The pyrolytic char modified binders also demonstrated good fatigue performance. Introduction About 1.4 billion tires reach the end of life and become waste each year globally [1]. Consequently, the disposal of scrap tires is an increasing environmental and ecological problem. Plastic wastes are produced in large quantities due to their heavy demand in diverse sectors such as packaging, automobiles, electronics, household, agriculture, and other applications [2,3]. Therefore, the sustainable disposal of tire and plastic wastes has become a significant concern in many countries, including India. Concerted efforts are being made to develop sustainable and energy-efficient processes for treating and processing end-of-life tires and plastics. Pyrolysis technology has gained enormous research interest in recent years as an interesting thermochemical route to address the treatment of waste tires and plastics. The products of the pyrolysis process include liquid pyrolytic oil, gases, and solid char. Some factors influencing the composition of pyrolytic products are raw material, pyrolysis reaction conditions, reactor type, and catalyst [4]. Asphalt binder (also called bitumen) is the binding agent used for road pavements, parking areas, and driveways worldwide. Chemically, asphalt binder consists of a large number of various types of organic compounds, ranging from paraffins to alkyl polyaromatics containing heteroatoms (N, O, S) and metal traces (vanadium, nickel, iron). 
During its service life, asphalt binder is subjected to a series of complex physio-chemical processes such as oxidization, volatilization, condensation, polymerization, thixotropy (or steric hardening), causing the binder to become stiffer (harder) and brittle [5][6][7][8]. This phenomenon is termed asphalt aging. A highly aged asphalt binder (or asphalt mixture) may mobilize an early onset and propagation of pavement distresses such as fatigue cracking, thermal cracking, and moisture damage [8,9]. Asphalt binder ages in two phases: (1) short-term aging (manifests during production, placement, and compaction of the asphalt mixture), and (2) long-term aging (manifests during the service life of the asphalt pavement when exposed to the proximate environment). Many wastes and by-product materials such as plastic wastes, crumb rubber, slags, and crushed concrete have previously been explored as alternative additives to asphalt binder or hot mix asphalt [10,11]. While the tire and plastic pyrolytic oils find applications in energy generation and gases are reused for their heat value, the solid carbonaceous char produced in pyrolysis is regarded as a by-product. There is a growing need to finds routes for its broader utilization in bulk quantities. Several carbonaceous materials have also been utilized in previous research studies, such as biochar, carbon black, and carbon fibers [12][13][14]. In recent years, some studies have reported the use of tire pyrolytic char (TPC) in asphalt binder modification and evaluated its effect on binder physical properties, binder rheology, and mixture properties [15][16][17][18][19]. The use of plastic pyrolytic char (PPC) for asphalt modification has also recently gained interest [20,21]. Considering the need and importance of evaluation of aging characteristics for a modified asphalt binder, only limited studies have been done on the characterization of aging behavior of an asphalt binder with TPC modification. No study was found on the evaluation of the aging properties of PPC modified binders. Feng et al. [16] evaluated the aging behavior of asphalt binders modified with tire pyrolytic carbon black (PCB) employing penetration, ductility, softening point, and viscosity tests (four aging indices were formulated) before and after short-term and long-term thermo-oxidative aging, and photo-oxidative aging. The results showed that PCB modified binders exhibited better aging resistance than unmodified binders based on all four aging indices. In another study, Wang et al. [18] assessed the aging properties of modified asphalt containing tire vacuum pyrolysis derived carbon black using multiple stress creep and recovery (MSCR) tests. A single aging index was formulated based on the ratio of non-recoverable creep compliance (J nr ) before and after short-term and long-term aging. The MSCR J nr results showed improvement in aging resistance with further improvements at higher PCB contents. As observed from the literature review, although some efforts have been made to characterize aging properties with TPC modified binders, works directed to comprehend the aging properties considering both rheological and spectroscopic techniques are still quite limited. TPC and PPC originate from quite different raw materials, hence it is expected that they will have differing effects on the physio-chemical and aging properties of the resulting modified binders. 
No comparative study is available to understand and compare the aging properties of asphalt binders modified with char from the pyrolysis of two abundant waste materials (waste tires and waste plastics). This study aims to evaluate the effects of aging on the rheological and spectroscopic properties of asphalt binders modified with chars derived from the pyrolysis of waste tires and waste plastics. To detect changes in rheological parameters and chemical composition under various aging states, a dynamic shear rheometer (DSR), Fourier transform infrared (FTIR) spectroscopy, and proton nuclear magnetic resonance (NMR) spectroscopy were employed in this study. Frequency sweep tests were performed to obtain linear viscoelastic properties (complex modulus and phase angle) and zero shear viscosity (ZSV). The MSCR test was conducted to understand the high-temperature deformation resistance of the binders in the nonlinear viscoelastic regime. Aging indices were then formulated based on the rheological tests and FTIR analysis to quantitatively analyze the changes occurring in the modified and control binders due to aging. Long-term aged binders were also subjected to the binder yield energy test (BYET) to assess the effect of aging on their fatigue resistance. Materials Viscosity grade 30 (VG30) asphalt binder was used as the base (control) binder in this study. The basic properties of the VG30 binder are listed in Table 1, along with their requirements as per the specifications followed in India. TPC and PPC were respectively obtained from the industrial-scale pyrolysis of waste tires and waste plastics and were supplied by Innova Engineering and Fabrication (Mumbai, India). The pyrolytic chars were sieved on a 75 μm (No. 200) sieve and the material passing the sieve was used for asphalt modification. Fig 2 shows the physical appearance of TPC and PPC. Preparation of modified asphalt binders TPC and PPC modified binders were prepared using a high shear mixer equipped with a rotor-stator assembly. The control (VG30) asphalt binder was heated to 160°C, followed by gradual addition of 10% by weight of TPC or PPC particles. High shear mixing was continued for 30 min at 12,000 rpm to obtain the modified binders. The abbreviations 'TPCMA' and 'PPCMA' are used to denote TPC-modified asphalt and PPC-modified asphalt. Aging methods Short-term aged binders were obtained in a rolling thin film oven (RTFO) operating at 163°C for 85 min with an airflow of 4000 mL/min according to ASTM D2872-19 [23]. Binders were then subjected to simulated long-term aging in a pressure aging vessel (PAV) according to ASTM D6521-19 [24]. The aging temperature was 100°C with air pressure maintained at 2.1 MPa for 20 h. All binder samples were then vacuum degassed at 170°C under an absolute pressure of 15.0 kPa. PAV aging of the binders was preceded by RTFO aging. Fig 3 shows photographs of the RTFO and PAV used in this study. Rheological tests Rheological properties of the asphalt binders were measured on a DSR (make: Anton Paar MCR-102). Frequency sweeps were conducted in three decades (0.1-1, 1-10, 10-100 rad/s) at 60°C at low strain (to measure the binder properties in the linear viscoelastic domain). Complex shear modulus (G*) and phase angle (δ) were determined at different frequencies.
Based on complex viscosity data derived from the oscillatory shear frequency sweep tests, the ZSV of the binders was determined using the Cross model (Eq 1): η* = η1 + (η0 − η1) / [1 + (kω)^m] (1), where η* = complex viscosity (Pa.s); η0 = ZSV (Pa.s); η1 = viscosity at infinite frequency; ω = angular frequency (rad/s); and k and m are model constants. The MSCR test was conducted at 60°C to investigate the high-temperature rutting resistance of the binders under different aging states. Two stress levels (0.1 and 3.2 kPa) were used with 1 s creep and 9 s recovery intervals. A total of 20 cycles were used at 0.1 kPa (10 cycles for conditioning and the next 10 cycles for data acquisition), and 10 cycles were used at the 3.2 kPa stress level as per ASTM D7405-20 [25]. Non-recoverable creep compliance (J nr ) was calculated using Eq 2: J nr = ε nr / σ (2), where ε nr = non-recovered strain and σ = stress level. The BYET is based on the notion that crack propagation in the asphalt pavement is a function of an energy threshold where the applied stress is greater than the binder's resistance to damage [26,27]. In the BYET, the asphalt binder's resistance to yield failure under monotonic loading (at a fixed strain rate) is assessed. To evaluate the binder's yield energy, AASHTO TP 123-16 [28] specifies the application of a total strain of 4167%, achieved by applying a fixed strain rate of 2.315% s−1 for 30 min. The test was carried out at 15 and 25°C (representing usual intermediate service temperatures) on long-term aged binders. During the test, the asphalt binder starts to yield after reaching a peak point, implying that the binder cannot resist further stress. The area under the stress-strain curve up to the maximum shear stress gives the binder yield energy. All rheological tests were performed thrice, and the average results were reported. Rheological aging indices To quantitatively characterize the effect of aging, four aging indices (AI) based on rheological properties were formulated, as shown in Eqs 3-6. These indices are based on the binder complex modulus (G*), phase angle (δ), ZSV, and MSCR J nr before and after being subjected to aging. Rheological aging indices defined as the ratio of a parameter before and after aging have been used in several asphalt aging studies [29,30]. The rheological indices are defined such that a lower value of each aging index represents smaller changes due to aging in the rheological property under consideration and, therefore, a lower aging susceptibility. This definition makes it convenient to interpret the rheological as well as the FTIR-based indices. Frequency sweep The results for the two viscoelastic rheological parameters G* and δ measured at 60°C for the PPC and TPC modified asphalts (abbreviated as PPCMA and TPCMA) are presented in Fig 4 at 10 rad/s. The error bars represent one standard deviation from the replicate measurements. Similar trends were found at other frequencies. G* describes the total resistance to deformation of a binder when subjected to a repeated sinusoidal shear load, whereas δ reflects the ratio of elastic and viscous behavior of the binder. A higher G* and a lower δ are desirable attributes for improved high-service-temperature performance [31,32]. It can be seen from Fig 4 that the addition of both TPC and PPC leads to an increased G*, indicating that modification by both pyrolytic chars improves the stiffness of the binder. However, an appreciably lower δ is only seen for the PPC-modified binder. As expected, aging causes an upward shift in the G* values of the control and modified binders.
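Referring back to Eq 1, the following sketch shows how the Cross model could be fitted to complex-viscosity data from the frequency sweep to extract the ZSV (η0). The initial guesses and the synthetic data are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_model(omega, eta0, eta_inf, k, m):
    """Cross model (Eq 1): eta* = eta_inf + (eta0 - eta_inf) / (1 + (k*omega)**m)."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (k * omega) ** m)

def fit_zsv(omega, eta_star):
    """Fit the Cross model to complex-viscosity data and return eta0 (the ZSV)."""
    p0 = [eta_star[0] * 2.0, eta_star[-1], 1.0, 0.8]            # rough starting guesses
    popt, _ = curve_fit(cross_model, omega, eta_star, p0=p0,
                        bounds=(0, np.inf), maxfev=10000)        # keep parameters positive
    return popt[0], popt                                         # eta0 plus all parameters

# Example: synthetic binder-like data over 0.1-100 rad/s (fabricated, not measured values)
omega = np.logspace(-1, 2, 30)
eta_meas = cross_model(omega, 4000.0, 50.0, 2.0, 0.7)
zsv, params = fit_zsv(omega, eta_meas)
print(f"ZSV ~ {zsv:.0f} Pa.s")
```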
Aging causes a stiffer (higher G*) and more elastic (lower δ) response of the asphalt binder. The results indicate that both TPC and PPC have a positive effect on the deformation resistance of the binder. In previous studies, the addition of tire and plastic pyrolytic char to the asphalt binder resulted in a similar enhancement in complex modulus [16,19,20]. The aging indices derived from G* and δ are shown in Figs 5 and 6, respectively, at three frequencies (1, 10, and 100 rad/s). As noted earlier, a lower value of the indices indicates smaller changes in the material properties on aging. Figs 5 and 6 show that the indices are the lowest when TPC is used, under both short- and long-term aged states and at all three frequencies. It is therefore inferred that the aging susceptibility of the TPC-modified binder is the least. Lower frequencies lead to higher aging index values, suggesting that aging is more severe at low frequencies (representing slower traffic speeds). A discussion of the mechanisms contributing to the observed trends is provided later in this section. Zero shear viscosity (ZSV) ZSV indicates the viscosity of an asphalt binder under a very low shear rate (or angular frequency) and is often used as a parameter to evaluate the high-service-temperature properties of modified binders since the response of these binders may not follow a Newtonian relationship at high service temperatures. From Fig 7, it is seen that the ZSV of the asphalt binders increases with increasing aging severity/period. A sharp increase in ZSV occurs from the short-term to the long-term aged condition. The ZSV values of the TPC and PPC-modified binders are larger than those of the control binder under the three aging states, indicating that TPC and PPC can improve the high-temperature performance of asphalt. This finding is consistent with the previous discussion based on G*. The aging index formulated based on ZSV is computed as the ratio of ZSV after and before aging, and therefore a lower value indicates fewer changes in ZSV due to aging. Fig 8 presents the results of the ZSV-based aging index in the short-term and long-term aging conditions. The long-term aging index is significantly higher than the short-term index. This is due to the higher degree of aging undergone by the binder in the PAV process. Under both aging conditions, the aging index for the TPC-modified binder is found to be the lowest. The G* and δ based aging indices also ranked the binders in the same order. Multiple stress creep and recovery (MSCR) The nonrecoverable creep compliance (Jnr) from the MSCR test is commonly used to assess the high-temperature rutting performance of asphalt binders. The smaller the value of Jnr, the better the high-temperature resistance of the binder, since it indicates lower unrecovered strain at the end of an MSCR creep-recovery cycle. Fig 9 shows that aging has a significant effect on the Jnr of the binders. Jnr decreases with a higher degree of aging, with a minimum value for the long-term aged binders. Both TPC and PPC reduce the Jnr values at the three aging conditions compared to the base binder, indicating the superior rutting resistance derived from the addition of the pyrolytic chars. These results correspond well with the earlier results of ZSV and G* of the binders. A similar reduction in MSCR Jnr was also observed in previous studies with the incorporation of pyrolytic char in asphalt binder [18,20].
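As a minimal sketch of how Jnr in Eq 2 is obtained from a single MSCR creep-recovery cycle (1 s creep followed by 9 s recovery), the example below processes hypothetical strain readings; in practice the value is averaged over the ten data-acquisition cycles at each stress level.

import numpy as np

def jnr_for_cycle(strain_start, strain_peak, strain_end, stress_kpa):
    """Non-recoverable creep compliance (Eq 2) and percent recovery for one cycle.
    strain_start: strain at the start of the 1 s creep segment
    strain_peak:  strain at the end of the creep segment
    strain_end:   strain at the end of the 9 s recovery segment
    stress_kpa:   applied creep stress in kPa"""
    non_recovered = strain_end - strain_start
    recovery = 100.0 * (strain_peak - strain_end) / (strain_peak - strain_start)
    return non_recovered / stress_kpa, recovery

# Hypothetical strains (dimensionless) for three consecutive cycles at 3.2 kPa
cycles = [(0.000, 0.052, 0.041), (0.041, 0.095, 0.083), (0.083, 0.139, 0.126)]
jnr, rec = zip(*(jnr_for_cycle(s0, sp, se, 3.2) for s0, sp, se in cycles))
print(f"mean Jnr(3.2 kPa) = {np.mean(jnr):.4f} 1/kPa, mean recovery = {np.mean(rec):.1f} %")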
The aging index based on Jnr shown in Fig 10 ranks the binders in the same order as the indices based on G*, δ, and ZSV. The lowest index is observed for the TPC-modified binder, followed by the control and the PPC-modified binder. Now, a discussion is presented to explain the observed trends in the aging performance of the binders. Discussion on contributing mechanisms Carbon black is added to tire rubber during its manufacturing to improve its strength and aid in resistance to abrasion. TPC, therefore, consists of the recovered carbon black, inorganic compounds used in tire making, and condensed by-products formed during the pyrolysis [17,33,34]. Carbon black has been reported to function as an effective antioxidant for asphalt binders due to chemical species such as quinones, carboxyphenols, phenols, and lactones [35]. When subjected to aging, these chemical species may have a higher tendency to react with oxygen than the asphalt binder, leading to a better aging resistance of the TPC modified binder. Wang et al. [18] used an aging index based on MSCR Jnr and also reported improvements in binder aging resistance with tire pyrolytic carbon black, attributing them to the presence of functional groups which are more likely to react with oxygen than bitumen. Another possible mechanism for the better aging resistance of the TPC modified binder was reported by Wang et al. [18] based on the results of electron spectroscopy for chemical analysis (ESCA), infrared, and ultraviolet spectroscopic studies on TPC modified bitumen by Chaala et al. [36]. The TPC particles absorb maltenes when added to bitumen, and a thin boundary layer composed of asphaltenes exists on the TPC surface. The aging of bitumen is mainly related to the depletion of maltenes and the formation of more asphaltene-like compounds. When the binder is aged, such a system (a TPC particle surrounded by asphaltenes) inhibits the depletion of maltenes and thus contributes to improving the binder aging resistance. However, such in-depth chemical studies on PPC modified asphalt are currently not available. Char obtained from plastic waste pyrolysis is reported to have higher amounts of volatile matter [37,38]. Proximate analysis conducted on TPC and PPC revealed that the volatile matter content in PPC was 27.7% compared to ~4% in TPC. The volatile matter becomes gaseous and will likely escape when the binder undergoes aging at high temperature (short-term aging) or under a combination of high temperature and high pressure (long-term aging). This, in turn, is expected to contribute to the higher aging index of the PPC-modified asphalt binders. Binder yield energy test (BYET) An asphalt binder should perform well against rutting at high service temperatures and against fatigue cracking damage at intermediate service temperatures. The susceptibility of the long-term aged TPC and PPC modified binders to fatigue damage was characterized using the BYET. Based on the stress-strain response, the area under the curve up to the peak stress is obtained as the binder yield energy. A higher yield energy corresponds to a better fatigue damage resistance. The BYET was performed on long-term aged binders at two intermediate service temperatures: 15 and 25˚C. Despite showing higher aging indices, the PPC modified binder demonstrates the highest yield energy at both temperatures (Fig 11).
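To make the yield-energy definition concrete, the sketch below integrates a hypothetical BYET stress-strain curve up to its peak stress; the constitutive form used to generate the curve is purely illustrative and not the response of the binders tested here.

import numpy as np

def binder_yield_energy(strain_pct, stress_pa):
    """Area under the stress-strain curve up to the peak (yield) stress.
    strain_pct is the monotonically increasing strain in percent; stress_pa is the
    shear stress in Pa. The result has units of Pa (equivalently J/m^3)."""
    i_peak = int(np.argmax(stress_pa))
    return np.trapz(stress_pa[: i_peak + 1], strain_pct[: i_peak + 1] / 100.0)

# Hypothetical monotonic-loading response at the fixed 2.315 %/s strain rate
strain = np.linspace(0.0, 4167.0, 2000)                        # percent
stress = 2.0e5 * (strain / 500.0) * np.exp(-strain / 500.0)    # Pa; rises to a peak, then decays
print(f"binder yield energy ~ {binder_yield_energy(strain, stress):.3e} J/m^3")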
On the other hand, the control and TPC-modified binders have quite close values of the yield energy at both test temperatures. It is to be noted that the aging evaluation based on rheological variables compares the stiffness properties (G*, ZSV, Jnr) of the binders before and after aging. Although the binder stiffness increased with the addition of PPC, this is accompanied by an improved elasticity, as evidenced by the lower phase angle of the aged PPC-modified asphalt compared with the control binder, which can impede damage generation and propagation in the BYET. However, the improved fatigue resistance needs to be confirmed further by testing asphalt mixtures fabricated with the modified binders. FTIR spectroscopy The chemical composition of an asphalt binder changes with aging. During oxidative aging, the chemical groups in the asphalt binder react with oxygen, and therefore monitoring the changes in oxygen-based functional groups is quite helpful to understand the changes brought about in the binder due to aging. FTIR was used to observe the progress of chemical functionalities for the control, TPC-modified, and PPC-modified binders subjected to short- and long-term aging. FTIR spectra of the binders under different aging states are shown in Fig 12. Peaks corresponding to two oxygenated functions, namely carbonyl (C=O, centered around 1700 cm-1) and sulfoxide (S=O, centered around 1030 cm-1), are used to evaluate the changes caused by aging in asphalt binders [39][40][41]. Distinct regions corresponding to carbonyl and sulfoxide are also indicated in Fig 12. It can be seen that new carbonyl and sulfoxide functional groups are formed as the duration of aging increases for all binders. These groups are formed when other chemical bonds such as C-C and C=C break and react with oxygen, or when sulfur-based compounds of the asphalt react with oxygen [39,42]. Quantitative analysis of the FTIR spectra was performed by calculating the carbonyl (C=O) index (CI) and sulfoxide (S=O) index (SI) based on the peaks in distinct regions of the spectra [43] as per Eqs 7 and 8:

CI = A1678−1725 / ∑A (7)

SI = A1010−1043 / ∑A (8)

where ∑A = A1010−1043 + A1350−1510 + A1535−1625 + A1678−1725. Figs 13 and 14 show the results of the FTIR indices for all binders at different aging conditions. In general, all indices consistently increase as the severity of aging progresses in the order: unaged < short-term aged < long-term aged. The highest increase is found between the short-term and long-term aged conditions for all binders. In the short- and long-term aged conditions, the CI and SI are the lowest for the TPC-modified binder. This indicates that the modification by TPC leads to the formation of fewer carbonyl and sulfoxide groups. Therefore, it is inferred that the addition of TPC can help inhibit the increase in chemical functionalities and improve the resistance to thermal-oxidative aging. These findings also coincide with the observations from the rheological aging indices. 1H-NMR spectroscopy The 1H-NMR spectra of the control, TPC-modified, and PPC-modified binders in different aging states are displayed in Fig 15. Prominent absorption peaks appear in the region from 0 to 3 ppm and can be assigned to linear and substituted hydrocarbons with saturated alkanes (CH3, CH2, and CH) [6,44]. The 1H spectra of the binders at different aging states have relatively similar peak positions. For example, the absorption peaks of the control (unaged) binder appear at 0.89, 1.26, and 2.53 ppm, while those for the control (short-term aged) binder appear at 0.92, 1.30, and 2.61 ppm.
Due to the possible influence of solvent and magnetic anisotropy, the major peaks of the binders deviate slightly toward high or low shifts. However, no new prominent peaks appear between the 1 H-NMR spectra of different binders. The two strong peaks around 1.25 and 0.85 ppm are assigned to protons on methyl and methylene groups, respectively [45]. The signal at 7.26 ppm belongs to the solvent (CDCl 3 ). No signal on the 4-6 ppm region was observed, which corresponds to olefinic hydrogen, indicating that olefinic hydrocarbons are negligible in the binders. Similar peak assignments for asphalt binders were also reported by Rossi et al. [45]. NMR spectra of a binder in unaged and aged states are reported to be quite similar by other researchers [46], which is also observed in this study. Therefore, a quantitative analysis was performed to understand the differences better. The distribution of protons among aromatic (chemical shift: 6-9 ppm) and aliphatic regions (chemical shift: 0.5-4 ppm) was determined for the three binders at all aging states based on integration and normalization of the 1 H-NMR spectra. The aliphatic protons were further segregated into three groups based on their positions: H α : 2-4 ppm; H β : 1-2 ppm; and H γ : 0.5-1 ppm. Table 2 presents the details of protons assignment from 1 H-NMR spectra. Fig 16 shows the aromatic hydrogen (H ar ) distribution of all binders in unaged, short-term aged, and long-term aged states. Comparing the H ar values for unaged binders, it can be seen that H ar of TPC-modified asphalt declined whereas that for PPC-modified asphalt slightly increased compared to the control binder. This suggests that some light asphalt components such as aromatics are added to the binder on modification with PPC. Furthermore, the H ar values of aged binders were lower than unaged binders. This may be due to higher condensation and substitution of aromatic structures in the binders due to aging, as also concluded by Ma et al. [47]. The ranges of H α , H β , and H γ were found as 0.010-0.153, 0.478-0.773, and 0.160-0.363, respectively. Among the three aliphatic protons (H α , H β , and H γ ), the H β represents the major saturated proton for all binders under all three aging states. This observation is also in agreement with those reported by [47,49,50]. No apparent changes could be determined for H α and H β with an increase in the degree of aging. However, a strong trend was found between H γ and the severity of aging, with a greater H γ value indicating to a more aged binder (Fig 17). Such a trend was also reported in [47], where higher H γ values were found when the binder aging exposure period was increased from 30 days to 60, 90, and 120 days. Zhang and Hu [49] also observed a higher H γ for the control (unmodified) and a crumb rubber-SBS-sulfur composite binder after the binders were subjected to thin film oven aging (another protocol that simulates short-term aging). H γ describes aliphatic hydrogens in methyl (CH 3 ) or methylene (CH 2 ) groups in the γ position to an aromatic ring. In this study, H γ was also found to have a strong negative correlation with the binder J nr (3.2 kPa), as shown separately for the three binders in Fig 18. The correlation suggests that higher H γ may be linked to the increase in binder rutting performance (indicated by a lower J nr ). 
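The proton-region bookkeeping and the Hγ-Jnr comparison described above can be summarized with the short sketch below; the synthetic spectrum and the numerical Hγ and Jnr values are illustrative placeholders, while the region boundaries follow the assignment given above.

import numpy as np

def proton_fractions(ppm, intensity):
    """Normalized proton fractions from a 1H-NMR spectrum.
    Regions follow the assignment used above: aromatic H_ar 6-9 ppm; aliphatic
    H_alpha 2-4 ppm, H_beta 1-2 ppm, H_gamma 0.5-1 ppm. The CDCl3 solvent signal
    near 7.26 ppm should be removed before integration in a real analysis."""
    def area(lo, hi):
        mask = (ppm >= lo) & (ppm < hi)
        return np.trapz(intensity[mask], ppm[mask])
    regions = {"H_ar": (6.0, 9.0), "H_alpha": (2.0, 4.0), "H_beta": (1.0, 2.0), "H_gamma": (0.5, 1.0)}
    areas = {name: area(lo, hi) for name, (lo, hi) in regions.items()}
    total = sum(areas.values())
    return {name: a / total for name, a in areas.items()}

# Synthetic spectrum with aliphatic peaks near 0.85, 1.25, 2.5 ppm and a weak aromatic band near 7.1 ppm
ppm = np.linspace(0.0, 9.0, 901)
intensity = (np.exp(-((ppm - 0.85) / 0.08) ** 2) + 2.5 * np.exp(-((ppm - 1.25) / 0.15) ** 2)
             + 0.4 * np.exp(-((ppm - 2.5) / 0.2) ** 2) + 0.3 * np.exp(-((ppm - 7.1) / 0.3) ** 2))
print(proton_fractions(ppm, intensity))

# Illustrative H_gamma fractions and Jnr(3.2 kPa) values across the three aging states of one binder
h_gamma = np.array([0.16, 0.24, 0.36])
jnr_32 = np.array([2.8, 1.6, 0.5])
r = np.corrcoef(h_gamma, jnr_32)[0, 1]
print(f"Pearson correlation between H_gamma and Jnr(3.2 kPa): r = {r:.2f}")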
However, such a correlation may be specific to the control, TPC, and PPC modified binders considered in this study, and further analysis with binders from other sources/modifiers is needed to generalize the finding. Conclusions In this study, rheological and spectroscopic characterization was used to assess the effect of aging on the properties of TPC and PPC modified asphalt binders. Four rheological aging indices were formulated based on the complex modulus (G*), phase angle (δ), zero shear viscosity (ZSV), and multiple stress creep and recovery (MSCR) tests conducted on a DSR. The fatigue cracking potential was also measured through the binder yield energy test (BYET). Fourier transform infrared (FTIR) and proton nuclear magnetic resonance (1H-NMR) spectroscopic analyses were then used to investigate changes in chemical composition due to aging. Based on the results and analyses, the main conclusions drawn from the study are: • Both TPC and PPC improved the high-temperature deformation resistance of the asphalt binder, as seen from the results of the G*, δ, ZSV, and MSCR tests. • Rheological indices based on the ratios of G*, δ, ZSV, and Jnr showed that the TPC-modified asphalt binder suffered the least aging compared to the control and PPC-modified binders. • FTIR results indicated that the addition of TPC retarded the formation of the carbonyl and sulfoxide chemical groups produced by aging. • Despite the observation that the PPC-modified asphalt binder suffered more aging than the other binders, it showed the highest binder yield energy at both test temperatures (15 and 25˚C), indicating that, along with the increase in stiffness, it provided higher fatigue cracking resistance. • 1H-NMR results indicated that aromatic hydrogen decreased with aging and that aliphatic γ-hydrogen had a strong correlation with binder Jnr in different aging states. The research findings from this study provide a detailed insight into the changes due to aging in the rheological and spectroscopic properties of TPC and PPC modified binders and their comparison with the control (unmodified) binder. The results obtained are quite encouraging in the direction of waste tire and plastic management using pyrolytic chars as sustainable asphalt binder modifiers. The conclusions drawn in the present study are based on the rheological and spectroscopic characteristics evaluated for the short-term and long-term aged asphalt binders modified with TPC and PPC, and further investigation at the asphalt mixture level is recommended for future work. Supporting information S1 File. Data. This file includes all the test data of the asphalt binders used in this study. (XLSX)
6,781.6
2021-08-19T00:00:00.000
[ "Materials Science" ]
A Fast Color Image Segmentation Approach Using GDF with Improved Region-Level Ncut Color image segmentation is fundamental in image processing and computer vision. A novel approach, GDF-Ncut, is proposed to segment color images by integrating generalized data field (GDF) and improved normalized cuts (Ncut). To start with, a hierarchy-grid structure is constructed in the color feature space of an image in an attempt to reduce the time complexity while preserving the quality of image segmentation. Then a fast hierarchy-grid clustering is performed under GDF potential estimation, and image pixels are therefore merged into disjoint, oversegmented but meaningful initial regions. Finally, these regions are represented as a weighted undirected graph, upon which the Ncut algorithm merges homogeneous initial regions to achieve the final image segmentation. The use of the fast clustering improves the effectiveness of Ncut because a region-based graph is constructed instead of a pixel-based graph. Meanwhile, during the Ncut matrix computation, oversegmented regions are grouped into homogeneous parts, greatly ameliorating the intermediate problems from GDF and accordingly decreasing the sensitivity to noise. Experimental results on a variety of color images demonstrate that the proposed method significantly reduces the time complexity while partitioning images into meaningful and physically connected regions. The method is potentially beneficial to object extraction and pattern recognition. Introduction Image segmentation [1][2][3] is a process of partitioning an image into meaningful disjoint regions such that each region is nearly homogeneous with no intersection. It has become a basic issue of image processing and computer vision. For example, object detection [4,5], object recognition [6,7], knowledge inference [8,9], image understanding [10], and medical image processing [11,12] are all dependent on image segmentation, whose accuracy determines the quality of image analysis and interpretation. The segmentation problem is essentially equivalent to a clustering issue, which aims at grouping the pixels into locally homogeneous regions. Many existing methods consider image segmentation as a clustering problem and have succeeded in dealing with it, such as k-means [13], region-based merging [14], mean shift [15], model-based clustering [16,17], and parameter-independent clustering [18]. Of these methods, k-means [13] is a parametric method, requiring prior knowledge of the number of cluster centers. Hou et al. [18] further introduce a parameter-independent clustering method for image segmentation in an attempt to remove the heavy dependence on user-specified parameters. Comparatively, region-based merging [14], mean shift [15], and model-based clustering [16,17] are all nonparametric approaches, which require no prior assumptions about the number of clusters, spatial distribution, and so on.
Nonparametric clustering has been advantageously used in image segmentation.Mean shift (MS) [15] is a typical segmentation approach of nonparametric density estimation.The principle behind it is that dataset characteristics are depicted with empirical probability density distribution in the feature space.However, three difficulties of MS algorithm are usually hard to tackle.First, a single MS is sensitive to the bandwidth selection, producing quiet different segmented results because of the choice of different parameters.Second, MS suffers from oversegmentation.It preferably results in massive fragments and possible erroneous partitions, especially when processing those images with only subtle distinctions between different clusters.Third, MS is very time-consuming.The high time complexity is due to execute large amounts of iterative filtering and subsequent clustering computations on every single pixel. To overcome the oversegmentation problem of MS algorithm, the authors [19] have combined MS with graph-based Ncut method [20] for segmenting images.In [21], Bo et al. propose the use of dynamic region merging for automatic image segmentation.This approach first employs MS to generate an initially oversegmented image, on which finial segmentation is achieved by iteratively merging the regions according to a statistical test.The experimental results of these two MS-based segmentation approaches demonstrate that the inaccurate clusters of some regions are indeed improved to a certain extent.However, the whole time complexity was further increased because of Ncut implementation dynamic region merging procedure in [21].In view of this situation, a new algorithm, GDF-Ncut, is here proposed to partition color images into meaningful regions by incorporating GDF (generalized data field) and Ncut (normalized cuts).GDF-Ncut is beneficial to improve the quality of image segmentation, as well as reducing the computational complexity. The rest of this paper is organized as follows: Section 2 briefly introduces the related backgrounds, including the idea of data field and graph-based Ncut partition.The proposed GDF-Ncut is elaborated in Section 3 in detail.Section 4 presents the experimental study.Finally, Section 5 concludes the paper. Background In this section, we briefly review the theories of generalized data field and normalized Ncut, which constructs the crucial components of the proposed GDF-Ncut. Extending Physical Field to Data Field.The physical field is initially defined by Michael Faraday, the famous British physicist, as a kind of media, which is able to transmit the noncontact interactions between objects, such as gravitation, electrostatic force, and magnetic force.And the theories of interaction and field have been successfully used for the description of the objective world in physics at different scales and levels, in particular for mechanics, thermal physics, electromagnetism physics, and modern physics.Particularly, potential field is a type of a time-independent vector field with desirable mathematic properties, explicitly including gravitational field, electric field, and nuclear field.We here select electrostatic field as an example for a detailed illustration of field theory. 
In reality, electrostatic field can be generated by a point charge [22].Assuming that the electronic potential is zero at infinite point, the potential value of point is where is a two-dimensional coordinate in the electrostatic field, is the radial coordinate of point with the point charge as the origin of spherical coordinates, and 0 is the vacuum permittivity.Figure 1 is the distribution of an electrostatic field.It can be easily seen that the potential is higher with denser equipotential lines while being close to the center of the charge.Moreover, the potential of any point in field is isotropic and single-valued with respect to its spatial location, which is positively proportional to the electricity of the point charge but negatively correlates with the distance between the point and field source.Suppose that there are point charges 1 , 2 , . . ., with individual charges 1 , 2 , . . ., in space.The potential of any point is subject to the field jointly generated by all of the point charges, where ‖ − ‖ is the distance from point to the particle .Figure 2 visualizes the distribution of electrostatic field resulting from the superposition of two point charges. Depending on the idea of physical field, the concept of interaction between physical particles and the way of field description are introduced into the abstract number field space in an aim to discover the hidden but valuable information.Each data object in the space is viewed as a particle with certain mass.A virtual field is thus generated owing to the joint interactions among these data objects, which is called data field [23,24].Assuming that data field is generated by a data object in space Ω, ∀ ∈ Ω, the potential is accordingly defined as where is the mass of object and is an single-valued impact factor indicating the interaction range of object .The function (⋅) is defined as a unit potential function, satisfying the following criterion: The unit potential function can be defined on the basis of different physical fields, such as electrostatic field and nuclear field.We, for illustration, exhibit two specific definitions of (⋅).Considering the electrostatic field, the unit potential function is given, Corresponding to the nuclear field in physics [25], the unit potential function can be written as where = ‖ − ‖/, ≥ 0 is the mass of data object, and ∈ is the distance index.The parameter is commonly set to 2 for convenient calculation of ‖ − ‖.Assuming that the parameter = 1, the field distribution is generated by a single data object in Figure 3.In detail, Figures 3(a) and 3(b) separately visualize the distribution with different unit functions.Seen from Figure 3, the energy of data object disperses evenly in all directions, which highly accord with that in physical field.Given a dataset = { 1 , 2 , . . ., } in space Ω, is the number of data object, these data objects interact with each other and jointly generate a data field.The potential of any given point in this data field is written, Data object (red) Equipotential line (black) where (⋅) is the unit potential function, is an single-valued impact factor, and is the mass of object with = 1, . . ., . The mass = 1, . . ., satisfies For multidimensional space, the parameter is defined to be the same value in different dimensions according to (6). 
In this case, the potential estimation is probably unable to represent the truthful distribution of data objects since data objects usually own different prosperity in each dimension.In response to this issue, we extend data field to generalized data field, in which the impact factor is anisotropic; that is to say, data have different impact factors along different dimensions.At this point, ( 3) is rewritten, where is a diagonal matrix.The diagonal elements of matrix are diag() = ( 1 , . . ., ) , > 0, = 1, . . ., .Equation ( 6) also becomes For clear illustration of the difference between data field and generalized data field, we exhibit the distribution of singleobject data field in two-dimensional space, as shown in Figure 4.The parameters and are set to diag() = (1, 0.5) and = 1, respectively.In comparison with Figure 3, the generalized data field is representative of variation among dimensions. We are then motivated to analyze the characteristics of any dataset in data field or in generalized data field.For example, Figure 5(a) is the test set consisting of 600 data generated for three Gaussian Models and the mass of each data is 1/600.The potential distribution of data set is displayed as Figure 5(b) after calculating the potential of each data points, where = 1 (see (9)).It can be easily seen that data objects collected around three centers based on the distribution of the equipotential lines.Equipotential lines are distributed densely while being close to centers, indicating that the potential achieves higher value with close distance to centers and finally achieve the maxima in the centers.Under this situation, data objects can be assigned into three clusters.This result conforms to the intrinsic distribution of data points, which are generated by three Gaussian models.Therefore, we can utilize the potential distribution of data field to get the natural clustering of data objects.In detail, the realization of clustering chiefly consists of two procedures.The clustering centers are first obtained by the detection of the maxima of potential distribution and then objects are grouped into clusters by arranging them to the nearest centers. Graph-Based Partition. Graph-based methods are mainly involved in normalized cuts (Ncut) [20], average association [26], and minimum cut [27].These methods are used for image segmentation by constructing a weighed graph for describing relationships between pixels.Specifically, each pixel is regarded as a vertex, two adjacent pixels are connected with an edge, and the dissimilarity between such two pixels is computed as the weight of the edge.We here simply introduce the idea of well-known normalized cuts since it is applied in our proposed algorithm. 
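Before the normalized-cut details are presented, a brief sketch of the (generalized) data field potential described above may help. Because the paper's own unit potential function did not survive in this copy, a Gaussian-like K(r) = exp(-r^2), which meets the stated continuity, symmetry, and decreasing-function criteria, is assumed here, together with per-dimension impact factors; the exact form used by GDF may differ.

import numpy as np

def gdf_potential(query, data, mass, sigma):
    """Generalized data field potential at one query point.
    data:  (n, d) array of data objects
    mass:  (n,) masses summing to 1
    sigma: (d,) per-dimension impact factors (anisotropic, as in the generalized field)
    A Gaussian-like unit potential K(r) = exp(-r^2) is assumed for illustration."""
    scaled = (query - data) / sigma            # per-dimension scaling by the impact factors
    r2 = np.sum(scaled ** 2, axis=1)           # squared scaled distances to every data object
    return float(np.sum(mass * np.exp(-r2)))

# Toy example: 600 points around three centers in a 3-D (L*, u*, v*) feature space
rng = np.random.default_rng(0)
centers = np.array([[30.0, 5.0, 10.0], [60.0, -20.0, 15.0], [80.0, 10.0, -25.0]])
data = np.vstack([c + rng.normal(0.0, 3.0, size=(200, 3)) for c in centers])
mass = np.full(len(data), 1.0 / len(data))
sigma = np.array([5.0, 5.0, 5.0])

# The potential is highest near the dense cluster centers; its local maxima act as cluster centers
for c in centers:
    print(f"potential near {c}: {gdf_potential(c, data, mass, sigma):.4f}")
print(f"potential at a sparse point: {gdf_potential(np.array([10.0, 40.0, 40.0]), data, mass, sigma):.4f}")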
Graph-based partition assumes that a graph is characterized as = (, , ), where is the set of nodes, is the set of edges connecting nodes, and is the weight matrix.The edge weight, (, ), is defined as a function of the similarity between nodes and .We can partition the graph into two disjoint sets, that is, and ( ∩ = 0, ∪ = ) by simply removing the edges connecting and .And the total weight of removed edges is thus called a cut in graph theoretic language [20], According to (10), the cut measures the degree of the dissimilarity between and .Therefore, many algorithms have been proposed to discover the minimum cut of a graph [19].However, the minimum cut has a bias for partitioning small sets of isolated nodes in the graph since the cut in (10) increases with the rise of the number of edges connecting the two partitioned parts.To avoid this problem, normalized cuts (Ncut) are defined for a given partition: where assoc(, ) is the total number of connections from the nodes in to all nodes in the graph and assoc(, ) is similarly defined.In this case, the cut criterion indeed avoids a bias in partitioning out small isolated points.Assume that a partition of nodes leads to two disjoint sets and .Denote || as the number of nodes in .Let be a || dimensional vector, = 1 if node is in and −1 otherwise.Let () = ∑ (, ) be the total connections from node to all other nodes.Thus Ncut(, ) can be rewritten as Let be a ||×|| diagonal matrix with on its diagonal and be a symmetrical matrix with (, ) = .Minimizing Ncut(, ) in ( 12) is deduced as with the condition If is relaxed to take on any real value, (13) can be minimized by solving the generalized eigenvalue system, The vector corresponding to the second smallest eigenvalue is the solution to Ncut problem, which is called the second smallest eigenvector of (15). GDF-Ncut Principles GDF-Ncut is proposed for fast color image segmentation by combining generalized data field with improved normalized cuts.The implementation procedures of GDF-Ncut are specifically displayed in Figure 6. Considering that image segmentation can be performed under various color spaces, it is essential to choose the most suitable space before describing the details of the algorithm.The spaces * * V * and * * * are commonly applied to image segmentation since the color difference is in accordance with the Euclidean distance in either feature space [28,29]. * represents the lightness coordinate in both cases.The chromaticity coordinates are defined differently for * * V * and * * * .We practically find no obvious distinction on segmentation results while implementing the proposed algorithm for image segmentation based on the two color spaces.In this paper, the space * * V * is selected as the feature space because it retains linear mapping property during the process of image segmentation. 
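A compact sketch of the normalized-cut relaxation summarized above is given below: it builds the degree matrix, solves the generalized eigenvalue system (D − W)y = λDy, and splits the nodes by the sign of the second-smallest eigenvector. Thresholding at zero is one common choice; sweeping the threshold for the lowest Ncut value is an alternative. The toy affinity matrix is illustrative only.

import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Bipartition a weighted graph using the Shi-Malik normalized-cut relaxation.
    W is a symmetric (n, n) affinity matrix with zero diagonal."""
    d = W.sum(axis=1)
    D = np.diag(d)
    eigvals, eigvecs = eigh(D - W, D)   # generalized symmetric eigenproblem (D - W) y = lambda D y
    y = eigvecs[:, 1]                   # second-smallest generalized eigenvector
    return y >= 0.0                     # membership indicator of one side of the cut

# Toy region-level graph: six regions forming two natural groups
W = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.1, 0.0],
    [0.8, 0.7, 0.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 0.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 0.0, 0.7],
    [0.0, 0.0, 0.1, 0.8, 0.7, 0.0],
])
print("partition:", ncut_bipartition(W).astype(int))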
After color space conversion, we are motivated to hire the theory of generalized data field for naturally grouping all pixels into clusters in three-dimensional * * V * space.Each pixel is viewed as a data object for nonparametric clustering in GDF; however, it would be computationally expensive for dealing with larger image.To overcome this shortcoming, we implement hierarchy-grid clustering in GDF.To this end, it mainly involves the three crucial parts: hierarchy-grid division, cell potential estimation, and cell-based clustering.Then the pixel clusters are projected to the original images domain and yield disjoint regions.Unfortunately, these regions are vulnerable to contain massive fragments such that the image is oversegmented.In response to this issue, the improved Ncut is further used to merge the initial oversegmented regions and achieve final segmentation results. Hierarchy-Grid Clustering in GDF. In this section, the idea of hierarchy-grid clustering in GDF is illustrated in detail for producing initial image segmentation.We first construct the two-level hierarchical grid division in * * V feature space.To do this, we partition 2 × 2 × 2 uniform cells in * * V space as the first-level gird division and then repartition × × uniform cells as the second-level grid division.Actually, each cell in the second-level grid division can also be expediently created by merging every eight-neighborhood first-level cells.For example, in Figure 7, the left image is a profile of eight-neighborhood cells in the first-level grid division and the right one is its corresponding profile by merging the left cells. Based on the two-level grid division in feature space, we conduct the clustering of cell objects in the first-level division rather than the pixel-based objects.More precisely, the potential of each cell in the first-level division is calculated relying on the contribution of all the discrete cells in the second-level division.The time complexity is consequently reduced to (8 6 ), which is independent of the size of the image and is only determined by the division level in the feature space. Assume that dataset denotes cells in the first-level division and dataset corresponds to cells in the secondlevel division.We define to present the average colorful feature of pixels within a cell of the first-level grid division, where ( , , ) ∈ , , , ∈ * .Similar to cells in the second-level grid division, is defined as follows: where where is the number of pixels located in the ( , , ) cell and is a diagonal matrix of impact factors.Let (⋅) be the unit potential function defined in ( 5); (18) thus becomes where exp(⋅) = (⋅) .As the estimation of potential is similar to kernel density estimation, the diagonal value of the impact factor , diag(), can be easily set to a multiple of the window width ℎ, that is, diag() = ℎ, where is the proportionality coefficient and ℎ = (ℎ 1 , ℎ 2 , ℎ 3 ) is the window width of the kernel estimation.The parameter can be selftuned to obtain different levels of image segmentation and ℎ is assigned with Rules of Thumb [30,31] where ∇ is the vector differential operator and φ( )/ (1) , φ( )/ (2) , and φ( )/ (3) are the partial derivatives of function φ(⋅). The specific implementation process of this part is depicted in form of pseudocodes as follows. Algorithm 1 (cell initialization with their gradient calculation). Preset. 𝑁. Input.All image data objects with their color features. Output.Cell set and initialization. Process. 
(1) Construct 3-dimensional grid cells with side length of 2 × 2 × 2, where is set by the experimenter; for example, = 8.If the cell has data objects, its feature is the mean of features of data objects; otherwise, it is the center of the cell. (2) In a similar way, obtain the second grid with side length of × × . (3) Initialize the datasets and .(4) Calculate the impact factor in formula (19) and then calculate the gradients of all the cells in according to formula (20). Based on the potential estimation of cells in the firstlevel grid division above, we proceed with the clustering of these cells.The first step of clustering in data field is to detect the clustering centers, namely, maximal points of potential distribution.The maximal points of potential are located at the zero of the gradient; that is, In practice, it is so in theory but impracticable owing to difficulties of solving (21) despite the fact that the maximal points are of great significance group cell objects into welldefined clusters.In view of the fact, candidate cluster centers are proposed as cluster centers instead of maximal points.Each candidate center meets the condition that it consists of one cell at least.Algorithm 2 describes the process of finding the candidates cluster centers as follows. Algorithm 2 (detection of candidate cluster centers). Input.The gradient of each cell in datasets . Output.Candidate cluster centers. Process. (3) Go back to step (1) until all the cells in are traversed.(4) Accordingly, initialize sets and V for the dimensions and V. (5) Calculate the intersection of the initial sets , , and V as the candidate cluster set. (6) Merge every two candidate clusters if there are common cell elements in their content lists until all clusters have been processed.Update the merged clusters as the candidate centers and ensure that no duplicate cell elements are involved. To do this, we start with sequentially detecting their adjacent cells outward around candidate centers on the basis of the "gradient criterion."Given a cell of candidate center, we find its 6-neighbor cells and compare the gradients between them.The cells whose ‖∇ φ()‖ are larger than that of the center are assigned to the cluster.Then we continue to recursively compare the gradients between the added cells and their 6 neighbor cells.The qualified cells are assigned into the cluster until no new cell increases in the cluster.Every candidate center repeats the same work to group different clusters.All cell objects are eventually arranged to be one of the clusters.We accordingly classify each pixel into the cluster that its located cell belongs to.The clustering pixels are mapped to the spatial image, which yield continuous but disjoint regions in the image.We eliminate the smaller regions with less than forty pixels; however, the initial segmentation results are still oversegmented.The core execution of this part is depicted as follows. Algorithm 3 (clustering in GDF and segments generating). Input.The gradient of each cell in datasets . Process. (1) For each cluster in the merged result set, complete the cluster by searching for all cells that confirm to the "gradient criterion" described along the direction of coordinate axis. (2) According to the generated clusters, assign cluster label for all cells and pixels. 
(3) Eliminate small fragments whose size is less than forty by adjusting the cluster label of each point with respect to the "smooth standard."That is, when the size of the neighbor region is less than forty, revise the label value of the point so that it is equal to the label of the largest periphery of the neighbor region. (4) Generate all segments with respective segment labels.Thus is the initial segmentation. Merging Criteria Based on Improved Region-Level Ncut. Construct a graph = (, , ) by applying GDFpartitioned regions as the nodes and connecting each pair of regions via an edge.The weight on each pair of nodes should signify the similarity that two regions belong to one object.Suppose that an image is segmented into disjoint regions Ω ( = 1, 2, . . ., ), which contain pixels.We define = ( (1) , (2) , (3) ) ( = 1, 2, . . ., ) to be the mean vector of pixels in each region Ω .The weight of each edge is measured by the similarity between each pair of regions, where ‖ ⋅ ‖ is the vector norm operator and is the fixed scaling factor.An example of the graph structure is depicted in Figure 8.The segmented image is composed of six regions, as shown in Figure 8(a).Each region is represented by a node and adjacent regions are connected by an edge, which generate the weighted graph in Figure 8(b). According to (22), the weight matrix is expressed as The core procedures of this part are depicted in form of pseudocodes as follows.Algorithm 4 (building feature space matrix and normalized cuts). Process. (1) Form the initial feature space matrix in terms of (22) and let = . (2) Bipartition the current segment set-father segment set.First, the feature space matrixes and for the current set should be acquired by selecting corresponding values stored in .Second, solve the generalized eigenvalue system (−) = , then obtain the eigenvalues and eigenvectors. (3) By computing and comparing the cut values for all possible division schemes, determine optimal partition pattern and generate two child segment sets. (4) Recursively execute steps (2) and (3) until the current father segment set could not be bipartitioned, scilicet, the size of set is 1, or all cut values of possible division schemes surpass the threshold. (5) Arrange the cut result.Set group label for each segment and obtain final segmentation. The region-level Ncut reduces the size of the weight matrix in comparison with pixel-level Ncut, which significantly results in lower time complexity.However, for some images, the initial segmentations still have many smaller regions.Under this situation, the Ncut procedure still consumes much time to group initial regions and furthermore has difficulty in good performance.In [19], the authors use multiple nodes rather than a single one for each region to improve segmentation quality at the expense of time.Instead, we propose an effective solution to the problems.A fast merging is achieved to reduce the number of the initial regions before implementing the Ncut procedure. 
With initial segmentation generated by GDF, the distance between adjacent regions is calculated based on the criterion of Euclidean distance.Then we define threshold 0 and merge the regions if their distance is less than 0 with the precondition that the number of all the regions is more than 0 .We iteratively execute this merging until all the adjacent regions cannot satisfy the distance condition.Evidently, the threshold plays a significant role in the process of merging, which determines the quality of segmentation.Here, we calculate the threshold as follows.For any region Ω , = 1 ⋅ ⋅ ⋅ , we find its nearest adjacent region, denoted as Ω 0 .The threshold 0 is set to the geometric mean of all the adjacent distance; that is, Note that the parameter 0 depends on the initial segmentation and is a fixed value in the coming iteration.After our implementation of fast merging, the Ncut algorithm is employed to obtain the final segmentation results. Algorithm 5 (implementation of the fast merging and Ncut). Output.Segmentation results after the fast merging. Process. (1) For each region, compute the distance with its adjacent regions and find the nearest region. (4) For region Ω , if the number of all the regions is more than and dist(, 0 ) < 0 , then merge the two region. (5) End for.(6) Update the regions and their adjacent regions and the number of regions . (7) Go back to step (3) until no region is merged. Algorithmic Description. Based on the situation above, GDF partitions an image into oversegmented regions and then the improved normalized Ncut merges these regions into homogenous.Algorithm 5 describes in detail the whole implementation process of GDF-Ncut as follows. Algorithm 6 (segmentation by the proposed algorithm). Input.All data objects of an image. Output.Final segmentation results. Process. (2) Use Algorithm 1 for cell mesh and the calculation of the cell gradient. (3) Use Algorithm 2 to find the candidate centers and then use Algorithm 3 to group the cells in the first-level grid division into different clusters.Map the clusters into the plane domain, which produces the initial segmentation. (4) Use Algorithm 5 to quickly merge the initial regions. The proposed algorithm contains five parameters {, , 0 , 0 , } for image segmentation.In implementation, the two parameters 0 and 0 are fixed to a certain value.For parameter , it is tuned to 1 in most cases; otherwise, it is equal to 0.5 when = 1 is not satisfied to obtain reasonable initial segmentation.Therefore, the algorithm is mainly driven by two parameters: and threshold.We can adjust to correctly initialize the segmentation.The threshold is a parameter of controlling recursion in normalized cuts, which can be artificially adapted within a certain range to achieve final segmentation. Experiments and Comparisons GDF-Ncut is experimented for color image segmentation using the Berkeley Segmentation Dataset and Benchmark (BSDS), which are primarily evaluated in efficiency and quality of image segmentation.The BSDS is composed of a train set and a test set, containing 200 train images and 100 test images.The proposed algorithm is implemented with java to complete color image segmentation and all experiments are executed under the same computation environment, a PC equipped with 2.8 GHz Intel Core i7 CPU and 2 GB DDR3 Memory. 
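The fast pre-Ncut merging described above can be sketched as follows: the threshold d0 is taken once as the geometric mean of every region's distance to its nearest adjacent region, and the closest adjacent pair below d0 is merged repeatedly while more than a minimum number of regions remain. Region sizes are ignored (simple color averaging) and the adjacency structure is supplied explicitly, so this is only an illustration of the merging criterion, not the paper's implementation.

import numpy as np

def fast_merge(means, adjacency, n0=10):
    """means: dict region_id -> mean L*u*v* color vector (np.ndarray)
    adjacency: dict region_id -> set of adjacent region ids
    n0: minimum number of regions to keep before handing over to Ncut"""
    dist = lambda a, b: float(np.linalg.norm(means[a] - means[b]))
    nearest = [min(dist(r, n) for n in adjacency[r]) for r in means if adjacency[r]]
    d0 = float(np.exp(np.mean(np.log(nearest))))         # geometric-mean threshold

    while len(means) > n0:
        # closest currently-adjacent pair of surviving regions
        pairs = [(dist(a, b), a, b) for a in means for b in adjacency[a] if b in means and b > a]
        if not pairs:
            break
        d, a, b = min(pairs)
        if d >= d0:
            break
        means[a] = (means[a] + means[b]) / 2.0            # merge region b into region a
        adjacency[a] = (adjacency[a] | adjacency[b]) - {a, b}
        for n in adjacency[b]:
            if n in adjacency:
                adjacency[n].discard(b)
                if n != a:
                    adjacency[n].add(a)
        del means[b], adjacency[b]
    return means, adjacency

# Toy usage: five initial regions with given mean colors and adjacencies
means = {i: np.array(v) for i, v in enumerate([[30.0, 5, 5], [32.0, 6, 4], [70.0, -10, 8], [71.0, -9, 9], [50.0, 0, 0]])}
adjacency = {0: {1, 4}, 1: {0, 4}, 2: {3, 4}, 3: {2, 4}, 4: {0, 1, 2, 3}}
means, adjacency = fast_merge(means, adjacency, n0=3)
print("regions remaining after fast merging:", sorted(means))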
We, for illustration, first exhibit a specific segmentation example by gradually executing GDF-Ncut algorithm in Section 4.1.Section 4.2 tests the performance of the proposed algorithm in comparison with two rivals, MS [15], and DRM [21].We further employ GDF-Ncut to segment representative examples from the train and test images of BSDS in Section 4.3, which further demonstrate the effectiveness of the proposed algorithm. Illustration of GDF-Ncut Implementation. We randomly selected an image from the Internet to illustrate the segmentation implementation of GDF-Ncut.The size of the original image is 350 × 350, as shown in Figure 9(a).As discussed in Section 3, GDF-Ncut is determined by two parameters and .We here initialize the two parameters as (, ) = (8, 0.05) to finish the segmentation of this image. Figure 9(b) shows the pixels in * * V * space after the transformation from RGB to * * V * .The procedure of two-level hierarchical grid division is conducted in three-dimensional * * V space.In Figure 9(c), each point Performance of the Proposed Algorithm by Examples. Examples from the train set of BSDS are used to illustrate the segmentation of the proposed algorithm, as shown in Figure 11.These images have been evidently partitioned into meaningful regions.Seen from Section 3, the proposed GDF-Ncut algorithm is determined by three parameters, , , and .In particular, the parameter is tuned to 1 in most cases and it is tune to 0.5 only when the colors of the objects are not easily distinguishable.The value of controls the granularity of cell mesh, which decides the results of the initial segmentation.The is the threshold of the regionlevel Ncut, determining when to terminate the segmentation process.In general, we thus adjust the value of parameter to get appropriate initial segmentation and fix the value to cease the merging of improved Ncut.Table 3 shows the values of three parameters for segmenting image (a)-(h). Base on the analysis in Sections 4.2 and 4.3, the advantages of GDF-Ncut can be summarized in the following aspects: (a) Initialize regions with GDF.Using hierarchy-grid, the initialization procedure is quickly implemented to Table 3: The values of parameters for segmenting images in Figure 11. Conclusion In this paper, a novel algorithm, GDF-Ncut, has been proposed for color image segmentation by integrating two algorithms: generalized data field (GDF) and normalized cuts (Ncut).In GDF, a nonparametric clustering method based on hierarchal grids is presented to partition an image into disjoint regions.However, the GDF algorithm is prone to attain small fragments of being logically homogeneous.To overcome this defect, the improved region-level Ncut is employed to modify segmentation results of GDF.We conduct an experiment to test the effectiveness of the proposed algorithm, which demonstrates that the GDF-Ncut segments images into meaningful regions compared with other alternatives.Furthermore, the proposed algorithm significantly reduces time complexity because clustering in GDF is implemented on the basis of hierarchy-grid division and the merging process is region-based in Ncut. Figure 1 : Figure 1: The distribution of equipotential lines in the electrostatic field. Figure 2 :Figure 3 : Figure 2: The electrostatic field distribution generated by two point charges.(a) Electrostatic field generated by two positive charges; (b) electrostatic field generated by a positive charge and a negative charge. 
(a) The unit potential function is continuous and bounded in the space Ω; (b) it is symmetric in the space Ω; (c) it is a strictly decreasing function in the space Ω. Figure 4: The distribution of the generalized data field with different unit potential functions. (a) The unit potential function accords with (4); (b) the unit potential function accords with (5). Figure 5: Potential distribution of the test set. (a) Test set; (b) equipotential lines of the test set. Figure 7: Profile changes from eight-neighborhood cells to a cell. Figure 11: Segmentation results of test images. From left to right, (a)-(h) left panels show the original images; (a)-(h) right panels present the segmentation results of the images by the proposed algorithm.
7,172.2
2018-01-01T00:00:00.000
[ "Computer Science" ]
THE ROLE OF STATIC VAR COMPENSATOR AT REACTIVE POWER COMPENSATION The need for electricity has increased and the cost of energy generation has risen. For this reason, it is important to use the generated energy with good quality, safely, and efficiently. Reactive power compensation is one of the most effective applications to reduce transmission losses, prevent voltage drops, prevent consumers from paying for reactive energy, facilitate operation, increase efficiency, and save energy in energy systems. Moreover, due to improvements in semiconductor technology, reactive power compensation systems have gained a new dimension. Power compensation performed using semiconductor power elements is called Static VAR Compensation. This compensation is used for loads such as arc furnaces, elevators, and loads in the automotive, paper, packaging, food, and textile sectors, as well as spot welding machines, port cranes, and flat welds. By using Static VAR Compensation systems in the power network, transient events are minimized, losses are reduced, and redundancy, control flexibility, and reliability are ensured. In this study, the thyristor triggering angles and power factor values of a system that contains reactive loads and is controlled by Static VAR Compensation were obtained using computer software developed in the Microsoft C Sharp (C#) programming language. The results were discussed in terms of the importance of using such control structures in power networks. Introduction Because operating costs in energy generating units are high, the efficiency of renewable energy sources is still being developed, leakage losses in the transmission and distribution system are large, the state does not have enough resources to reduce losses, and the demand for electricity increases day by day, a need for higher-quality and more reliable energy has emerged [1]. For this reason, reactive power compensation has become compulsory, and interest in it grows day by day, in order to use the produced energy better and more efficiently, to reduce transmission losses, to prevent voltage drops, and to avoid the reactive energy charges that place a financial burden on consumers [2]. Reactive power compensation is one of the most effective measures to facilitate operation in power systems; it increases efficiency and ensures energy saving. According to the "Procedures and Principles Regarding Tariff Applications of Distribution Licensee Legal Entities and Supply Companies" accepted by the Energy Market Regulatory Authority at its meeting dated 30.12.2015, customers with a power of less than 50 kVA are obliged to pay for reactive power if their inductive reactive power consumption exceeds 33% of their active power or the capacitive reactive energy they supply to the system exceeds 20%; customers with a power of 50 kVA and above are obliged to pay if their inductive reactive power consumption exceeds 20% of their active power or the capacitive reactive energy they supply to the system exceeds 15%. For this reason, when the reactive power limits are exceeded, a reactive power consumption fee is applied. While different methods are being developed for the production of electricity every day, various techniques and methods are also being investigated in order to use the produced energy in the most efficient way. Reactive power can be controlled by switching shunt capacitors and reactors. If thyristors are used as switches, they control the current through the capacitors and/or shunt reactors and can provide fast and stepless control of reactive power.
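To make the compensation requirement concrete, the short sketch below computes the capacitive reactive power needed to raise a load from a measured power factor to a target value, and checks the uncompensated inductive ratio against the 20% limit quoted above for larger consumers; the 292 kW, cos φ = 0.6 figures anticipate the application example presented later, and the 0.99 target is an assumption used only for illustration.

import math

def required_capacitive_power(p_kw, pf_actual, pf_target=0.99):
    """Reactive power (kVAr) the capacitor stages must supply to raise the power
    factor of an active load p_kw from pf_actual to pf_target:
    Qc = P * (tan(phi1) - tan(phi2))."""
    phi1 = math.acos(pf_actual)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

p_kw = 292.0            # active power of the example facility, kW
pf = 0.6                # measured power factor before compensation
qc = required_capacitive_power(p_kw, pf)
q_load = p_kw * math.tan(math.acos(pf))

print(f"load reactive power:        {q_load:6.1f} kVAr (inductive)")
print(f"required capacitive power:  {qc:6.1f} kVAr to reach cos(phi) = 0.99")
print(f"uncompensated Q/P ratio:    {q_load / p_kw:4.2f} (limit for consumers above 50 kVA: 0.20)")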
In recent years, with developing technology, power electronic components can be manufactured at higher power ratings. In addition, the performance of the control elements has also been improved [3]. The Flexible AC Transmission System (FACTS) concept is applied predominantly to dynamic issues; it rests on the observation that there are three main variables whose direct control in the power system can have an impact on performance: voltage, angle, and impedance. FACTS controllers are used to improve the controllability of power systems [4]. In general terms, FACTS devices, especially SVCs, have become fundamental components in the planning, operation, and control stages of power systems [5]. There are different types of FACTS controllers: Thyristor Controlled Series Compensation (TCSC), the Static Synchronous Series Controller (SSSC), the Thyristor-Switched Series Capacitor (TSSC), the Static Synchronous Compensator (STATCOM), the Static VAR Compensator (SVC), and the Unified Power Flow Controller (UPFC). Research focuses on the impact of compensation by the SVC and STATCOM controllers, especially on steady-state analysis and on first-swing stability enhancement [6][7][8]. A comparison of different FACTS controllers is given in Table 1. Classification of Reactive Power Compensation In practice, compensation is provided by controlling reactive power using contactors or semiconductor power elements. The compensation made using contactors is called conventional compensation, and the compensation made using semiconductor power elements is called Static VAR Compensation. Conventional compensation Most of the electrical loads used in industry draw inductive reactive power from the electric grid. In order to compensate this inductive reactive power, capacitor groups with different capacitance values are installed after the meters in the plant. Devices called reactive power control relays determine the reactive power needed by the system and either activate or deactivate the groups of capacitors that will provide the capacitance required to compensate for this power [10]. When reactive power compensation is required, the capacitor groups are only activated within 5 to 10 seconds in conventional compensation systems. Such a long time causes overloads and major losses in the electric grid. The total loss, when the losses of thousands of end-users are taken into account, reaches considerable levels for power distribution companies. In addition, these systems cause transient voltages, arcs, sudden voltage increases, and electrical noise during switching because the power factor correction process switches the capacitor blocks in and out by means of contactors [11]. Moreover, such a system tries to supply capacitive reactive energy to the system through a reactive power control relay and contactors; however, it cannot respond to powerful loads that enter and exit the circuit quickly. To be able to perform complete compensation, a large number of single-phase steps must be used. In addition, a large number of switching operations in a short time has an adverse effect on the capacitors, and arcing occurs in the contacts of the contactors [12]. When the loads are switched on and off rapidly and the capacitors are switched in by contactors to compensate the reactive power, an instantaneous inrush current occurs on the capacitors, as shown in Figure 1.
As the switching operations of the loads become more frequent, the inrush current and the arc effect deform the contactor contacts, and after a while the contacts stick. In many cases, this situation can cause the contactors to burn. It also shortens the life of the capacitors because of the continuous current they draw. The disadvantages of conventional compensation systems and the development of semiconductor technology have brought reactive power compensation systems to a new dimension. It is not possible to respond to fast-changing loads by switching with mechanical contactors in conventional systems. Such loads can only be handled by switching with thyristors. In thyristor-switched systems, since the capacitors are activated at the zero crossings, the obligation to wait for the discharge times is eliminated. In addition, because the current drawn when the capacitors are first switched on is minimal, it is possible to switch them on and off at high speed. Thus, the life of the capacitors and switching elements and the power quality are also positively affected. In addition, panel maintenance costs are minimized. For this reason, using a static VAR compensator for reactive power compensation is important. Static VAR Compensation Static VAR compensators (SVCs) are traditionally used to dynamically compensate reactive power [13]. SVC systems, which are implemented as thyristor-switched compensation systems, are one of the most effective means of ensuring energy efficiency. The basic types of reactive power control elements that bring the whole or a part of an SVC system into play are the thyristor controlled reactor (TCR), the thyristor switched reactor (TSR), and the thyristor switched capacitor (TSC) [14][15]. SVCs increase power quality in many respects and offer many functional benefits, such as flicker reduction, voltage stabilization, reactive power compensation, reduction of harmonics, energy savings, and increased productivity. An SVC has the ability to provide its maximum capacitive and inductive power limits and every value between these limits. In general, the SVC circuit diagram is given in Figure 2. Figure 2. SVC circuit diagram By using thyristor modules, the capacitors can be switched on in less than 10 ms. In addition, the high instantaneous inrush current generated during the switching of the capacitors is prevented and only the normal capacitor current flows, as shown in Figure 3. In this way, rapidly varying loads can be easily compensated, and the compensation response time can be adjusted between 20 and 500 ms. Static VAR compensation systems have advantages such as minimizing transient events, reducing losses, and providing redundancy, control flexibility, and high reliability. However, an important drawback is that they generate harmonics that affect power quality. This compensation is used especially for loads in the arc furnace, elevator, automotive, paper, packaging, food, textile, glass, and cement sectors, as well as spot welding machines, harbor cranes, and flat welds, where loads enter and leave the circuit very frequently and the power factor shows frequent and large changes over short periods. The implementation diagram for a three-phase Static VAR Compensation system implemented as a thyristor-switched compensation system is shown in Figure 4. Static VAR Compensation Application In a fixed-capacitor thyristor controlled reactor (FC-TCR), the fixed capacitors generate reactive power while the TCR consumes reactive power.
Since the reactive power generated by the capacitor group is fixed at a given voltage level, the net reactive power of the system is set by the reactor, which is controlled by changing the triggering angle of the thyristors. Changing the triggering angle controls the fundamental component of the reactor current and hence the magnitude of the reactive power [16]. A facility has been selected for such a static VAR compensation application. The maximum power of the selected facility is 292 kW and the maximum current drawn from the network is 738 A. When all loads in the system are active, the power factor is cos φ = 0.6, i.e. the largest phase difference is 53°. In this facility, the FC-TCR compensation method is used to raise the power factor to nearly 1. The fixed capacitor value used in the system was selected as 8,817 μF, corresponding to 400 kVAr, and the reactor value is XL = 1.1 mH. The principal connection diagram of the compensation system is given in Figure 5 [11]. The system includes a control and measurement unit, as shown in Fig. 5. This unit is the most important component of the overall system, because it adjusts the triggering angles. Its operation can be summarized as follows: it measures the grid current and terminal voltage. The power factor falls when dissimilar loads in the system come into operation at different times. After compensating the reactive load to bring the power factor close to 1, the fixed capacitor group remains connected; the resulting surplus capacitive reactive power is then compensated by producing an inductive reactive power that depends on the trigger angle α of the thyristors [17]. To find the reactive power generated in the reactor as a function of the thyristor trigger angle α, it is necessary to calculate the effective (rms) values of the current and voltage as functions of the trigger angle, starting from Equation (1). The time-dependent variation of the current and voltage in the thyristor-connected reactor circuit at trigger angle α is shown in Figure 6 and described by Equation (2); substituting Equation (4) into Equation (3) gives Equation (5). The effective value of the current through the reactor, obtained by the integration in Equation (6), is calculated as a function of the trigger angle α by Equation (7). Similarly, the effective value of the voltage as a function of α is calculated from Equation (8). Based on the above, the flow chart of the overall system, including the thyristors and other components, is given in Figure 7. As shown in Figure 7, the current drawn from the network and the phase difference are measured; the grid voltage and the fixed capacitor power are known. The active, reactive and apparent powers are calculated from these values. A loop was created to determine which thyristor trigger angle produces an inductive reactive power that compensates the capacitive reactive power remaining after the fixed capacitor power is subtracted from the reactive power demand. The inductive reactive power generated in the reactor at trigger angle α is calculated according to Equations (1), (7) and (8). These calculations are performed by computer software developed in the Microsoft C Sharp (C#) programming language. Once the power factor is equal or close to 1, the program, having found the thyristor trigger angle α, starts measuring new values so as to follow the load changes.
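The paper's own Equations (1)-(8) are not reproduced in this text, so the following is only a minimal, hedged sketch of the firing-angle search loop described above. It assumes the standard textbook expression for the fundamental component of the TCR current (firing angle α measured from the voltage zero crossing, π/2 ≤ α ≤ π), a single-phase simplification, and illustrative load values; the function and variable names are illustrative as well, and this is not the authors' C# program.

```python
import math

# Assumed illustrative values; the real application uses three-phase
# quantities and the paper's Equations (1), (7) and (8).
V = 400.0                       # rms voltage across the TCR branch, volts (assumed)
F = 50.0                        # grid frequency, Hz
L = 1.1e-3                      # reactor inductance, H (value quoted in the paper)
XL = 2 * math.pi * F * L        # reactor reactance at full conduction

def q_tcr(alpha_rad: float) -> float:
    """Inductive reactive power absorbed by the TCR at firing angle alpha,
    using the standard textbook fundamental-current expression."""
    b = (2 * (math.pi - alpha_rad) + math.sin(2 * alpha_rad)) / (math.pi * XL)
    return V * V * b            # VAr

def find_alpha(q_surplus_capacitive: float, steps: int = 1800) -> float:
    """Sweep the firing angle and return the one whose inductive reactive
    power best cancels the surplus capacitive reactive power (in VAr)."""
    best_alpha, best_err = math.pi / 2, float("inf")
    for k in range(steps + 1):
        alpha = math.pi / 2 + (math.pi / 2) * k / steps
        err = abs(q_tcr(alpha) - q_surplus_capacitive)
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha

if __name__ == "__main__":
    q_fixed_cap = 20_000.0      # VAr supplied by the fixed capacitors (assumed)
    q_load_ind = 12_000.0       # VAr drawn by the load at this instant (assumed)
    alpha = find_alpha(q_fixed_cap - q_load_ind)
    print(f"firing angle alpha = {math.degrees(alpha):.1f} deg")
```

In a real controller, the surplus capacitive reactive power would be recomputed on every measurement cycle from the measured grid current, terminal voltage and the known fixed-capacitor rating, as the flow chart in Figure 7 indicates.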
The output of the computer program, which computes the trigger angle α and the post-compensation power factor for the different load cases in the facility, is given in Tables 2-5. Conclusion The energy used in industrial facilities must be uninterrupted and of good quality; the importance of the continuity and quality of the transmitted electric energy therefore increases day by day. It has become imperative to reduce power quality deficiencies resulting from reactive power changes and voltage fluctuations. Conventional compensation systems introduce disruptive effects into the network and cause overcurrents and overvoltages; SVC systems eliminate such negative effects. SVC application is the most suitable solution for medium voltage installations where disturbing effects are present in the grid. With this application, voltage stability is ensured by reactive power compensation, and harmonic and flicker levels are reduced to the values required by the standards. Thus, in addition to saving energy, production time is also shortened. SVC systems are the ideal solution for compensating loads that enter and leave the circuit rapidly. They provide a great advantage over conventional reactive power compensation systems because they provide precise and complete compensation of unbalanced loads. For this reason, SVC systems offer facilities many benefits as an alternative to conventional compensation systems. As technology develops, the prices of power electronic components will fall and SVC systems will become an economical solution even for small businesses. In this respect, the role of the SVC in reactive power compensation in such plants is significant. With the computer program developed here, the thyristor trigger angle α is calculated for the different load conditions in operation, and the corresponding states of the application are reported. As the loads change, the reactive power is compensated by changing the thyristor trigger angle α through the program.
3,431.4
2020-06-01T00:00:00.000
[ "Engineering" ]
Standardization Issues of Mobile Usability The benefits of maintaining standards can be summarized from different perspectives: those of consumers, businesses, governments, and the economy. Standards inform and shape the products we buy, ensuring quality and safety. Standardizing dynamic activities such as the development of mobile applications and devices, however, is not an easy task. In this regard, the aim of the current paper is to review some international standards that address usability problems of mobile user interfaces and to analyze the possibility of standardizing mobile usability. Keywords—Mobile usability, international standards, interface design, standardization problems Introduction The specifics of mobile application development are mainly related to the limitations imposed by the devices themselves: for example, low power consumption, small physical size, varied input methods (so-called "multimodality", i.e. the provision of multiple approaches to human-technology interaction so that users can engage most of their senses; most technologies provide a three-modal interaction approach - visual, auditory and tactile - with other senses usually excluded), limited memory, processing power and screen size, and poor connectivity to mobile and internet networks. It is also necessary to take into account that mobile technologies are used in different, often changing environments in which users are exposed to varying conditions, for example different noise levels, changes in brightness, temperature, etc. These affect the attention of individuals and make mistakes more likely when working with mobile applications. The ease of use, efficiency and satisfaction of using mobile devices and the various types of software designed for them are covered by the term "mobile usability". With the active use of mobile devices, there is a growing need for mobile applications to be designed in a user-friendly way; efficiency, in particular, depends on fast and easy navigation within mobile applications. Based on our previous research [1], we can define usability as a qualitative criterion that applies not only to the user interface of a product but also to its functionality, considered in a specific context of use. When determining the usability of a product, account should be taken of its ease of use, which, according to the author of this study, covers ease of learning, ease of remembering and tolerance for errors. If the additional factors that determine usability, and that should be properly reflected in its evaluation by methods and software, are to be defined, they are: efficiency, productivity, ease of interaction, security and utility, set out in a specific context of use and always considered in combination to achieve satisfaction for the users of the technology concerned. This requires that the term "usability" be defined as a context-dependent criterion that does not have a single measurement. In practice, there are standards and recommendations for creating usable applications, including methods and tools that are used to determine usability. For the most part, their wording is general and universal in nature, not concretely targeting the specifics of a particular type of mobile application. In this regard, the aim of the current paper is to review some international standards that address usability problems of mobile user interfaces and to analyze the possibility of standardizing mobile usability.
International Standards Comparison For successful communication between humans and technology, the user interface must be developed in a usable way. This can be done by: • Complying with basic design rules, principles and/or standards that are generally applicable when creating a human-machine interface. • Complying with the rules/principles and/or standards related to the development of mobile application interfaces. • Applying mobile app user interface design templates that integrate good and commonly used design practices for this type of application. On the one hand, design rules, principles, standards and patterns are usually produced on the basis of the experience of design professionals and of scientists examining the individual characteristics of users, the specifics of the environment, and the impact of the technologies concerned on people. They are the result of studies in the field of human-computer interaction. On the other hand, the specific characteristics of mobile devices, the current work environment, and mobile and wireless networks pose a number of significant challenges in exploring the usability of mobile applications. In this regard, it should be noted that problems with the usability of user interfaces, not only of mobile applications but of software applications as a whole, arise from failures to follow some of the best practices defined in international standards, by leading specialists in the field, and/or by the software companies developing the operating systems on which the applications run. These are usually formulated as recommendations, rules or principles, and they are the result of years of work by research teams testing specific technologies with groups of users. The standards should be relevant to creating user-oriented technologies. The difficulty in the situation stems from the fact that there is no clear definition of the meaning of the term "usability". Nevertheless, developing usability standards for different types of technology would bring a number of positives, such as independence, balance and authority, and would promote consistency. Some standards are defined by the International Organization for Standardization, but other international organizations are working in this direction too. Some of them consider the usability of mobile applications, while others look at higher-level conceptual issues. At the time of the research for this paper, we found that the usability of mobile technology user interfaces is addressed in the standards ISO/IEC 24755:2007, ISO/IEC 18021:2002 and ISO/IEC TR 15440:2016. ISO/IEC 24755:2007 "Information technology - Screen icons and symbols for personal mobile communication devices" defines a consistent set of screen icons and symbols intended for mobile communication devices (for example, mobile phones and personal digital assistants) [4]. This international standard provides a set of icons for personal information applications and for applications related to device management. Although each platform has specific requirements for the applications developed for it, including in terms of user interface, and full design guides are provided, maintaining such a standard is useful as it aims to provide a universal approach to creating icons. This also helps to ensure a consistent look for the same application created for different platforms.
The standard ISO/IEC 18021:2002 "Information technology - User interfaces for mobile tools for management of database communications in a client-server model" defines only the features of the user interface for managing communication when data are exchanged between a mobile device (client) and a server. Two user interfaces are defined, for: • Approving updates to a client database, or the transmission of data from a client database to another database. • Providing feedback to the user after the client database, or the database of the server serving the mobile device, has been updated. ISO/IEC TR 15440:2016 "Information technology - Future keyboards and other input devices and entry methods" covers the following aspects [6]: • Different login requirements, consistent with national and international practices, and support for cultural and linguistic diversity • Recognition of requirements relating to comfort of use (regardless of the target group - children, adults, people with disabilities) • Improving user productivity when entering data • Enhancements to keyboards and related input devices and methods required for new types of application (such as virtual reality) • Input requirements for virtual keyboards • Labeling (permanent and temporary labels), functional symbols and icons.
ETSI TR 102 972 Human Factors (HF); User Interfaces; Generic user interface elements for 3G/UMTS mobile devices, services and applications "addresses the user interfaces of 3G/UMTS-enabled devices, services and applications from the end users' perspective, and provides generic design, development, deployment and evaluation recommendations" [15]. ETSI EG 202 132 Human Factors (HF); User Interfaces; Guidelines for generic user interface elements for mobile terminals and services "addresses key issues from the end user's perspective, providing guidance on proposed generic user interface elements for basic and advanced mobile terminals, services and certain aspects of application handling" [16]. Its aim is to provide simplified access to the advanced functions of mobile communication. ETSI EG 202 191 Human Factors (HF); Multimodal interaction, communication and navigation guidelines "identifies key issues, solutions and actions for multimodal interaction, communication and navigation at the user interface with ICT systems and terminals. It specifically addresses the usage context of transactional interactions for independent living" [17]. It should be noted that the standards presented mention the usability of mobile applications only partially and do not make comprehensive recommendations, which can be considered a disadvantage; not all the features of mobile devices and their applications are taken into account. One of the main reasons for this is that this type of technology evolves too dynamically, and creating an exhaustive standard that includes definitions, usability criteria, recommendations for creating usable user interfaces, testing methods and evaluation takes time - on the order of five years. In such a situation, it makes sense to look for standards that address the usability problems of software applications at a higher, conceptual level. The only such comprehensive standard to date that "provides a detailed guide to user interface design" [2] is ISO 9241. It covers many aspects of people's work with computers. The part of the standard that sets universal guidelines for developing usable software user interfaces is Part 110: Dialogue Principles, and Part 129: Guidance on software individualization can be used as a supplementary part. Part 210 deals with so-called "User-Centered Design" (UCD) or "Human-Centered Design" (HCD), which was formerly covered by ISO 13407:1999. Based on the definition of UCD as "an interactive systems development approach that focuses on creating usable systems" [7], it can be described as a multidisciplinary activity that incorporates ergonomics and performance- and productivity-enhancing techniques, improves the working conditions of people using a system, and neutralizes the possible adverse effects of its use on human health, safety and productivity. The principles behind the user-centered design described in Part 210 of ISO 9241 and adopted by those skilled in the art are as follows [3]: • Active involvement of the users of the system and a clear understanding of the tasks they are to perform • Proper distribution of functions between users and technology • Reuse of design solutions • Multidisciplinary design.
In addition, the UCD activities defined in the standard provide the guideline for creating user-oriented designs: • Learning about and specifying the context of use: this involves knowing the user, the environment of use, and the tasks for which he/she uses the product • Specifying user and organizational requirements: this activity refers to defining product success criteria for user tasks, such as how quickly a typical user must be able to complete a product task; it includes setting design guidelines and imposing various restrictions • Evaluating the design against the imposed requirements: the usability of the designs is evaluated according to user tasks. It should be noted that neither the principles nor the activities in Part 210 of ISO 9241 set specific rules that should be followed when creating user-oriented designs, but they are essential in defining the major factors that influence the creation of usable user interfaces. It can be concluded that the restrictive conditions taken into account relate to the audience, the context of use and the system requirements defined both by users and by the organization. The other parts of the ISO 9241 series focus, in general, on the basics of ergonomics and of human-computer interaction, for both software and hardware; on the accessibility of software; on the user-oriented design of interactive systems, etc., which are not relevant to the present study. Due to the limitations imposed by the format of the study submission, it is not possible to examine all the rules defined, but it can be summarized that Part 110 sets specific criteria for evaluating the dialogue that serve as guidance towards the desired end result of the user interface design process, namely its usability. The criteria are: acceptability of the dialogue when performing tasks, informativeness, compliance with the needs of the target audience, support for self-education, controllability, resistance to errors, and adaptability to the individual characteristics of users. The standard ETSI EG 202 116 Human Factors (HF); Guidelines for ICT products and services; "Design for All" is also based on ISO 13407:1999. It is "applicable to ICT products with a user interface that are connectable to all kinds of fixed and mobile telecommunications networks" [14]. As mentioned in the document, designing ICT products and services in this way results in a three-level model: 1. Mainstream products designed according to good Human Factors practice, incorporating considerations for people with impairments, that can be used by a broad range of users 2. Products that are adaptable to permit the connection of assistive technology devices 3. Specially designed or tailored products for users with severe disabilities. In Table 1, we synthesize some of the characteristics of the international standards presented so far, including their revision periods. The W3C standards fully address various aspects of the lifecycle of web applications, in terms of applying technology and good practices, and detailed instructions for developers are provided. On the other hand, not all types of mobile applications (native, i.e. intended for a specific platform, and hybrid/cross-platform) are covered.
There are no design principles for usable mobile applications, and there are also no user interface design templates or other examples of usable composition of interface elements. The ISO 9241 and ISO 13407:1999 standards address the problems of user-oriented interfaces, in particular usability, but do not address the specifics of mobile applications. ISO 13407:1999 was withdrawn and, at the time of this study, had been revised as ISO 9241-210:2010, which is currently ISO 9241-210:2019. ISO 9241 covers many aspects of the ergonomics of human-computer interaction, including the principles of dialogue (Part 110) and the user-oriented design of interactive systems (Part 210). Parts 129 and 161 may be used as supplements setting out dialogue principles and a guide for creating user interface elements, but without taking the particular context of use into account. Part 210 of the ISO 9241 standard sets out the principles and activities of user-centered design that are regarded by those skilled in the art as fundamental. Implementing the principles and activities defined in these standards in the context of mobile applications should provide a good basis for building usable interfaces. The ETSI standards are the ones that most directly concern mobile usability, but they have not been updated in recent years; the most recent of them dates from 2010. Unfortunately, the renewal period has not been described, only the establishment procedure [18]. We consider this a disadvantage in view of the rapid development of technology and changes in design trends. As an advantage, they are available online for free, and they provide useful and detailed guidance related to developing the interfaces of mobile devices. It should be noted that neither the principles nor the activities in any of the standards set specific rules to follow when creating user-oriented designs, but they are essential in defining the main factors that influence usable user interface design. It can be concluded that the restrictive conditions taken into account relate to the audience, the context of use and the requirements for the system defined both by users and by companies. Conclusion In summary, as noted by some authors [11], adherence to a specific standard may create an unnecessary restriction on a product's innovation, in particular its design. However, even if not sufficiently detailed, a standard "describes principles for user interface design, not working solutions." It should also be borne in mind that, due to the rapid development of information and communication technologies, a standard "may quickly become out of date" [11]. All international standards shall be reviewed at least every five years, as reflected in Table 1. Maintaining standards in dynamic areas, such as the development of mobile technologies, is not always possible, nor always effective. It is therefore necessary to pay attention to the design recommendations and principles provided by professionals working in the field of mobile technologies and by the companies developing mobile operating systems.
4,208.4
2020-05-06T00:00:00.000
[ "Computer Science", "Engineering" ]
Comparative characterisation of conventional and textured 11 kV insulators using the rotating wheel dip test : Surface tracking and erosion is an irreversible degradation occurring on the insulator surface, and this can ultimately lead to failure of the insulator. Polymeric materials such as silicone rubber have many advantages, including a superior hydrophobic surface. However, polymers are exposed to ageing and degradation resulting from electrical and environmental stresses. The rotating wheel dip test is adopted to conduct a comparative study of surface conduction on the two insulators. A conventional design has been selected and compared with insulators having a textured surface. Monitoring of the shed surface and insulator trunk using an IR camera was carried out to assess the temperature distribution along the insulator profiles. A spatial analysis was also performed to identify key features of the two designs. Localised surface conductance measurements are proposed in this study. This helps to understand and distinguish the trends of conductance and its distribution on each surface, helping to predict the future surface degradation associated with each design. Introduction Silicone rubber (SiR) polymeric insulators have now gained great acceptance and are widely used in overhead power distribution and transmission lines. This growth of use is attributed to inherent promising advantages over conventional porcelain and glass insulators [1,2]. One of the most important properties of SiR insulators is hydrophobicity, which is defined as the ability of polymeric materials to repel water on their surfaces. SiR insulators are considered to have good hydrophobic properties and, so, have an excellent pollution flashover performance compared to that of traditional hydrophilic insulators made from porcelain and glass [3,4]. On the surfaces of ceramic high voltage insulators, water readily forms a continuous film on the hydrophilic surface. In the presence of contamination, a leakage current develops, which may ultimately lead to flashover of the insulator. The hydrophobic surface properties of SiR insulators prevent the formation of a continuous film, and the water remains as individual droplets, which may simply flow away from the surface [5,6].
Pollution flashover can be mitigated by improving both insulator design and material properties [7,8].Ceramic insulators are made from an arc resistant material with high internal strength and excellent immunity to ultraviolet (UV) radiation.For ceramic insulators, the prevailing factor of pollution performance is leakage distance, where high leakage distance improves performance [9].SiR polymeric insulators are not arc resistant as ceramic insulators and are influenced by UV radiation.However, better pollution performance, reduced breakage and high flexibility make them an attractive option when selecting insulators for outdoor applications [10]. As SiR insulators can be sensitive to outdoor climate conditions, it is crucial to evaluate their long-term performance under different environmental conditions [11,12].The available literature review has shown that the electrical performance of SiR insulators is correlated to their surface properties.The surface situation of SiR insulators that are operated in severe environment conditions has been found to worsen with age.Ageing has been found to reduce the overall electrical performance of polluted SiR insulators and cause an increased incidence of discharge activities and dry band arcing on the surface.This increase can degrade the housing material by tracking and/or erosion, which in severe cases, leads to damage to the insulator [13].Therefore, the surface degradation and insulator flashover due to these discharges is still a subject of concern. Harsh environmental conditions would still result in discharge activities on the insulator surface, and the design of SiR insulators remains very simple due to the moulding limitations, which prevent the development of complex insulator profiles [14]. Textured insulators are a novel approach [15] to the design of polymeric insulators and have brought many advantages to improving insulator performance.Various textured designs of the polymeric surface may be achieved by using a pattern comprising an array of contiguous or overlapping protuberances.The aim of this design is to reduce the power dissipation by reducing both the electric field strength and the current density in the shank regions.This may be managed by increasing both the surface area and the creepage distance of the insulator without increasing its overall longitudinal length [14].Furthermore, it is observed that compared with conventional samples, textured patterns might mitigate the damage induced on polymeric materials due to surface discharges by presenting multiple paths for current conduction: when one current path initiates drying as a result of Joule heating, its resistance will increase.At this instant, the current flow will shift to a different path of lower resistance before significant thermal degradation occurs. This paper presents the details for the experimental setup, the insulator preparation, the mechanical design, the software and hardware computer tools and the measurement techniques used.Moreover, the performance of two different designs of SiR insulators textured and non-textured were investigated and compared. 
Insulator preparation Different types of four-shed insulator designs were used in this study. They were manufactured within the Cardiff HV laboratory facilities using room-temperature vulcanised (RTV-2) two-component SiR (600A/B), cast over a glass-fibre core with aluminium end fittings crimped at each end. The mechanical and electrical properties of the 600A/B liquid silicone used in this work are given in Table 1. The pigtail and pin aluminium fittings were attached directly to the fibreglass core, as illustrated in Fig. 1. A superior adhering primer was used to enhance the adherence of the SiR to the metal surfaces of the pin and the pigtail; strong bonding between them was achieved and verified with laboratory tests. The insulator profile adopted was based on a conventional design (CONV), as shown in Fig. 2a. The same basic profile was used for the textured insulators: a textured surface with hemispheres of 6 mm diameter was moulded on the insulator trunk (TT6) (Fig. 2b). The dimensions of both insulator designs are summarised in Table 2. Both types of SiR insulator were fabricated in the High Voltage Laboratory at Cardiff University. The conventional insulator was manufactured as a standard commercial insulator available on the market, while the TT6 insulator was designed on the same basic profile of a standard insulator with an added textured surface enhancement of the insulator trunk. The RTV-2 silicone material was prepared by mixing two components, the base resin (600A) and the curative (600B), at a ratio of 9:1 using an MCP 5/01 vacuum casting machine (Fig. 3a). During the casting process, the mixed components were injected into the mould. As can be seen in Fig. 3a, the main components of the casting machine are (1) the top mixing chamber, (2) the lower injection chamber and (3) the touch control panel. The mixed components of SiR (600A/B) are placed in the top chamber while the mould is placed in the lower chamber. At the beginning of the casting process, trapped air is removed from the mixed components by degassing the top chamber for a few minutes. After about 12 min of degassing and mixing, the SiR compound is ready to be injected into the lower part of the casting machine where the mould is located. During the injection process, and because of the complex geometry of the mould, a variable pressure is applied to force the mixture to fill the mould cavity. When material starts to emerge from the venting channels, the mould cavity is full. The vacuum casting machine is then switched off and the mould is placed in the oven for curing at 50°C for 8 h. Next, the mould is left to cool for 1 h before it is separated to reveal the cast (Figs. 3b and c). The insulator is then removed from the mould and visually inspected for any visible defect or void. If no imperfection appears, it is left for an extra 24 h at room temperature to ensure that the cross-linking of the polymer has been completed [17]. RWDT design and construction A rotating wheel dip tester was designed and constructed in accordance with the IEC 62730 standard [18]. The apparatus was designed to accept both AC and DC voltages. Figs. 4 and 5 show the circuit diagram and the arrangement of the RWDT test setup.
The Ferranti high voltage 7.5 kVA, 100 kV step-up transformer was fed from the 230 V mains supply voltage through a voltage controller, an isolating transformer and a low-pass LC filter.For DC tests, the Glassman WX15P70 Series DC source was utilised to provide 1 kW of output power with a voltage up to 15 kV and a current of 70 mA.A 2 rpm DC permanent magnet motor with a single pole single throw normally open relay was used to attain the rotational movement of the test insulators.The tank used for the salt solution was made of a glass reinforced plastic material with dimensions of 1.6 m × 0.25 m × 0.75 m. The voltage and current transducers consisted, respectively, of a capacitive voltage divider with a ratio of 3750:1 and a 200 Ω shunt resistor with low inductance for current measurement.The voltage and current signals were acquired using a computerised data acquisition system.For this purpose, a data acquisition (DAQ) card (MIO-16E series) was used, and its input was protected using a three-stage overvoltage protection system. Experimental conditions Based on IEC 62730:2012 standard, the applied voltage during the test under alternating and direct excitations should be 35 V/mm multiplied by the leakage distance.The leakage distance of the tested conventional insulator was calculated as 375 mm.The salinity of the solution in the test tank, consisting of NaCl and deionised water, was 1.4 g/l, and the salt solution shall be replaced weekly. During the test, each revolution of rotation should have four test positions consisting of energisation, de-energisation, dipping and dripping positions.The test samples of the same design shall be evaluated together.Pairs of test samples of different design shall be assessed separately.To successfully meet the acceptance criteria and pass the tracking wheel test, the surface tracking and erosion should not reach the glass fibre core and puncturing of the shed or housing was not allowed. Experimental procedure Before starting the test, the insulators were cleaned with deionised water and then mounted in the wheel test ring as shown in Fig. 6.In this test, each revolution takes 192 s with four test positions consisting of energisation, cooling, dipping and dripping.For each position, the test sample remains stationary for about 40 s, and it takes 8 s to rotate through 90° from one position to the next.In the first position, the insulator is dipped into the saltwater.The second position of the cycle allows the excess saline solution to drip off from the sample, ensuring the formation of a thin wet layer on the surface.In the third position, the sample is subjected to high voltage energisation.In the last stage of the test revolution, the surface of the tested insulator, which had been heated by discharges activities and dry band arcing in the previous period, is allowed to cool down [18]. Electric circuit of RWDT As can be seen on Fig. 
5, a metal frame is used to hold the wheel tester.At the top of this frame, a non-conductive bar is placed to support the electrode that was used to energise the insulator.In this circuit, the electrode is made of aluminium with some movement of the bar allowed to ensure good contact with the insulator at this position.The ground terminal of the insulator is electrically attached to the aluminium wheel; the wheel is then connected to the ring placed at the end of the rod through a cable that passes into a groove made in the rod.The ring then contacts a carbon brush, which is pressed against the ring during the revolution using a spring located behind it to ensure good contact.The brush is connected to a 200 Ω resistor, which is grounded and placed in a metal box equipped with a BNC connector and an overvoltage protection circuit [18].The voltage signal across this resistor is acquired by the DAQ card through a coaxial cable (RG58).The signal is then saved into a personal computer using a LabVIEW code.The operating voltage signal is acquired directly to the DAQ system through a capacitive voltage divider and a BNC attenuator with a total ratio of 3750:1. For the motor control system, the speed regulator and a box in which the relay is housed are placed on the other side of the frame.The relay receives the control signal from a digital output pulse generated by the DAQ card and the LabVIEW routine.The complete cell is firmly fixed to the metal frame equipped with wheels, which allow it to be easily moved. Motor and drive control In order to move the wheel where the insulator is mounted, a DC permanent magnet motor was used.Choosing such a kind of motor provides simple speed control and offers a fast response to starting and stopping.The requirement to have a simple speed control is due to the different weights of insulators; each type of insulator requires different torques to achieve the same movement time (8 s) between two different positions.As the torque/speed characteristic of a DC motor is quite linear, the speed can be simply controlled by changing the current [19]. Using the LabVIEW graphics programming language [20], an appropriate code was written to control the motor of the RWDT.The program was developed to control the motor speed and rotate the motor in the desired position.To generate the input signal for stopping the motor, it was decided to use the value of the instantaneous power as a threshold condition [21].Thus, when the insulator touches the high voltage electrode, the instantaneous power value reaches the threshold level.At this same time, the DAQ card generates a digital output signal to stop the motor.In this way, the user can follow the number of cycles that are run during the test and determine the optimal threshold value required to stop the motor.Fig. 
7 shows the flow chart of the motor control program and how it communicates with the main data acquisition system.The data acquisition and the motor control programs were linked together to detect the number of cycles completed during the test.The program stages to achieve this interfacing are described as follows: (a) The global variable value of the instantaneous power is created to interface both programs, and through this value, it is possible to check whether the motor has to be stopped.(b) When the insulator rotates between the two positions, the program outputs a digital pulse of 5 V, and the average power value is under the threshold level.(c) While the insulator is in a vertical position, where it is energised, the average power value exceeds the threshold level and the digital output of the program is carried down at 0 V, thus driving the motor to stop.(d) Once the motor has stopped, the global variable of the average power is set to true, thus allowing the DAQ system to run.After An additional improvement is the implementation of an optical incremental encoder to control the rotational speed and to achieve very good precision on vertical alignment with a wide range of insulator design and weights. The AEDB-9140 series encoder is three-channel optical incremental module.Each module consists of an LED source and a detector enclosed in a small plastic package.The AEDP-9140 encoder has two-channel quadrature outputs and the third channel is an index output.This index output is a 90 electrical degree high true pulse which is generated once for each full rotation.The AEDP-9140 provides sophisticated motion control detection at low cost, making them an ideal option for different applications [22]. Data acquisition system The E-series MIO-16E National Instruments board was used to acquire the leakage current and the voltage signals during the tests.The maximum input range varied between −10 and +10 V.The data acquisition board can acquire either 32 single-ended analogue inputs or 16 differential analogue inputs.The physical connection between the board and the RWDT control unit was attained by an SCB-68 connector block, as illustrated in Fig. 8.The connection between the connector and the DAQ board was made with an SHC68-68-EPM shielded cable.The data acquisition (DAQ) program of the RWDT was also built in LabVIEW.The program was designed to acquire, monitor and store the waveforms of leakage current and applied voltage signals.The interface of the DAQ program with the motor control system is achieved. The DAQ software starts acquiring/saving data when the insulator makes contacts with the HV electrode.The sample rate of the data acquisition board is 10 4 samples/s for each of the two analogue channels of the voltage and current signals.The leakage current and voltage traces were stored in a Technical Data Management Streaming (TDMS) file format [23].Data were saved in sets of 200 samples, which represent one cycle of the 50 Hz voltage and current waveforms.The acquisition of samples was repeated until all 4 × 10 5 samples had been processed and, then, the next file was accessed.Each acquired file represented the saved data of one-wheel revolution. 
Post-processing data The post-processing LabVIEW program was developed to read and analyse the data acquired from the RWDT. The program allows the user to choose the path from which to read the saved files and sets a specific path for the calculated parameters. The code is also responsible for specifying the electrical characteristics to be analysed. Electrical parameters such as the peak and root mean square (rms) values of the leakage current and applied voltage, the average power dissipation, the power factor angle and the absorbed energy are calculated. Studying the behaviour of these parameters helps to evaluate the performance of SiR insulators and material degradation under the test conditions. The rms leakage current magnitude is used to evaluate the discharge activity occurring on the insulator surface. These discharges may lead to surface drying and, ultimately, the formation of dry bands on the surface. The average power dissipation is used to measure the heat dissipated on the surface of the insulator, since this is one of the major causes of material degradation. The cumulative dissipated energy is used to calculate the total energy absorbed on the insulator surface; based on these losses, the degradation level of the surface can be estimated. The power factor index gives the phase shift angle between the voltage and the leakage current, which can be used to distinguish resistive surface conduction from surface discharge activity. Thermal and visual camera records The visual and thermal observations of discharge activity and dry band formation on the surfaces of the tested insulators during the wheel tests were performed using a FLIR A325 infrared (IR) thermal camera and a high-resolution video camcorder. The purpose of the IR imaging is to reveal any surface heating due to discharge activity and to detect dry band formation that may occur during the test, while the video camera was used to capture any discharge activity. The calibration of the IR camera was performed using an insulator at a known temperature. Calibration by the manufacturer during a recent maintenance service also confirms the precision of the device to within ±0.2°C of the target temperature. Using the FLIR ThermaCAM Research software, the camera saves the IR records as SEQ files. The IR camera has a spectral range from 7.5 to 13 μm, an IR resolution of 320 × 240 pixels and an image frequency of up to 60 Hz. Both the IR thermal and video cameras were fixed on tripods and placed outside the rotating wheel dip tester. Fig. 9 shows the temperature distributions along the surface profiles of the conventional and textured insulators. During the 10 h wheel test, the temperature distribution along the trunk regions (trunk A, trunk 1, trunk 2, trunk 3 and trunk B) of the tested insulators was measured. It can be clearly observed from the graphs that a higher temperature was recorded on the conventional insulator. This is attributed to the higher discharge activity and dry band arcing occurring on that surface, which was not observed on the textured insulator during the energisation period. The visual record from the digital camera reveals visible discharge activity and dry band arcing on the trunk surface of the conventional insulator. As illustrated in Fig. 10, the yellowish light of dry band arcing can be seen. Such arcing leads to tracking on the SiR surface [24].
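As an illustration of the kind of per-cycle processing described above, the following is a minimal sketch (not the authors' LabVIEW code) assuming the sampled voltage and leakage current are available as NumPy arrays at the stated 10 kS/s rate, i.e. 200 samples per 50 Hz cycle; the function name and return structure are illustrative.

```python
import numpy as np

SAMPLES_PER_CYCLE = 200   # 10 kS/s at 50 Hz, as stated in the text

def cycle_parameters(voltage: np.ndarray, current: np.ndarray, dt: float = 1e-4):
    """Per-cycle rms current, rms voltage, average power, power factor angle
    and cumulative energy for one energisation record.

    voltage, current: equal-length 1-D arrays of instantaneous samples.
    dt: sample interval in seconds (1e-4 s for 10 kS/s).
    """
    n_cycles = len(voltage) // SAMPLES_PER_CYCLE
    results, energy = [], 0.0
    for k in range(n_cycles):
        sl = slice(k * SAMPLES_PER_CYCLE, (k + 1) * SAMPLES_PER_CYCLE)
        v, i = voltage[sl], current[sl]
        v_rms = np.sqrt(np.mean(v ** 2))
        i_rms = np.sqrt(np.mean(i ** 2))
        p_avg = np.mean(v * i)                      # average (real) power, W
        s = v_rms * i_rms                           # apparent power, VA
        pf = p_avg / s if s > 0 else 0.0
        phi = np.degrees(np.arccos(np.clip(pf, -1.0, 1.0)))
        energy += p_avg * SAMPLES_PER_CYCLE * dt    # cumulative energy, J
        results.append((i_rms, v_rms, p_avg, phi, energy))
    return results
```

A strongly resistive surface gives a phase angle close to zero, while intermittent discharge activity and capacitive leakage shift it, which is how the power factor index mentioned above can be used to separate resistive conduction from discharge activity.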
After the end of the wheel test, significant changes in the tested sample surfaces were observed.Surface degradation by tracking and erosion was detected.Fig. 11 shows surface tracking on the moulding line of the conventional insulator trunk close to both energised and ground ends.Occurring of such tracking on the moulding seam can cause irreversible damage to polymeric insulators under severe pollution conditions [25,26]. Further test data processing is performed using the MATLAB environment in order to gain a better understanding of a temporal variation during the test and to identify any possible location of higher stress on the insulator surface.The MATLAB analysis identifies the temperature value for each pixel of each selected area and create the distribution of the temperature clustering the data in a class of 2.5 C.This help to understand the temperature distribution over the area and the extension of the hot area.Figs. 12 and 13 show the comparison of the analysis of the frequency temperature distribution of a conventional and textured insulators.The two designs show a very different pattern with the second distribution is more uniform and with lower average temperature. This spatial analysis helps to identify any improvement between designs.In this test, the presence of the seams due to the manufacturing process is evident in both designs.However, it impacts more consistently the smooth surface.In fact, the MATLAB analysis shows very localised and higher temperatures zones for the conventional insulator in comparison with the results of the textured insulator, where the area is more uniformly heated but with lower maximum values. Surface conductance measurements During a test using the RWDT, local surface conductance measurements were evaluated using a device built in according to IEC60507 standards [27].The possible arrangement of this device is described by a conductance meter with two spherical probes.The probes were made of stainless-steel pins 5 mm in diameter with a distance of 14 mm between their centres.The pins were springloaded so they could be pushed by hand against the insulator surface.The spring force at full compression was around 9 N.The voltage source with a Zener-diode at 6.8 V supplied the current through a probe between the electrodes.The measuring meter offered different full-scale deflections at values of 50, 100 and 500 µA.The selector switch was used to determine the scale deflection suitable for the measurement ranges.The measurements of the pollution layer were carried out at different places on the insulator surfaces.The uniformity of the surface layer is attained when the difference between each measured value and their average, as a percentage, is limited to ±30%. The probe was used to measure the conductance values on the insulator trunk, and top and bottom shed surfaces.The measurements were carried out for both insulator designs.The localised measurements were evaluated at selected times after the start of the test.This evaluation helped to identify and explain the trends of the conductance and its distribution on each insulator surface. The average conductance measurements of the top and bottom sheds of tested insulators are described in Figs. 
14 and 15, respectively. It is clear from the graphs that the textured profile showed a smaller increase during the test. The average surface conductance values of the upper shed surfaces of the textured insulator were 22% lower than those of the conventional insulator. This can be seen even more clearly on the bottom shed curve (Fig. 15), where the increase in surface conductance was marginal for the textured design and a 46% reduction in surface conductance was attained compared with the conventional one. It should be highlighted that, in the tested textured insulator, the shed surfaces themselves are not textured. Therefore, given the inclined shed surfaces, the conductance of the upper shed surfaces of the conventional insulator is enhanced by a more uniform wetting of the surface; on the textured insulator, wetting is affected by the trunk texture, which channels the water dripping from the hemispherical textures. For the bottom surfaces of the sheds, the difference is attributed to the higher power dissipation on the trunk of the conventional design causing more evaporation and condensation on the lower surfaces of the sheds. A similar trend was also observed on the insulator trunk, where a 42% reduction in surface conductance values was achieved, as can be clearly seen in Fig. 16. Several AC tests were performed on the two insulator designs. The RWDT technique was adopted to focus on the wetted-surface performance of the insulators (leakage current, power and energy dissipation), which allows a direct comparison between textured and non-textured insulators. The test is not aimed at measuring flashover performance or at complying with standards. Leakage current Fig. 17 shows typical waveforms of applied voltage and leakage current for the conventional and textured insulators, measured during the 15th test cycle, 24 s after the start of the third period. It is clear from the two plots that discharge activity occurred on both surfaces; however, it was more intense on the conventional surface. The highly resistive leakage current behaviour observed at initial energisation leads to drying of the surface and the formation of dry bands. When dry band arcing forms, it can be seen in the current waveform of the conventional insulator as the steps at 3.5 ms. This waveform is similar to that observed in [28]. In the case of the textured surface, the discharge activity is much lower, and a mostly conductive behaviour is observed. RMS leakage current Fig. 18 shows typical shapes of the RMS leakage current measured for the tested insulators. As described earlier, each insulator was energised for a period of 40 s.
For the conventional insulator, the wet surface dried over time due to the dripping of water and Joule heating. Non-uniform discharge activity caused heavy arcing and further drying. This drying progression was accompanied by a fall in the leakage current magnitude, which reached 3 mA at the end of the energisation period. For the textured insulator, a continuous thin conducting layer was maintained; discharge activity on the surface was limited, and the magnitude of the leakage current was only slightly reduced. The key advantage of textured insulators is the multiple current paths available when wet pollution is present on the surface. As current flows and dries one path, the current shifts to another path of lower impedance/resistance. Overall, this results in power dissipation that is highly distributed over the surface. By contrast, the non-textured insulator tends to form concentrated current paths, which lead to dry bands and discharges; these cause tracking and erosion characterised by high local current and power values. Power dissipation Fig. 19 illustrates the average power dissipated on the surface of the test insulators during the 15th wheel revolution. It can be seen that the power dissipation for the conventional profile increases significantly until it reaches an initial peak value of 40 W after 20 s. With the textured profile, however, the trend is mostly constant, with a lower power dissipation of about 20 W. After the initial power peak on the conventional surface, the dissipated power decreases gradually and significant distortion is observed during this period. This distortion is caused by the increasing discharge activity occurring on the insulator surface. For the textured insulator, the surface enables a significant reduction in the current density associated with the leakage current. The inclined plane test is an accelerated ageing test for material surfaces. Previous work in this area, published in [29], showed that the multiple current paths on textured surfaces protect the surface from tracking and erosion damage compared with a non-textured surface. In this way, it has been shown that the textured surface will survive longer periods of testing. If the contamination level is very high, erosion will still occur; however, after 10-15 years the texture dimples will show some erosion, bringing the situation close to that of a conventional surface but not to failure. If the same conditions were applied to a non-textured surface, the erosion would lead to the failure of the insulator. Therefore, the damage to the textured surface has always been less significant.
Conclusion Two different designs of SiR insulators were manufactured in house and were tested using the RWDT. The RWDT facility and its construction were described, and the software and the assembled hardware used to build the wheel test system were detailed. It was clear from the results that the higher temperatures were recorded on the conventional insulator. The visual records revealed visible discharge activity and dry band arcing on the trunk surface of the conventional insulator, which was not always observed on the textured insulator. Surface tracking and erosion defects were also seen on the conventional design at the end of the test. Moreover, the IR records and the spatial analysis using the MATLAB routine confirmed that the higher-stress zones were located on the smooth surface of the conventional insulator in comparison with the textured insulator. In addition, a reduction of the surface conductance index was achieved in the case of the textured insulator. For the AC tests, the leakage current measurements showed that drying and discharge activity are greater for the conventional insulator than for the textured insulator. The power dissipated by partial arcing on the conventional design is expected to be more damaging than the ohmic power loss on the textured design. This tendency suggests that the textured design reduces the loss of the hydrophobicity characteristics and, consequently, can promote an increase in the insulator's life expectancy and improve overall tracking and erosion performance. Fig. 14 Average surface conductance measurements for the upper surfaces of the four sheds. Fig. 16 Average conductance measurements for the trunk of tested insulators.
7,296.8
2020-04-27T00:00:00.000
[ "Physics" ]
Spare Parts Forecasting and Lumpiness Classification Using Neural Network Model and Its Impact on Aviation Safety: Safety critical spare parts hold special importance for aviation organizations. However, accurate forecasting of such parts becomes challenging when the data are lumpy or intermittent. This research paper proposes an artificial neural network (ANN) model that observes the recent trends of the error surface and responds efficiently to the local gradient, giving precise predictions for spare parts whose demand is marked by lumpiness. Introduction of a momentum term allows the proposed ANN model to ignore small variations in the error surface, to behave like a low-pass filter, and thus to avoid local minima. Using the whole collection of aviation spare parts with the highest demand activity, an ANN model is built to predict the failure of aircraft-installed parts. The proposed model is first optimized for its topology and is later trained and validated with known historical demand datasets. The testing phase introduces an input vector comprising the influential factors that dictate sporadic demand. When evaluated against other state-of-the-art models from the literature using related benchmark performance criteria, the proposed approach is found to provide superior results owing to its simple architecture and fast-converging training algorithm. The experimental results demonstrate the effectiveness of the proposed approach. Accurate prediction of cost-heavy and critical spare parts is expected to yield large cost savings, reduce downtime, and improve the operational readiness of drones, fixed-wing aircraft, and helicopters. It also resolves the dead-inventory issue that results from erroneous demands for fast-moving spares due to human error.
Introduction
Safety is the most desired feature in aviation, and safety critical spare parts hold special importance for aviation organizations. Forecasting the future consumption of safety-related spare parts is the most critical part of inventory management, as inaccurate prediction poses a serious challenge for the organizations responsible for the maintenance of aviation fleets [1]. In view of efficient spare parts management and decreasing maintenance budgets, keeping a reasonable inventory level is critical in the aviation industry, where lead time does not always satisfy the actual demand, due to which spare parts pile up in the warehouse. Figure 1 provides an example of a sample depot that is responsible for the storage of spares and for the maintenance of helicopters and fixed-wing aircraft from four different origins. The depot stores around 97,693 spares, including 18,930 time change components (TBOs) that are categorized as slow-moving spares. These spares are expensive and are replaced based on either operational time or calendar years. The depot maintains two types of spares: fast-moving spares (selective stock list (SSL) and expendable spares) and slow-moving spares (TBOs and non-selective stock list (nonSSL)). The SSL items are stocked for one quarter based on the demand/consumption of the previous four quarters. The TBOs, being critical and costly items, are stored for the next five years, based on the last two years of consumption data from maintenance setups. NonSSL and expendable items are stored for four quarters based on projections. The problem of modeling future consumption is further aggravated by a lumpy spare part demand pattern [2,3], marked by periods with no demand along with sporadic demand [4,5], as shown in Figure 2. Continuous inspections and preventive maintenance render the aircraft unserviceable for operational requirements, which imposes costs on the maintenance organizations. Therefore, the topmost priority for small and medium enterprises is to make the right spare parts available at the hour of need and at the required location. The increasing downtime of aircraft can be managed with efficient forecasting to achieve better operational fleet performance, which is a less researched domain in the aviation sector and warrants investigation. In this connection, system experts [6,7] have utilized the statistical techniques of exponential smoothing and regression analysis [8,9], but such approaches are found to perform inaccurately when a lumpy demand pattern is processed [10]. Another interesting approach presents an innovative single-site model that takes advantage of the zero-inflated Poisson distribution. The authors demonstrate the model's effectiveness, confirming that their approach outperforms the traditional Poisson-based approach [11].

Different methods, with varying degrees of success, have been used to forecast lumpy and intermittent demand data. These techniques comprise a variety of models, such as Holt-Winters [12] as a statistical method. Similarly, machine learning methods such as support vector regression (SVR) [13] are used, while long short-term memory (LSTM) networks [14] are used as a deep learning method. However, it is still debatable which technique or meta-level approach predicts lumpy and intermittent demand data most accurately. This is due to a lack of historical research on intermittent and lumpy time series data [15]. In recent years, machine learning and deep learning research has also advanced rapidly.
Among these, artificial neural networks (ANN) are considered the most recognized artificial intelligence (AI) method [16] for handling lumpy demand patterns because they have outperformed the traditional techniques in several fields [17][18][19][20]. With the added advantage of working the way the human brain does, by acquiring training from historical demand data, such models perform well. They then infer future demand based on nonlinear pattern recognition and the correlations established between the predictor variables and the outcomes. This way of learning has attracted many system managers in the field of aviation who wish to harness the uncertainties and forecast future spare parts demand. On the other hand, a limitation to the use of ANNs remains: it is difficult to explore the best forecasting model because of the sensitivity analysis required on the learning rate, momentum coefficient, and number of hidden neurons. This work develops an ANN-based network that forecasts the demand for aviation service parts with lumpy patterns. The work is motivated by the fact that ANNs do not require any parametric assumptions about the data. In particular, it has been demonstrated that the feedforward multilayer perceptrons utilized here are universal approximators; therefore, they should be able to capture the data-generating process of intermittent demand time series. ANNs are adaptive models that do not require human professionals to provide them with a predetermined architecture. The proposed ANN technique enables interaction between the quantity of demand and the inter-demand intervals of demand events, or their lags, if such interactions can be discovered from the data, without requiring expert input. Their versatility makes them naturally suitable for accommodating the irregular nature of demand. We demonstrate that the proposed method is superior to others in terms of forecasting and inventory performance, based on a comprehensive comparison with the other techniques when measured using the MASE metric. The performance evaluation based on the MAE metric is comparable to other approaches.

The remaining sections are organized as follows: Section 2 of this paper reviews the relevant literature. The proposed methodology is comprehensively described in Section 3. Section 4 presents the findings and a comparative analysis of the various existing approaches. Section 5 provides a summary of the paper based on our analysis of the results and discusses possible future research directions.

Related Work
John Croston was the first to introduce a method designed specifically for intermittent data [21]. Croston proposed separating the data into two series: one for arrival times and another for positive demand. Croston noted that the time series data for intermittent demand differ significantly from conventional time series data: the former have multiple zero-demand intervals. He presented an alternative technique to forecast demand from intermittent time series data. Croston stated that his method presupposes independence between demand size and demand intervals. However, Willemain et al. [22] cast doubt on this independence assumption. It was nevertheless maintained in subsequent work that improved Croston's original method, such as Syntetos and Boylan [23]. Syntetos and Boylan investigated Croston's method and found it to be biased. Later, they presented a modified version of Croston's method to resolve the bias issue [24].
Levén and Segerstedt proposed a modification of Croston's technique that attempts to eliminate its inherent bias [8]. Nevertheless, the Levén and Segerstedt method is more biased than the Croston method, particularly for highly intermittent series. Willemain et al. identified patterns of intermittent demand in several other scenarios, such as heavy machinery, electronics, and maritime spare parts [25]. Similarly, Syntetos and Boylan studied intermittency in automotive spare parts [26]. Ghobbar and Friend analyzed the demand for costly aircraft maintenance parts [1]. They observed that businesses were holding too much inventory due to inaccurate demand forecasts, resulting in subpar service levels. Demand that is forecast too large or too small, or expected at the wrong time, leads to mistakes in either case. Therefore, accurate demand projections are necessary to support inventory holding and replenishment decisions. In addition, as a result of shortened life cycles, technology migrations, lengthy production cycles, and protracted lead times for capacity expansion, electronics companies face complexity in inventory management and the risk of excess supply and shortfall of important components [27].

The work of [28] performs intermittent prediction on data collected from the Internet of Things (IoT) using a recurrent neural network (RNN). The performance evaluation metric is accuracy. In this specific prediction task, RNNs outperformed ANNs. As an alternative to the Croston method, a new method based on stochastic simulation is investigated in [29]. Different evaluation metrics are used, such as mean error (ME), mean absolute deviation (MAD), MASE, and a metric D proposed by the authors. In conclusion, the proposed method did not outperform the existing standard. In [30], the ATA technique was compared with the Croston technique. The data from the M4 competition were utilized. The ATA and exponential smoothing methods are similar, but their respective emphases differ. The study predicted six future time steps. As evaluation metrics, both the mean squared error and the standardized mean absolute percentage error are utilized. On out-of-sample evaluation, the ATA method is superior to the Croston method. In [31], a novel method, the modified SBA, is presented for intermittent forecasting. In conclusion, the proposed method cannot compete with the current method. The authors of [32] employ a deep neural network (DNN) to predict sensor data. The ARIMA and generalized autoregressive conditional heteroskedasticity (GARCH) techniques have been surpassed by this method. This study utilized simulations, which require a well-considered parameter design. Ref. [33] provides an example of a simulation design that investigates multiple parameter combinations. The study results demonstrate promising outcomes.
A study in [34] introduces a seasonal adjustment method and a dynamic neural network as the primary seasonal forecasting model. Zero demand is counteracted by preprocessing the initial input data and by adding input nodes to the neural network. This study proposed a revised error measurement method for evaluating performance. The proposed forecasting framework outperforms competing models for intermittent demand. Two ANN models, each using 36 observations as training data, were proposed in [35]. The proposed model was able to achieve competitive inventory performance relative to Croston despite low forecasting accuracy. In [36], an ANN and an RNN are trained and compared for intermittent demand forecasting. The results show that ANN performance is superior to that of the RNN on long-term demand series. Another work [37] conducts an empirical analysis utilizing 5133 SKUs from an airline and obtained forecast performance and inventory performance outcomes for different methodologies. The findings of this study indicate that the proposed ANN approaches are less biased than the other evaluated methods.

Table 1 summarizes different datasets with different patterns, including intermittent, lumpy, and smooth. It also shows the industry related to each dataset. The forecasting methods used in the referenced articles and the evaluated metrics are also included. From the table, it can be observed that the performance of the various methods varies with the input datasets.

Materials and Methods
In this section, different forecasting approaches are discussed. The data procured from the Central Aviation Spares Depot (CASD) require categorization based on the proposed lumpiness classification; the ANN architecture is then discussed in detail.

Proposed Lumpiness Classification
The lumpiness factor provides a measure of the relative variability between the stochastic distributions of demand transactions and the intervals between transactions. Three statistical aspects of the historical demand data are considered:
• intervals between the demands;
• demand size;
• the relationship between intervals and sizes.
The proposed lumpiness classification is based on the coefficient of variation (CV) concept, which is a useful approach when variability between point estimates is compared [44,45].

Croston Method
This technique was first developed by [22] to predict intermittent and lumpy data. The method estimates the amount of demand and the occurrence of demand separately to obtain the forecast value; that is, the likelihood of a demand occurrence is assumed not to depend on the amount of demand at a given moment. Let V_t and Z_t be the estimated average interval between occurrences of non-zero demand and the estimated mean of non-zero demand, respectively, at time t. The actual demand value at time t is represented as X_t, and the number of consecutive periods of zero demand is indicated as q. Then, Y_t denotes the average estimate of the amount of demand. Mathematically, when a non-zero demand X_t occurs, the estimates are updated as
Z_t = αX_t + (1 − α)Z_{t−1},
V_t = αq + (1 − α)V_{t−1},
and q is reset to 1; when no demand occurs, the estimates are carried forward and q is incremented by one. The demand estimate is then Y_t = Z_t / V_t. However, other authors have observed that the Croston approach has a positive bias [46]. The authors of [47] first demonstrated this bias and then corrected it by multiplying the estimate formed from Z_t and V_t by the factor (1 − α/2). In this section, we will compare the results obtained using the modified Croston approach.
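The following is a minimal sketch of the Croston recursion described above, with the optional (1 − α/2) bias correction attributed to Syntetos and Boylan. The function name, the smoothing constant, and the example series are illustrative and are not taken from the paper's data.

```python
import numpy as np

def croston_forecast(demand, alpha=0.1, sba_correction=False):
    """One-step-ahead Croston forecast for an intermittent demand series.

    demand : sequence of per-period demand values (zeros allowed).
    alpha  : smoothing constant used for both size and interval estimates.
    sba_correction : if True, apply the (1 - alpha/2) bias correction.
    Returns an array of forecasts Y_t = Z_t / V_t aligned with `demand`.
    """
    demand = np.asarray(demand, dtype=float)
    nonzero = demand[demand > 0]
    if nonzero.size == 0:
        return np.zeros_like(demand)

    z = nonzero[0]        # smoothed non-zero demand size, Z_t
    v = 1.0               # smoothed inter-demand interval, V_t
    q = 1                 # periods since the last non-zero demand
    forecasts = np.empty_like(demand)
    for t, x in enumerate(demand):
        forecasts[t] = z / v          # forecast made before observing x
        if x > 0:
            z = alpha * x + (1 - alpha) * z
            v = alpha * q + (1 - alpha) * v
            q = 1
        else:
            q += 1
    if sba_correction:
        forecasts *= 1 - alpha / 2
    return forecasts

# Example on a short lumpy series (illustrative values only):
series = [0, 0, 7, 0, 0, 0, 12, 0, 3, 0, 0, 9]
print(np.round(croston_forecast(series, alpha=0.15, sba_correction=True), 2))
```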
Auto-ARIMA
ARIMA is the popular name for the Box-Jenkins prediction method, which was introduced in 1970. It is essentially an extrapolation strategy that forecasts the underlying variable using existing time series data. It consists of three processes for estimating the ARIMA model, namely identification, estimation, and diagnosis. According to this approach, the upcoming value of a variable is modelled as a linear function of its previous values and previous errors, stated mathematically as
Y_t = β_0 + β_1 Y_{t−1} + ... + β_p Y_{t−p} + ε_t + θ_1 ε_{t−1} + ... + θ_q ε_{t−q},
where Y_t is the estimated value of the variable in period t, expressed as a function of its own lagged values. The random error in this period is denoted by ε_t. Here, β_i and θ_j are constants, and the autoregression and moving average lags are represented by p and q, respectively. The portion of the equation from β_0 to β_p Y_{t−p} is the autoregression (AR) part, while the remaining part is the moving average (MA) part; this is why the equation as a whole is referred to as ARMA. The degree of differencing at which the dependent variable Y_t attains stationarity is represented by I(d). Since ARIMA models account for the lags of the dependent variable, the random error of the estimate, and the order of differencing required for stationarity, the models are written as ARIMA(p, d, q). The autocorrelation function (ACF) and partial autocorrelation function (PACF) plots are used to identify suitable values of p and q [24].

Simple Exponential Smoothing
One of the basic methods for predicting a time series is simple exponential smoothing [48]. This model's fundamental presumption is that the future will mostly resemble the recent past; as a result, the level of demand is the key signal the algorithm extracts from historical data [49]. At each interval, the model learns a little from the current demand observation and retains part of the previous forecast. This is noteworthy because the most recent prediction already contains information from prior demand estimates; in other words, the prior forecast carries the demand pattern insights that the model has previously obtained. Mathematically, it can be written as
F_{t+1} = αD_t + (1 − α)F_t,
where F_t is the forecast for period t, D_t is the demand observed in period t, and α is the smoothing parameter that indicates how much weight the model gives to the current observation compared with the weight it gives to the demand history.
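A minimal sketch of the smoothing recursion above is given below; the initialisation with the first observation and the α value are illustrative choices, not the paper's settings.

```python
import numpy as np

def simple_exponential_smoothing(demand, alpha=0.3):
    """One-step-ahead SES forecasts F_{t+1} = alpha*D_t + (1-alpha)*F_t.

    The first forecast is initialised with the first observation; alpha
    controls how much weight the current observation receives relative
    to the accumulated demand history.
    """
    demand = np.asarray(demand, dtype=float)
    forecasts = np.empty(len(demand) + 1)
    forecasts[0] = demand[0]                     # simple initialisation
    for t, d in enumerate(demand):
        forecasts[t + 1] = alpha * d + (1 - alpha) * forecasts[t]
    return forecasts[1:]                         # forecasts for periods 2..n+1

quarterly_demand = [5, 0, 9, 2, 0, 0, 11, 4]     # illustrative values only
print(np.round(simple_exponential_smoothing(quarterly_demand, alpha=0.2), 2))
```

For the Auto-ARIMA variant described above, an automated search over the (p, d, q) orders can be run with, for example, the auto_arima function of the pmdarima package; the exact tooling used by the authors is not stated here, so this is only a pointer rather than their implementation.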
Artificial Neural Network Modeling
The prime concern in a neural network system trained by examples is how well it generalizes out-of-sample. If the system memorizes the training set, it is likely to perform well in-sample and badly out-of-sample; for time series analysis, this translates into whether the trained net can make good forecasts [47,50,51]. Several neural network parameters, such as the learning rate, momentum coefficient, and number of hidden neurons, need to be finalized in the design process, and their values demand experimentation or trial-and-error until the best forecasting results are achieved. A multilayer perceptron (MLP), a three-layered network containing input, hidden, and output layers, is designed to explore the correlation between the input and output datasets [52].

The number of input neurons represents the number of influential or high-risk factors that contribute to the behavior of the neural network model. Deciding on the predictor variables is a real challenge, as too many of them will complicate the network and the solution will diverge instead of converging.

In backpropagation networks (BPN), how well a network can learn from historical demand data depends on the number of hidden neurons. If too many are used, the network does not generalize but rather memorizes the spare parts' past usage history, and vice versa [21]. Thus, finding the right combination of hidden and input neurons is a matter of trial and error, although an approximation can be obtained from the relation given in [53].

The output neuron in the output layer of the proposed ANN model represents the future demand value, which is the quantity to be estimated.

Recurrent Neural Network Modeling
A recurrent neural network (RNN) is another form of the conventional artificial neural network (ANN), developed by [31]. Feedforward or multilayer perceptron designs are significantly different from RNN architectures. An RNN is a dynamic system that represents a temporal state and has strong computational capabilities that can be applied to a wide range of temporal processes. RNNs are frequently used to explore time-series data, including text, images, and other sequential data. A straightforward RNN design with a feedback process is shown in Figure 3. Owing to this architecture, information for state t + 1 is provided by the output received at stage t; this information is represented through the weights and parameters. This one-delay-step procedure takes place in the hidden layer and is widely used. The hidden layer and output layer contain the activation functions of the RNN. Mathematically, an RNN can be represented as
h_t = f(W_x x_t + W_h h_{t−1} + b_h),
y_t = g(W_y h_t + b_y),
where x_t is the input, h_t the hidden state, y_t the output, and f and g the hidden- and output-layer activation functions.

Long Short-Term Memory Modeling
The LSTM neural network, proposed in [32], is a kind of recurrent neural network that is especially suitable for modeling long-range relationships. In contrast to plain recurrent neural networks, the LSTM architecture consists of memory blocks rather than simple hidden units. A memory block has one or more memory cells regulated by nonlinear sigmoidal gates applied multiplicatively. Memory cells share the same gates in order to reduce the number of parameters. These gates control whether the values are retained or discarded; thus, the network is able to leverage long-term temporal context [33]. The unit activations can be found with the following equations:
i_t = σ(W_xi x_t + W_hi h_{t−1} + W_ci c_{t−1} + b_i),
f_t = σ(W_xf x_t + W_hf h_{t−1} + W_cf c_{t−1} + b_f),
c_t = f_t c_{t−1} + i_t tanh(W_xc x_t + W_hc h_{t−1} + b_c),
o_t = σ(W_xo x_t + W_ho h_{t−1} + W_co c_t + b_o),
h_t = o_t tanh(c_t).
The logistic sigmoid function is denoted by σ, the cell activation vectors by c, the input gates by i, the forget gates by f, and the output gates by o. The size of the hidden vector h is the same as that of the other vectors. The W notations represent the weight matrices, including the cell-to-gate (peephole) connections. The output activation function is denoted by tanh [54,55].
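As an illustration of how such a sequence model can be set up for a demand series, the sketch below windows a univariate history and fits a small Keras LSTM. The layer size, window length, optimizer, and placeholder data are assumptions for the sake of a runnable example and are not the configurations of the referenced studies; note also that Keras' standard LSTM cell omits the peephole (cell-to-gate) terms shown above.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=4):
    """Turn a univariate demand series into (samples, window, 1) inputs
    and next-period targets for sequence models."""
    series = np.asarray(series, dtype="float32")
    x = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return x[..., np.newaxis], y

# Illustrative quarterly demand history (not real depot data)
history = np.array([0, 3, 0, 0, 8, 0, 2, 0, 0, 11, 0, 4, 0, 0, 6, 1], dtype="float32")
x_train, y_train = make_windows(history, window=4)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4, 1)),
    tf.keras.layers.LSTM(32),          # memory-block layer
    tf.keras.layers.Dense(1),          # next-period demand estimate
])
model.compile(optimizer="adam", loss="mae")
model.fit(x_train, y_train, epochs=100, verbose=0)

next_window = history[-4:].reshape(1, 4, 1)
print("forecast for next period:", float(model.predict(next_window, verbose=0)[0, 0]))
```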
Calibration
Calibration optimizes the network by computing the mean squared error between the actual and forecast demand values [5,56]. As training continues, the average error keeps decreasing, as shown in Figure 4. Eventually the average error reaches a point where the curve is nearly horizontal with respect to the epoch axis, after which it slowly begins to grow again [57]. This point is the optimal point, beyond which training ceases to make progress, and calibration saves the network at this optimal point.

Implementation Methodology
This section describes the method used in this research, which comprises the following stages [8,25]:
• filtering the critical spare parts (CSP), characterized by lumpiness, from the whole collection of aviation spare parts having the highest demand activity [58];
• defining the influential factors, i.e., the variables most strongly predictive of an outcome [41];
• utilizing the neural network model to forecast the unknown future consumption values.

Lumpiness Factor Calculation
The lumpiness factor was evaluated from the coefficients of variation of the demand sizes and of the inter-demand intervals defined below; note that lumpy demand implies σ_I/µ_I > 0.

Calculation of Size Information
The coefficient of variation of the demand size was found by taking the ratio of the standard deviation of the demand size to its mean [58]:
CV_size = σ_S / µ_S, with µ_S = (1/N) Σ_{i=1}^{N} S_i and σ_S = sqrt( (1/N) Σ_{i=1}^{N} (S_i − µ_S)² ),
where S_i is the demand size of observation i, N represents the total number of observations, and i denotes the observation at a specific time.

Calculation of Interval Information
The coefficient of variation of the number of no-demand periods was found by taking the ratio of the standard deviation to the mean of the time periods between two successive demand occurrences:
CV_interval = σ_I / µ_I.

The research process flowchart, given in Figure 5, shows the steps used to arrive at the best forecasting results. Dividing the dataset into training, validation, and test sets is an essential step in developing an efficient machine learning model and requires sufficient observations. In our case, the data are randomly shuffled to ensure that the samples are ordered randomly. Initially, they are divided into two parts: 80% of the data are kept in the training set, and 20% are kept in the test set to evaluate the final model. The training set is used to train the model, with a further 20% of the training data selected as the validation set. The validation set is used to monitor the model's performance during training, to avoid overfitting while tuning the hyperparameters. The dataset has been collected from a spare depot that stocks fixed-wing and helicopter spare parts and deals with their demand and supply process on a yearly basis, in line with the operation shown in Figure 2. The sample comprises around 23,600 observations, including 3770 lumpy-demand spares, with the rest showing regular demand patterns, from a list of 97,693 items related to flying machines from four different origins. Figure 2 is drawn for a single spare part, number 8AT-1250-00-02, with the nomenclature vibration damper assembly, which was demanded in 24 quarters (6 years, 2009-2015) and shows no regular pattern. Attention is also drawn to Table 2, where ten high-frequency spare parts that are lumpy in nature have been selected and the necessary details are given. For lumpiness classification, neural network models can be used to identify which spare parts have irregular demand patterns. This is done by training the model on a labeled dataset consisting of spare parts with known demand patterns. The input to the model can include the demand data for each spare part, and the output can be a binary classification indicating whether a spare part has a lumpy or regular demand pattern.
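A minimal sketch of the CV-based screening described above is given below: it computes the coefficient of variation of the non-zero demand sizes and of the inter-demand intervals and flags a series as lumpy when both exceed a cut-off. The cut-off values, function names, and example series are illustrative assumptions; the paper's exact thresholds are not reproduced here.

```python
import numpy as np

def demand_variability(demand):
    """Coefficients of variation of demand sizes and inter-demand intervals.

    Returns (cv_size, cv_interval), where each CV is std / mean, computed over
    the non-zero demand sizes and over the gaps between successive demands.
    """
    demand = np.asarray(demand, dtype=float)
    nz = np.flatnonzero(demand > 0)
    sizes = demand[nz]
    intervals = np.diff(nz) if nz.size > 1 else np.array([1.0])
    cv_size = sizes.std() / sizes.mean() if sizes.size else 0.0
    cv_interval = intervals.std() / intervals.mean()
    return cv_size, cv_interval

def is_lumpy(demand, size_cut=0.5, interval_cut=0.5):
    """Label a series lumpy when both CVs exceed the chosen cut-offs
    (the cut-off values here are illustrative, not the paper's)."""
    cv_size, cv_interval = demand_variability(demand)
    return cv_size > size_cut and cv_interval > interval_cut

damper_history = [0, 0, 5, 0, 0, 0, 14, 0, 0, 2, 0, 0, 0, 0, 9, 0]  # illustrative
print(demand_variability(damper_history), is_lumpy(damper_history))
```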
The ANN-based demand forecasting procedure is as follows. The first step is input variable selection. For all networks, the number of factors is the same, denoted x_1 through x_3:
• x_1 - average of the difference between the demand and its mean of four quarters, over three years;
• x_2 - average of the four-quarter demand over three years;
• x_3 - average of the four-quarter inter-demand interval over three years.
Data pre-processing is the second step. Leading and trailing zeros were truncated from both the known input and output dataset values. In the third step, scaling is performed; the inputs and target values were scaled according to the equation given in [58].

Training is the fourth step of the methodology. The network training algorithm 'traingdm' was used. The activation function in the hidden layer was set to 'tan-sigmoid', whereas the output layer used the 'log-sigmoid' activation function [41]. The learning rate was set to 0.05, and the network was trained for 500 epochs. The architecture of the proposed ANN model is given in Figure 6. The outcome was simulated with the trained and validated ANN model [59]. After that, post-processing was carried out: the outputs were descaled using the inverse transformation equation given in [58].
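To make the training configuration just described concrete, the sketch below mirrors it in Keras: a three-input, single-output MLP with a tanh hidden layer, a sigmoid output layer, and plain gradient descent with a momentum term at a learning rate of 0.05 for 500 epochs ('traingdm', 'tan-sigmoid', and 'log-sig' suggest the MATLAB Neural Network Toolbox, for which this is only a stand-in). The hidden-layer size of 7, the momentum value of 0.9, and the random placeholder arrays are assumptions; the real inputs would be the scaled depot data.

```python
import numpy as np
import tensorflow as tf

# x: the three influential factors x1-x3 per spare part; y: next-period demand.
# Random placeholders stand in for the scaled (0-1) depot dataset.
x = np.random.rand(40, 3).astype("float32")
y = np.random.rand(40, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(7, activation="tanh"),      # tan-sigmoid hidden layer (size assumed)
    tf.keras.layers.Dense(1, activation="sigmoid"),   # log-sigmoid output layer
])

# Gradient descent with a momentum term; the 0.9 momentum value is an assumption,
# while the learning rate and epoch count follow the description above.
opt = tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9)
model.compile(optimizer=opt, loss="mse")
model.fit(x, y, epochs=500, validation_split=0.2, verbose=0)

print("final training MSE:", model.evaluate(x, y, verbose=0))
```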
The mean absolute percentage error (MAPE) is a popular metric for measuring the accuracy of a predictive model. Lolli et al. [36] use MAPE as a criterion to evaluate the accuracy of different forecasting models. The MAPE is calculated as the average of the absolute percentage errors over all forecast periods:
MAPE = (100%/n) Σ_{t=1}^{n} |A_t − F_t| / |A_t|,
where F_t is the forecast value in period t, A_t is the actual value in period t, and n is the total number of periods; the absolute value and percentage ensure that the errors are positive and relative to the actual value. This MAPE criterion is used to compare the forecasting performance of different models, including ARIMA, seasonal decomposition of time series, exponential smoothing, and neural networks, on various demand patterns for aircraft spare parts [36]. Other metrics, such as MAE, MASE, and the root mean square error (RMSE), are also used for similar purposes in the literature [60][61][62][63].

MAE is a commonly used metric in time series forecasting for evaluating the performance of a forecasting model. It measures the average magnitude of the errors, i.e., the average absolute difference between the actual and predicted values of a variable:
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|,
where n is the total number of observations, y_i is the actual value of the i-th observation, and ŷ_i is the corresponding predicted value. In other words, it computes the absolute difference between each actual value and its prediction, sums these differences, and divides by the total number of observations. The MAE is a useful metric because it gives a sense of the magnitude of the prediction errors regardless of their direction. It is also relatively easy to interpret, as it is expressed in the same units as the variable being predicted.

MASE is another common metric used to evaluate the accuracy of a time series forecasting model. It measures the forecast accuracy relative to a naive forecast and allows different forecasting methods to be compared across different time series. A value of 1 indicates that the forecasting method is no better than the naive forecast, while a value less than 1 indicates that the method is better than the naive forecast. MASE is calculated as the MAE of the forecast values divided by the MAE of a naive forecast, where the naive forecast is typically the simple persistence forecast (i.e., using the last observed value as the forecast for the next period):
MASE = [ (1/n) Σ_{t=1}^{n} |A_t − F_t| ] / [ (1/(n−1)) Σ_{t=2}^{n} |A_t − A_{t−1}| ],
where A_t is the actual value at time t, F_t is the forecast value at time t, and n is the number of observations in the dataset.

Results and Discussion
The spare parts demand forecast produced by the proposed artificial neural network approach is explored in this section using the mathematical tools and techniques discussed earlier. The discovery of the predictor variables, or influential factors, dictating the historical demand pattern was the most critical part of the research and ultimately formed the inputs to the artificial neural network model. Historical data were procured from CASD; they showed lumpiness, which made forecasting a real challenge. The spare parts were then prioritized based on the highest demand activity and lumpiness factor, as shown in Table 2. The squared-error dynamics of training the neural network are shown in Figure 7. The MSE training curve reaches a minimum value of 0.074371 after training for 500 epochs. Table 3 shows the computation for the testing dataset. The inverse transformation formula requires the maximum and minimum demand data values of the year 2014-2015, which were unavailable; the unknowns were therefore approximated by taking the average of the corresponding maximum and minimum values of the four quarters over five years (2009-2014). For problems involving time series prediction, RNNs are among the most preferred models; in the field of natural language processing, they have performed particularly well. RNNs are general-purpose approximators, like ANNs. In contrast to ANNs, the feedback loops of recurrent cells naturally handle the temporal dependencies and temporal order of the sequences [64]. In [65], the authors proposed an RNN model to forecast intermittent and lumpy time series data. To compare our model with the RNN model, results on lumpy data from [66] are considered here. The R package 'tsintermittent' is used for the simulation process. A two-layer RNN model is used, consisting of a 64-unit recurrent layer and a single-node dense layer. The sigmoid and rectified linear unit (ReLU) are used as activation functions. The mean absolute error (MAE) is used for performance evaluation; the lower the MAE score, the better, because MAE measures the average error between the predictions and the intended targets and we want to minimize this value. The results show that the RNN performs well in the forecasting task, with a calculated MAE of 0.6, compared with the simple ANN model, whose calculated MAE is 0.4. A comparison of the RNN model with the ANN, Croston, an ANN-based approach using the Levenberg-Marquardt training algorithm, SVM, adaptive univariate SVM [39], and single exponential smoothing (SES) models was also carried out. The results show that the proposed approach based on gradient descent with a momentum term outperforms the other methods in the case of lumpy data.
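For reference, the error metrics used in these comparisons can be computed with the short numpy functions below; the example arrays are illustrative only. Note that MAPE is undefined whenever an actual value is zero, which is one reason MAE and MASE are the preferred measures for intermittent series.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error; requires all actual values to be non-zero."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def mae(actual, forecast):
    """Mean absolute error between actual and forecast values."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def mase(actual, forecast):
    """MAE of the forecast scaled by the MAE of the one-step naive forecast."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    naive_mae = np.mean(np.abs(np.diff(actual)))
    return mae(actual, forecast) / naive_mae

a = np.array([4.0, 0.0, 7.0, 2.0, 9.0, 1.0])   # illustrative actuals (contains a zero)
f = np.array([3.0, 1.0, 6.0, 3.0, 8.0, 2.0])   # illustrative forecasts
print(mae(a, f), round(mase(a, f), 3))          # mape(a, f) would divide by zero here
```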
MAE values obtained for the different methods are listed in Table 5. A lower MAE indicates better performance and vice versa. In [67], a comparison is made between various approaches to forecasting time series, including classical, machine learning, and deep learning approaches. This study examined four distinct classes of data, labeled A, B, C, and D, distinguished from one another according to the degree of lumpiness and intermittency exhibited by the data. Class C data are lumpy, and their characteristics are very comparable to those of the data used in our work. Therefore, we compare our prediction results with the forecasting results obtained on data C using various models. The referenced study selected the Croston, Holt-Winters, and Auto-ARIMA techniques from the classical category; RF, XGBoost, and Auto-SVR models from the machine learning category; and MLP and LSTM models from the deep learning category for evaluation and comparison.

The LSTM displays the best performance, with a MASE of 0.97, followed by Auto-ARIMA with a MASE of 1.04. The Croston technique has the worst performance, with a MASE value greater than 2, and MLP performs second worst with a value of 1.68, as listed in Table 6. We also included several further approaches based on an ANN with the traditional Levenberg-Marquardt algorithm, SVM, and adaptive univariate SVM, and the results indicate improved performance of the proposed method compared with the existing approaches.

Implications
The results of this study suggest that the ANN was the best forecasting technique for handling and inferring future lumpy demand. Previous studies considered data feature extraction along with recorded data based on flying hours and operational intensity. Hybrid neural networks were also used, combining traditional techniques with neural networks [68]. However, all these approaches led to magnified forecasting errors due to a lack of data pertaining to fleet performance and its approximation.

During the research, the quantity demanded, rather than the quantity issued, was taken into consideration. Future consumption was forecast with the prior information that there were no spare parts available in the warehouse. Furthermore, a detailed survey was performed to choose the CSPs from the whole collection of spare parts, a challenge solved by demand categorization based on the extent of lumpiness, as shown in Table 2. The spare parts that showed variability in the demand interval and demand value were sorted and selected. Hence, planning for the worst-case scenario was performed to forecast lumpy demand.

The training MSE of the ANN model was found to decrease, which indicated successful training. The trained and validated model was then presented with a new set of inputs and produced a forecast value of 7.989, as shown in Table 3, which is comparable with the value of nine actually demanded in the years 2014-2015.
Conclusions
This research holds significant potential for forecasting the demand for aviation spare parts, as it presents a new and more efficient way to forecast demand. The problem of accurately harnessing the uncertainties will remain for a long time to come; however, an attempt is made here to keep the forecasting error to a minimum so that the approach can be effectively and practically utilized in the field of aviation. Ongoing work includes the application of the designed ANN model to the demand forecast for the remaining spare parts. A comparative analysis of the forecast and actual demand values was carried out. Furthermore, this work also presents a detailed comparison of the proposed model with several state-of-the-art models and observes that the proposed method's performance is superior to the other methods. The current model can serve as a starting point for further advancements in forecasting the demand for aviation spare parts. This will increase the operational availability of aircraft and improve safety by reducing the number of flights with inoperative components.

Figure 1. A sample spare depot storage template.
Figure 7. Neural network performance plot. The trained model was further validated with a normalized (p2, t2) validation dataset. Ultimately, the normalized test dataset p3' was posed to the trained and validated model to obtain the normalized output dataset t3'. The forecast result required descaling, or inverse transformation, to depict the real data, as shown in Table 4 below (tabulated data are for the Tail Rotor Blade).
Table 1. Datasets having intermittent, lumpy, and smooth patterns, their forecasting methods, and accuracies on different metrics. The '×' indicates that the value for this parameter is not reported.
* Selected entries are shown here from the full sample space of lumpy spares held at an aviation spare depot.
Table 4. Forecasted set of values.
Table 5. Comparison of different methods based on MAE.
Table 6. Model comparison based on MASE.
7,873.6
2023-04-27T00:00:00.000
[ "Computer Science", "Engineering" ]
Numerical Simulation Study on the Influence of Different Factors on the Uplift Bearing Capacity of Root Piles, Straight-Shaft Piles, and Pedestal Piles
The root pile (hereafter called RP), a promising new type of pile with a noncircular cross-section shape that meets the requirements of the development of uplift piles, was introduced for promotion. On the basis of a validation of experimental and numerical results, finite element models were established to study the influence of the arrangement of roots and of the dimension parameters on the uplift bearing capacity and the economy of RP compared with the straight-shaft pile and the pedestal pile (hereafter called SP and PP, respectively). The results show that the uplift bearing capacity of RP is higher than that of SP and PP, and the longer the pile, the more the bearing capacity of RP increases relative to SP and PP. In order to further improve the bearing capacity of RP, the bearing mechanism of the root was analysed, and suggested values of the root size and the spacing of root layers are given. In addition, the most economical way to increase pile bearing capacity is to increase the pile length rather than the pile diameter.

Introduction
With the development of infrastructure construction, the development of piles is exposed to a series of challenges, including higher bearing capacity and lower material consumption. Piles with a noncircular cross-section shape, which are promising new pile types developed from the conventional SP, meet these requirements. Several noncircular cross-section pile types have become popular in recent years: squeezed branch piles [1,2], PP [3][4][5][6], screw piles [7,8], X-shaped piles [9,10], and so on, and PP was often adopted for buildings bearing uplift load. RP is a new type of pile, first put forward by Yin [11] and shown in Figure 1, which is formed by grafting prefabricated roots onto a SP. The bearing mechanism of RP includes the side friction of the pile shaft and the cantilever action of the prefabricated roots, so RP can improve the bearing capacity effectively. In addition, the compaction effect generated by pushing the roots into the surrounding soil enhances the physical and mechanical properties of that soil; however, this root compaction effect is difficult to take into quantitative consideration in practical engineering. The construction process of RP is relatively complicated, and the construction method varies according to the sequence of pushing the roots and installing the reinforcement. Pushing the roots first and installing the reinforcement later was adopted in the field testing, and the corresponding brief construction process is shown in Figure 2. Since its advent, RP has mostly been used as a compressive bearing foundation, and investigations and applications of the uplift bearing behaviour of RP are rare, so that its uplift bearing capacity has often been ignored by researchers. The compressive bearing capacity of root caissons and the optimal distribution of roots were investigated by Gong et al. [12] through field loading tests and by Yin et al. [13] through numerical simulation, respectively, and both studies pointed out that the roots could significantly improve the compressive bearing capacity.
Whether the existence of roots can improve the uplift bearing capacity is unknown, and it is therefore worth exploring the influence of the arrangement of roots, the pile length and diameter, and hollow sections on the uplift bearing capacity of RP, as well as the advantages of RP compared with SP and PP. On the basis of a validation of experimental and numerical results, these subjects were studied by numerical simulation. The bearing mechanism of the root was analysed, and the influence of the number and size of the roots and of the spacing of root layers on the bearing capacity was discussed. Based on the analysis of the influence of pile length and diameter and of hollow sections on the uplift bearing capacity of the different pile types, the advantages of RP over PP were summarized. In addition, suggestions were put forward on how to increase the uplift bearing capacity.

Material. The soil was simulated by the Mohr-Coulomb model, and the pile and roots, both constructed of reinforced concrete, were simulated by an elastic model. The material parameters are shown in Table 1, in which the unit weight γ, Poisson's ratio ν, cohesion c, and internal friction angle φ were obtained from field tests, while the elastic modulus E and dilation angle ψ were obtained by calibration.

Constitutive Model. The Coulomb friction model was adopted to describe the friction behaviour between the contact surfaces. Since the pile holes in the field tests were manually drilled and the side walls of the holes were relatively rough, the external friction angle δ between the soil and the piles was chosen as 22°, the same as the soil internal friction angle φ, and the corresponding friction coefficient μ was 0.40. The maximum side friction τ_max was defined as
τ_max = μ p_n,
where p_n is the normal pressure between the contact surfaces. In the Coulomb friction model of ABAQUS, the elastic slip distance Δu_el,slip must be specified. For displacements not greater than Δu_el,slip, a linear increase of the side friction up to τ_max is assumed, and for larger displacements the side friction remains equal to τ_max. That is, the side friction conforms to the following relations:
τ = k_s Δu for Δu ≤ Δu_el,slip, and τ = τ_max for Δu > Δu_el,slip,
where k_s = τ_max/Δu_el,slip is the tangential stiffness of the contact surfaces and Δu is the relative displacement. According to previous studies [14,15], the side friction reaches its maximum value τ_max at a relative displacement of 5 mm; the elastic slip distance Δu_el,slip in the numerical model of this paper was therefore taken as 5 mm.

Numerical Simulation. The diameter and the buried depth of the model piles were 1.2-2.8 m and 5-25 m, respectively. For all PP mentioned in this paper, the diameter of the enlarged base was 1.5 times the diameter of the pile shaft, and the height of the enlarged base was 1 m. Roots 0.15-0.3 m in width and 0.3-0.4 m in height extended 0.6-1.6 m out of the pile shaft and were arranged in the circumferential direction with equal spacing. It should be noted that the compaction effect of pushing the roots into the soil was not considered in the numerical simulation. The radial and axial dimensions of the soil domain were, respectively, 25 and 16.7 times the pile diameter, which was found from calculations to be sufficient to avoid significant distortion of the results. The displacements of the lateral boundary in the X and Y horizontal directions, and of the bottom boundary in all directions, were constrained. The piles and soil were discretized into C3D8R elements, and the 3D finite element mesh of the model is shown in Figure 3. The models were calculated in three steps.
Firstly, the balance of the geostress field was carried out. In this process, the soil weight was applied to the whole model, and the horizontal stress coefficient k was defined. It is worth noting that the magnitude of k determines the magnitude of the side friction; that is, k has a great influence on the numerical results. Kulhawy and Kozera [16] pointed out that the value of k is related to the safety level of the building and the stress history of the soil (its overconsolidation ratio, OCR), and can be taken between the active earth pressure coefficient k_a and the passive earth pressure coefficient k_p. It was decided to adopt k_p = tan²(45° + φ/2) = 2.04 as the horizontal stress coefficient; in this case, the numerical results agree with the experimental results to the greatest extent. Subsequently, the difference between the pile and soil weight was applied to the pile. At the end of this step, small displacements and stresses are generated due to this weight difference, and these were taken as the initial condition for the numerical calculation. Finally, the uplift load was applied to the pile top and increased gradually until the ultimate load was reached; the load was applied under force control until the calculation no longer converged. For uplift piles, the load displacement curves can be divided into mutational (sudden-failure) and gradually varying types; the ultimate uplift bearing capacity is taken as the load level preceding the mutation for mutational curves, and as the load inducing an upward displacement of 60 mm for gradually varying curves.
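Before moving on to the validation, the bilinear interface law specified above can be sketched in a few lines. The friction coefficient (μ = 0.40, i.e., tan 22°) and the 5 mm elastic slip distance follow the text; the normal pressure and the function name are illustrative assumptions.

```python
def side_friction(slip_m, normal_pressure_kpa, mu=0.40, elastic_slip_m=0.005):
    """Bilinear Coulomb model: the shear stress rises linearly with slip up to
    tau_max = mu * p_n at the elastic slip distance, then stays constant."""
    tau_max = mu * normal_pressure_kpa            # kPa
    k_s = tau_max / elastic_slip_m                # tangential stiffness, kPa/m
    return min(k_s * slip_m, tau_max)

# Illustrative check with a normal pressure of 100 kPa on the shaft interface:
for slip in (0.001, 0.005, 0.020):                # relative slip in metres
    tau = side_friction(slip, 100.0)
    print(f"slip {slip * 1000:.0f} mm -> side friction = {tau:.1f} kPa")
```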
Validation of the Numerical Model
Field loading tests were performed on three piles, namely No. 1 SP, No. 2 PP, and No. 15 RP (see Table 2). The soil profile consisted of thick, continuous loess strata. Freeze-thaw can affect the strength of loess [17,18]; however, freeze-thaw was not involved in the field tests, and since the purpose of the tests was to compare the bearing capacity of the three pile types, the effect of freeze-thaw on the loess was not considered. Soil samples were collected and tested in the laboratory, and the properties of the samples are shown in Table 1. All of the loading tests were conducted according to the slow loading method and loaded by hydraulic jacks in increments of 300 kN, as shown in Figure 4. Displacement sensors were placed at the top of the pile head to measure the uplift displacement. The comparison of the experimental and numerical results for these three piles is shown in Figure 5. It can be seen from Figure 5 that the uplift displacement obtained by the numerical simulation is larger than the experimental result, because the tangential stiffness of the contact surface remains unchanged during the elastic slip stage in the numerical simulation, whereas for the field loading tests it does not remain constant.

Study on Root Arrangement
In order to study the difference in bearing capacity between piles of different sections (SP, PP, and RP) and the influence of the arrangement of roots on the uplift bearing capacity, numerical simulations were carried out. The specific simulation programs and the corresponding results are shown in Table 2, and the load displacement curves of piles No. 1-15 are shown in Figure 6.

Root Pile with Single Root Layer
(1) The Number of Roots. Figure 6 and Table 2 show that the ultimate uplift bearing capacity Q_u of RP and PP was significantly greater than that of SP, and the load displacement curves of piles No. 2-5 varied gradually, in contrast to the mutational curve of SP, which provides a higher safety factor for the upper structure. Furthermore, the more roots grafted onto the pile shaft, the higher the uplift bearing capacity; however, with the increase in the number of roots, the increase in Q_u diminishes gradually. To further analyse the influence of the number of roots on Q_u, the plastic zone of the soil around the roots under Q_u was extracted, as shown in Figure 7. For RP No. 3 (shown in Figure 7(a); the number of roots was 4 and the annular spacing of the roots was 0.792 m), the plastic zone of the soil above the roots was concentrated around the roots and scattered.
(2) The Bearing Mechanism of the Root. Taking RP No. 4 (6 roots) as an example, the effect and the bearing process of the root were analysed. Figure 8 shows the axial stress distribution of the root, and Figure 9 shows the bending moments of the root along the cross section. It can be seen from the figures that, except for the region at the joint of the roots and the pile shaft, the distributions of axial stress and bending moment are close to those of a cantilever beam under uniform load (see the illustrative sketch further below). The magnitude of the uniform load is determined by the magnitude of the uplift load, the physical and mechanical properties of the surrounding soil, and the spacing between the roots. The roots bear the uplift load independently when the spacing between them is large enough; however, when the spacing is relatively small, the roots and the soil work together and the soil arch effect comes into play.
(3) The Buried Depth of Roots. For RP No. 4 and No. 6-10, each pile has one root layer, and the buried depths of the roots were 7.5, 1, 2, 3, 4, and 5 d, respectively. As seen from Figure 6 and Table 2, as the buried depth of the roots changes from shallow to deep, the load displacement curves gradually change from sudden failure to slow destruction. When the buried depth of the roots is too shallow, the roots drive the soil to pull up together, which reduces the lateral pressure between the soil and the pile shaft; for example, Q_u and the load displacement curve of RP No. 6 (roots buried at 1 d depth) were similar to those of SP. In addition, the uplift displacements of RP No. 9 and No. 10, with buried depths of 4 d and 5 d, were smaller than that of No. 4 with a buried depth of 7.5 d, i.e., piles No. 9 and 10 had a higher bearing capacity; the reason is that the roots of pile No. 4 were buried too deep and could come into full play only when the uplift displacement was large enough. In conclusion, a buried depth of the roots that is either too shallow or too deep is unfavourable to the uplift bearing capacity, and the best embedding depth is between 4 and 5 d. As shown in Figure 10, when the depth of the roots was shallower than 3 d (RP No. 6-8), the plastic zone reached the ground surface under Q_u, which is why the uplift load displacement curves of piles No. 6-8 were of the mutational type. When the buried depth was deeper than 4 d (RP No. 4, 9, and 10), the plastic zone showed a mountain-shaped distribution concentrated within a range of 2 d above the roots.
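The cantilever-beam analogy used in the bearing-mechanism discussion above can be illustrated with the standard expressions for a cantilever under a uniformly distributed load: shear V(x) = q(L − x) and bending moment M(x) = q(L − x)²/2, both largest at the root-shaft joint. The load intensity and root length below are illustrative values, not results from the analysed piles.

```python
def cantilever_root_actions(q_kn_per_m, length_m, x_m):
    """Shear force and bending moment at distance x from the pile shaft for a
    root idealised as a cantilever under a uniform soil reaction q."""
    shear = q_kn_per_m * (length_m - x_m)                 # kN
    moment = q_kn_per_m * (length_m - x_m) ** 2 / 2.0     # kN*m
    return shear, moment

q, L = 80.0, 0.8            # illustrative soil reaction (kN/m) and root length (m)
for x in (0.0, 0.4, 0.8):   # joint, mid-length, free end
    v, m = cantilever_root_actions(q, L, x)
    print(f"x = {x:.1f} m: V = {v:.1f} kN, M = {m:.2f} kN*m")
```

The maximum shear and moment at x = 0 are consistent with the stress concentration reported at the joint between the roots and the pile shaft.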
Root Pile with Multiple Root Layers. As seen from Table 2, the bearing capacity of RP with one root layer was smaller than that of PP; however, as the number of root layers increases, the uplift bearing capacity increases significantly and the relationship is reversed. For the RP with 2 layers (No. 11-14), the bearing capacity increases with the increase in layer spacing. Because of the shallow buried depth of the first root layer, Q_u of pile No. 15 with three root layers was only slightly larger than that of pile No. 14. Figure 11 shows the plastic zone distribution of RP No. 11-14 under Q_u. Compared with Figure 10, it can be seen that the plastic zones of piles No. 11-14 are larger than those of the piles with one root layer, which demonstrates that multilayer RP can mobilize the soil to resist the uplift load over a larger range and thus achieve a larger bearing capacity than RP with one root layer. When the spacing was not more than 2 d (piles No. 11 and 12), the plastic zones of the two layers overlapped, which should be avoided. When the spacing was not less than 3 d (piles No. 13 and 14), the plastic zones of the first and second layers were not connected under Q_u, which indicates that the two layers act independently without affecting each other. The study in Section 4.1.1 showed that the plastic zone was concentrated within a range of 2 d above the roots; in other words, when the spacing between two layers is not less than 2 d, the influence between root layers is small, which is the same conclusion as in this section. Through the above analysis, the relationship between the optimal buried depth of the first root layer, the layer spacing, and the pile diameter d (d was 1.2 m in this section) was discussed. In fact, these two values are not related to d at all but to the actual distances, which will be further elaborated in Section 4.2.4.

The Influence of Pile Length. The specific simulation programs and the corresponding results are shown in Table 3, and the load displacement curves of the piles are shown in Figure 12. The bearing capacity of the piles increases with the increase in pile length, and the load displacement curves of SP gradually change from mutational to gradually varying. The Q_u increment of PP and RP relative to SP decreases with increasing pile length, and this is more obvious for PP; for example, the load displacement curves of PP with lengths of no less than 20 m basically coincide with those of SP. The Q_u increment of RP was 15-25% higher than that of PP. In addition, when the pile length is large, roots arranged on the upper part of the pile are available to bear the uplift load even when the uplift load is small; in contrast, the enlarged base of PP can only bear load under large displacements, which would often exceed the serviceability limit state. More importantly, the enlarged base of PP is difficult to construct, and hole collapse occurs frequently when the pile length is large; on the contrary, it is feasible to arrange several root layers on the upper part of the pile. Figure 12 and Table 3 indicate that for RP No. 24-27, with 3-6 root layers, respectively, the bearing capacity increases with the increase in the number of root layers. Besides, the bearing capacity of pile No. 26 (5 root layers, with a maximum root buried depth of 15.6 m) is almost the same as that of pile No. 27 (6 root layers), which means that the effect of roots arranged on the lower part of the pile on resisting the uplift load is small, especially for long piles.

The Influence of Pile Diameter. Piles with diameters of 1.6-2.8 m were simulated to compare the influence of pile diameter and of the length and number of roots on the uplift bearing capacity. The specific simulation programs and the corresponding results are shown in Table 4, and the load displacement curves of the piles are shown in Figure 13. The Q_u increment of RP with different numbers and lengths of roots, compared with SP, is shown in Figure 14.
The bearing capacity of RP increases with the increase in the number and length of the roots, while the rate of increase decreases with the increase in the number of roots. In general, the bearing capacity of RP increases only slightly with the number of roots once the annular spacing between the roots is smaller than 0.4-0.5 m. The influence of root length on the bearing capacity is greater than that of root number, and it is suggested that the length of the root should not exceed 0.5 times the pile diameter because of the special construction process. The height and width of the root should match the pile diameter; in addition, the influence of the construction of the root on the reinforcement should be taken into account. In other words, the width and the height of the root should not exceed the spacing between the main reinforcement bars and the spacing between the stirrup bars, respectively. It is suggested that the height and width should be 0.3-0.4 times the length of the root and about 0.1 times the pile diameter, respectively. In addition, the height and width should be less than 400 mm and 300 mm, respectively; otherwise, they should be reduced by increasing the reinforcement ratio or the concrete strength of the roots.

The Influence of Hollow Sections. In order to compare the influence of the wall thickness and burial depth of hollow sections on the uplift bearing capacity of the piles and to optimize the design of large-diameter piles, simulations of hollow piles with different parameters were carried out. The specific simulation programs and the corresponding results are shown in Table 5, and the load displacement curves of the piles are shown in Figure 15. (Note: the length of the piles above is 10 m, and all of the RP were arranged as three root layers with buried depth at 1.) The axial stress distribution of the middle part of the piles was extracted when the uplift load was 3000, 5000, and 6000 kN, respectively (the corresponding uplift displacement was in all cases about 30 mm), as shown in Figures 16-18. It should be noted that the elastic model was adopted for the piles, and the cracking failure of the piles was not considered. Figures 16-18 indicate that the axial stress distributions of piles with wall thicknesses of 0.5 m, 0.7 m, and 0.9 m are similar to those of solid piles with diameters of 1.2, 2.0, and 2.8 m, respectively; correspondingly, the ratios of wall thickness to pile diameter are 0.416, 0.35, and 0.32. Furthermore, it can be inferred that for large-diameter piles, hollow sections can be adopted to reduce material consumption. The wall thickness of hollow piles should be 0.3-0.4 times the pile diameter, and the larger the pile diameter, the smaller this value. In addition, stress concentrations occur at the roots and at the joint between the pile shaft and the roots, and structural measures, such as increasing the concrete grade and the reinforcement ratio, should be considered to strengthen the roots and the joint.
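The rules of thumb stated above can be collected into a small helper, shown below as a convenience sketch rather than a design calculation. The 0.35 mid-range factors and the function names are assumptions; the caps of 400 mm and 300 mm and the 0.3-0.4 d wall-thickness range follow the text.

```python
def suggested_root_dimensions(pile_diameter_m):
    """Rule-of-thumb root sizing from the discussion above (all in metres):
    length <= 0.5*d, height ~ 0.35*length (capped at 0.4 m),
    width ~ 0.1*d (capped at 0.3 m)."""
    length = 0.5 * pile_diameter_m
    height = min(0.35 * length, 0.40)
    width = min(0.10 * pile_diameter_m, 0.30)
    return {"length": length, "height": height, "width": width}

def suggested_wall_thickness(pile_diameter_m, ratio=0.35):
    """Hollow-section wall thickness taken as 0.3-0.4 times the pile diameter;
    the mid-range ratio of 0.35 used here is an assumption."""
    return ratio * pile_diameter_m

for d in (1.2, 2.0, 2.8):
    print(d, suggested_root_dimensions(d), round(suggested_wall_thickness(d), 2))
```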
As seen from Figure 19, the Q u increment of PP and RP compared with SP remains basically unchanged with the increase in pile diameter; that is, the influence of pile diameter on the bearing capacity of SP, PP, and RP is similar. However, the influence of pile length on the bearing capacity of different pile types shows different trends. Both PP and RP could be used to improve the pile uplift bearing capacity when the pile length is no more than 10 m. For piles no less than 15 m long, the increment of the bearing capacity of PP is smaller than that of RP. The influence of pile length and diameter and hollow sections on the bearing capacity per unit of concrete (hereafter called Q unit ) was compared with pile No. 15 (a solid pile of 10 m in length and 1.2 m in diameter), and the results are shown in Table 6. The Q unit increment ratio tends to remain unchanged with the increase in pile length, while Q unit decreases with the increase in pile diameter, and the larger the pile diameter, the greater the reduction. In addition, within the scope of diameters up to 2.8 m, the arrangement of hollow sections has little influence on Q unit . Taken together, improving pile bearing capacity by increasing pile length is more economical than by increasing pile diameter, but it must be pointed out that the construction process becomes harder with the increase of pile length. On the whole, the influence of construction difficulty and concrete consumption should be considered comprehensively. In order to study the influence of buried depth of roots and pile diameter on the plastic zone above the roots, the plastic zones of RP with diameters ranging from 1. Conclusions In this paper, finite element models of RP under uplift load were established, and the influence of root arrangement and pile diameter and length on the bearing capacity of piles was studied. The following conclusions are obtained: (1) The bearing capacity of piles could be greatly improved by arranging roots on the pile shaft, and the uplift bearing capacity of RP increases with the increase in pile length and diameter, root length and number, and number of root layers. In addition, the bearing capacity was basically unaffected by the arrangement of hollow sections. (2) Both PP and RP could be used to improve the pile uplift bearing capacity when the pile length is no more than 10 m. For piles no less than 15 m, the advantages of RP in bearing capacity and construction feasibility are more obvious than those of PP, and it is more convenient to use RP to improve the bearing capacity. The greater the number of root layers, the higher the uplift bearing capacity, while the effect of roots arranged on the lower part of the pile on resisting the uplift load is small, and the number of root layers should not be too large. In addition, the spacing of root layers is independent of pile length and diameter and should not be less than 2.4 m. (3) Improving pile bearing capacity by increasing pile length is more economical than by increasing pile diameter, but it must be pointed out that the construction process becomes harder with the increase in pile length. In addition, the effect of hollow sections on reducing concrete consumption is not obvious for large-diameter piles. On the whole, the influence of construction difficulty and concrete consumption should be considered comprehensively. (4) The bearing mechanism of the root is similar to that of a cantilever beam under uniform load.
The influence of root length on the bearing capacity is greater than that of the root number. It is suggested that the length of the root should not exceed 0.5 times the pile diameter and that the number of roots should correspond with a root annular spacing of 0.4-0.5 m. In addition, it is suggested that the height and width of the root should be 0.3-0.4 times the length of the root and about 0.1 times the pile diameter, respectively, and should not affect the reinforcement. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
5,993.4
2020-11-16T00:00:00.000
[ "Geology" ]
Simulation of Percolation Threshold, Tunneling Distance, and Conductivity for Carbon Nanotube (CNT)-Reinforced Nanocomposites Assuming Effective CNT Concentration This article suggests simple and new equations for the percolation threshold of nanoparticles, the tunneling distance between nanoparticles, and the tunneling conductivity of polymer/carbon nanotube (CNT) nanocomposites (PCNT), assuming an effective filler concentration. The developed equations correlate the conductivity, tunneling distance, and percolation threshold to CNT waviness, interphase thickness, CNT dimensions, and CNT concentration. The developed model for conductivity is applied to several samples and the predictions are evaluated against experimental measurements. In addition, the impacts of various parameters on the mentioned terms are discussed to confirm the developed equations. Comparisons between the calculations and the experimental results demonstrate the validity of the developed model for tunneling conductivity. High levels of CNT concentration, CNT length, and interphase thickness, as well as the straightness and thinness of CNTs, increase the nanocomposite conductivity. The developed formulations can substitute for the conventional equations for determining the conductivity and percolation threshold in CNT-reinforced nanocomposites. Many parameters affect the conductivity of polymer nanocomposites. The main effects are attributed to the characteristics of conductive nanoparticles such as content, dimensions, conductivity, and dispersion quality [17-19]. Additionally, the size and density of conductive networks largely control the conductivity of nanocomposites, because the networks provide the conductive paths for charge transfer. Moreover, the immense surface area of nanoparticles per unit volume creates an intermediate phase between the polymer matrix and the nanoparticles, called the interphase, which governs the behavior of nanocomposites [20,21]. The roles of the interphase in the mechanical performance of polymer nanocomposites have been widely analyzed by experimental and theoretical methods [22-24]. Moreover, the interphase zones can form connected structures in polymer nanocomposites, which lower the percolation threshold [25,26]. Thus, the interphase areas definitely influence the electrical conductivity of nanocomposites by reducing the percolation threshold, although this matter has been relatively little discussed in the literature. Electron tunneling, the main mechanism for the conductivity of PCNT, involves the transfer of electrons between nearby nanotubes across tunneling spaces, based on quantum mechanics [27,28]. As a result, the tunneling effect does not require directly attached nanotubes; nearby CNTs can transfer charge in nanocomposites. However, electrons can be transported via tunneling zones only when the separation distance between nanotubes is small enough. Accordingly, the tunneling effect mostly depends on the distance between neighboring CNTs [29,30]. Only a few studies have focused on the tunneling mechanism in PCNT [31,32]. For example, Feng and Jiang [31] considered the interphase layer surrounding CNTs as a tunneling area and developed various equations for tunneling distance, interphase thickness, and conductivity. Generally, the extant studies on the tunneling conductivity show unclear and multifaceted terms and equations that are rarely applied in practice.
Moreover, the available models inadequately reflect the influences of filler and interphase dimensions on the tunneling properties and conductivity. In particular, although filler size generally expresses the percolation threshold of nanoparticles, the roles of tunneling distance and the interphase layer have been ignored in the previous articles [33,34]. We have published some reports on the conductivity of PCNTs assuming the interphase zone surrounding CNTs and the tunneling region between adjacent CNTs [35][36][37][38]. Those studies considered a constant value for the tunneling distance (at different filler concentrations), and expressed the percolation threshold as a function of CNT dimensions and interphase thickness. In this paper, we join two exponential equations for the electrical conductivity of nanocomposites to derive the proper equations for the tunneling distance and percolation threshold. We consider the effective CNT concentration, which includes the concentrations of both CNTs and interphase zone. Moreover, we express the tunneling distance and percolation threshold by CNT concentration, CNT dimensions, as well as the interphase thickness around the CNTs. Additionally, we develop the exponential equation suggested by Ambrosetti et al. [39] for the electrical conductivity of nanocomposites using the mentioned terms. The developed equation suggests the electrical conductivity by CNT waviness, interphase thickness, CNT size, and effective CNT volume fraction. The estimates of electrical conductivity by the developed equation are compared to the various experimental results from valid literature. In addition, the impacts of various parameters on the mentioned terms are analyzed to confirm the correctness of the established equations. Methodology A simple model was suggested for the tunneling electrical conductivity of polymer nanocomposites [40] as: where σ 0 is a parameter, d is the tunneling distance, and z is the characteristic tunneling length. This model has been widely applied in different studies on the conductivity of polymer nanocomposites, especially PCNT [39][40][41]. Ambrosetti et al. [39] developed this model for different nanocomposites, and suggested the conductivity in nanocomposites containing cylindrical particles as: where R, ϕ f , and l denote the radius, volume fraction, and length of the nanoparticles, respectively. Comparing Equations (1) and (2) can suggest the following equation for the d parameter as: The tunneling distance between adjacent CNTs in the conductive networks can be expressed [40] by: where A is a constant parameter. Similarly, there is a maximum separation distance between CNTs allowing the tunneling effect (d m ), which suggests [31]: By replacing A from Equation (5) into Equation (4), the tunneling distance can be given by: By joining Equations (3) and (6), it is possible to express the percolation threshold assuming a tunneling distance as: However, the exceptional length of CNTs commonly causes waviness in polymer nanocomposites [42]. The effective length of nanotubes (l eff ) (Figure 1a) can be assumed by the waviness parameter as: where u = 1 and u > 1 denote no waviness (straight CNTs) and more waviness, respectively. 
Thus, the effective length of nanotubes is presented as: Moreover, the interphase regions have a positive effect on the conductivity of nanocomposites by reducing the percolation threshold and promoting the growth of networked structures [43]. In fact, the interphase regions and waviness modify the effective volume fraction of CNTs in nanocomposites (Figure 1b). The effective volume fraction in CNT nanocomposites [31] can be given by: where t is the interphase thickness. Assuming the roles of the interphase and waviness by Equations (9) and (10), the conductivity, tunneling distance, and percolation threshold are suggested as: which express the influences of waviness, interphase thickness, filler dimensions, and effective filler fraction on the mentioned terms. Electrical Conductivity We applied the suggested equations to calculate the conductivity at different levels of the material and interphase parameters. Moreover, we compared the calculations of electrical conductivity to the experimental measurements of some samples to demonstrate the predictability of the developed model. Figure 2 illustrates the effects of the d and z parameters on the conductivity by a contour plot at σ 0 = 1 S/m. The highest conductivity is calculated at the smallest d and the highest z. As observed, σ = 0.6 S/m is obtained at d = 1 nm and z = 5 nm. However, an insulating effect is observed at high d and low z. Therefore, a desirable conductivity is achieved by a short tunneling distance and a high characteristic tunneling length. These results were expected because a short tunneling distance and a large characteristic tunneling length effectively amplify the tunneling effect, while a long tunneling distance and a small characteristic tunneling length weaken the tunneling mechanism. Figure 3 depicts the electrical conductivity as a function of different parameters according to Equation (11). Figure 3a exhibits the effects of the l and z parameters on the conductivity at average ϕ f = 0.01, R = 10 nm, t = 10 nm, and u = 1.3. The highest conductivity is observed at the highest values of the l and z parameters, while low ranges of these parameters significantly diminish the conductivity. As a result, both the l and z parameters, being the CNT length and the characteristic tunneling length, respectively, directly affect the tunneling conductivity of nanocomposites. In other words, long nanotubes and a high z value produce high conductivity in nanocomposites. Long nanotubes have good potential for connecting and networking because they have more contacts compared to short nanotubes. Moreover, the conductive networks produced by long nanotubes can cover a large area in the nanocomposite, which can positively affect the conductivity [44]. Accordingly, long nanotubes are necessary for large networks and high conductivity. It should be noted that the conductivity directly depends on the characteristic tunneling length, but this parameter has not been precisely defined to date.
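As a quick numerical illustration of the trend reported for Figure 2, the sketch below evaluates a tunneling-type conductivity over a grid of d and z values. Since Equations (1)-(3) are not reproduced in the text above, the widely used exponential form σ = σ 0 exp(−d/z) is assumed here purely as a stand-in; σ 0 and the d and z ranges are illustrative choices rather than the paper's exact inputs.

```python
import numpy as np

def tunneling_conductivity(d_nm, z_nm, sigma0=1.0):
    """Assumed exponential tunneling form: sigma = sigma0 * exp(-d / z).

    A common stand-in for the model cited as Equation (1); the paper's
    exact expression is not reproduced in the text above.
    """
    return sigma0 * np.exp(-np.asarray(d_nm) / np.asarray(z_nm))

# Grid sweep mirroring the Figure 2 style contour plot (illustrative ranges).
d = np.linspace(1.0, 10.0, 50)   # tunneling distance, nm
z = np.linspace(0.5, 5.0, 50)    # characteristic tunneling length, nm
sigma = tunneling_conductivity(d[:, None], z[None, :])  # shape (50, 50)

print(f"max sigma = {sigma.max():.2f} S/m at d = {d[0]:.1f} nm, z = {z[-1]:.1f} nm")
print(f"min sigma = {sigma.min():.2e} S/m at d = {d[-1]:.1f} nm, z = {z[0]:.1f} nm")
```

With these illustrative inputs, d = 1 nm and z = 5 nm give σ ≈ 0.82 S/m rather than the 0.6 S/m read off Figure 2, which simply reflects that the assumed form differs from the paper's exact Equation (1); the qualitative conclusion, that short tunneling distances and large characteristic tunneling lengths favor conduction, is the same.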
Figure 3b illustrates the variation of conductivity at dissimilar ranges of the u and t parameters and average ϕ f = 0.01, R = 10 nm, l = 10 µm, and z = 1. A high u value and a small t value result in an insulated nanocomposite, while the lowest u and the highest t produce the best conductivity. This evidence shows that the waviness and the interphase thickness inversely and directly affect the conductivity, respectively. Therefore, a desired conductivity is obtained by low waviness and a thick interphase, while great waviness and a thin interphase cannot improve the conductivity of nanocomposites. The waviness lowers the effective length and conductivity of CNTs in the nanocomposites [42]. The waviness actually worsens the percolation threshold of nanotubes and the characteristics of conductive networks, leading to poor conductivity. The detrimental effects of waviness on the conductivity and mechanical performance of CNT nanocomposites were reported in previous articles [45,46]. Therefore, the inverse dependency of conductivity on the CNT waviness is sensible and confirms our new equation. A thick interphase increases the effective volume fraction of the nanoparticles in nanocomposites based on Equation (10), and thus it can decrease the tunneling distance between neighboring CNTs (Equation (12)) and lower the percolation threshold (Equation (13)). As a result, a thicker interphase produces a shorter tunneling distance between CNTs and larger networks in nanocomposites, which considerably raises the conductivity, as predicted by the model developed here. The positive impacts of interphase zones on the percolation threshold of CNTs and the mechanical properties of nanocomposites have been addressed in the literature [22,25], but their influence on the conductivity has not yet been explained. The effects of ϕ f and R on the conductivity are also depicted in Figure 3c at average l = 10 µm, t = 10 nm, u = 1.3, and z = 1. A high R decreases the conductivity to about 0, whereas the highest conductivity is obtained at the highest ϕ f and the smallest R. Therefore, a high concentration of thin nanotubes causes a desirable conductivity, whereas the various concentrations of thick CNTs result in a low conductivity.
The conductivity of CNTs was reported as about 10^6 S/m, which is about 10^21 times the polymer conductivity [18]. Thus, the conductivity of nanocomposites is controlled by the concentration of CNTs because polymer matrices are generally insulating. Moreover, thin CNTs increase the effectiveness of the nanoparticles in nanocomposites because they produce a short tunneling distance and a low percolation threshold, according to Equations (12) and (13). Therefore, the developed model demonstrates the correct dependencies of conductivity on filler concentration and radius. The same outputs were also proposed by previous studies, confirming the present results [47]. Figure 4 presents the predictions of the developed model and the experimental results for the reported samples. The good agreement between the experimental results and the predictions in all the samples illustrates that our new model can accurately estimate the conductivity of PCNT. These plots are the best fits of the calculations to the experimental data using the developed model. Some insignificant deviations are observed between the experimental data and the model outputs, especially at high CNT concentrations, due to CNT agglomeration at high filler loadings [45,53,54]. However, the deviations are below 5% (error < 5%), which is satisfactory for modeling. In other words, the curves plotted by the developed model are the best ones, and the minor deviations can be neglected because they are in the normal range of experimental and calculation errors. The best level of σ 0 for the current predictions is 1 S/m for all samples. Moreover, the values of (t, z) are calculated as (2, 0.11), (7, 2.15), (3, 0.51), (22, 0.17), and (3, 0.01) nm for the PDMS/MWCNT, UPE/MWCNT, PVC/MWCNT, PET/MWCNT, and epoxy/single-walled CNT (SWCNT) [52] samples, respectively. The values of t are reasonable because they are in a common range for polymer nanocomposites. It should be noted that the value of z for CNT nanocomposites was reported to range from 0.2 to ~13 nm at aspect ratios (length per diameter) below 1000 [39]. Nevertheless, our developed model based on the tunneling effect can provide proper calculations for the conductivity of PCNT. Tunneling Distance The effects of different parameters on the tunneling distance (d) (Equation (12)) are observed in Figure 5. Figure 5a shows the impacts of the ϕ eff and l parameters (effective filler fraction and CNT length, respectively) on d. The shortest tunneling distance of about 0.25 nm is obtained at ϕ eff > 0.035 and l > 18 µm, while the largest d of 3 nm is observed at ϕ eff = 0.02 and l = 5 µm. Accordingly, both the effective volume fraction and the CNT length inversely affect the tunneling distance.
A high ϕ eff is representative of a high filler concentration, thin nanotubes, and a thick interphase. In this condition, a high number of thin nanotubes enclosed by a thick interphase are incorporated in the nanocomposite, which significantly lessens the distance between neighboring nanoparticles. Thus, the inverse relation between effective filler fraction and tunneling distance is true. In addition, long nanotubes have short inter-particle distances because they occupy a large space in the nanocomposite and produce strong particle-particle interactions. Accordingly, the opposite relation between tunneling distance and CNT length is also logical, which supports our developed equation. Figure 5b also presents the different roles of the u and t parameters in the tunneling distance at average ϕ f = 0.01, R = 10 nm, and l = 10 µm. The tunneling distance is enhanced by high waviness and a thin interphase, whereas a short tunneling distance is produced by less waviness and a thick interphase. Therefore, it is important to reduce the CNT waviness and thicken the interphase to attain a short tunneling distance, which benefits the conductivity of nanocomposites. The effects of these parameters on the tunneling distance are reasonable because they manage the contacts among nanoparticles. Low waviness indicates straight CNTs in the nanocomposite, which effectively raises the number of contacts and shortens the distance between adjacent nanotubes. On the other hand, a thick interphase shortens the distance between any two nanotubes, because the interphase regions cover the nanotubes. In other words, the conductivity in the interphase zones is greater than in the polymer matrix and less than in the nanoparticles, which can benefit the dimensions of the networks and the conductivity of nanocomposites. Thus, the negative correlation between the tunneling distance and the interphase thickness is attributed to the positive contribution of the interphase regions to the conductivity of nanocomposites as well as nanoparticles. The tunneling distance at different ranges of ϕ f and R and average l = 10 µm, t = 10 nm, and u = 1.3 is plotted in Figure 5c. These parameters considerably affect the tunneling distance because they change it from 0 to 18 nm. A high ϕ f and a low R produce a short tunneling distance, whereas the tunneling distance grows with decreasing ϕ f and increasing R. As a result, a short tunneling distance is produced by a high filler concentration and a small filler radius. As observed, ϕ f = 0.005 and R = 25 nm result in d = 18 nm, while ϕ f > 0.013 and R < 15 nm produce d ≈ 0 nm. A high filler concentration creates a large number of nanotubes in the nanocomposite and, obviously, the space between neighboring nanotubes declines significantly, which diminishes the tunneling distance. Additionally, the number of nanotubes and their radius show an inverse relation, because a small R produces a high number of nanotubes in a unit volume. Furthermore, thinner nanotubes yield more surface area compared to thicker ones, which enhances the interphase regions [55]. Accordingly, thinner nanotubes can reduce the space between nanotubes, which creates a short tunneling distance between nanotubes. These observations endorse the different influences of the ϕ f and R parameters on the tunneling distance based on the developed equation.
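To make the role of the effective filler fraction more concrete, the sketch below evaluates a simple interphase-and-waviness correction at the average values quoted above. The paper's Equations (9), (10), and (12) are not reproduced in this text, so the form used here, ϕ eff = ϕ f ((R + t)/R)^2 / u, is only an assumed stand-in that captures the stated trend: a thicker interphase and straighter CNTs raise the effective fraction and, per the model, shorten the tunneling distance.

```python
def effective_filler_fraction(phi_f, radius_nm, interphase_nm, waviness=1.0):
    """Illustrative effective CNT volume fraction.

    Assumed form: the interphase shell inflates the effective cross-section,
    phi_eff = phi_f * ((R + t) / R)**2 / u, where the waviness u >= 1 shortens
    the effective length. The paper's exact Equations (9)-(10) are not shown
    in the text, so this is a stand-in for the trend only.
    """
    shell_factor = ((radius_nm + interphase_nm) / radius_nm) ** 2
    return phi_f * shell_factor / waviness

# Illustrative inputs close to the averages quoted in the text.
phi_f, R = 0.01, 10.0            # nominal volume fraction, CNT radius in nm
for t in (2.0, 10.0, 22.0):      # interphase thickness, nm
    for u in (1.0, 1.3, 2.0):    # waviness parameter (1 = straight)
        phi_eff = effective_filler_fraction(phi_f, R, t, u)
        print(f"t = {t:4.1f} nm, u = {u:3.1f} -> phi_eff ≈ {phi_eff:.3f}")
```

With ϕ f = 0.01, R = 10 nm, t = 10 nm, and u = 1.3, this stand-in gives ϕ eff ≈ 0.03, roughly three times the nominal loading, which is consistent with the qualitative claim above that the interphase and straight CNTs raise the effective fraction and thereby shorten the tunneling distance.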
Percolation Threshold Figure 6 exhibits the impacts of various parameters on the percolation threshold based on Equation (13), assuming the tunneling mechanism. The roles of the ϕ f and d m parameters in ϕ p at average levels of the other parameters are plotted in Figure 6a. It is found that both the ϕ f and d m parameters inversely control the percolation threshold, i.e., high levels of these parameters reduce the percolation threshold. A ϕ p level of ~0 is obtained at ϕ f > 0.01 and d m > 7 nm, while ϕ p reaches 0.0006 at ϕ f = 0.005 and d m = 5 nm. As a result, a desirable percolation threshold is obtained by the highest values of filler fraction and maximum tunneling distance. When the filler concentration grows, the tunneling spaces between nanotubes condense and the nanotubes show a high number of contacts. Therefore, a higher filler concentration produces a better condition for percolating and networking, which decreases the percolation threshold. Furthermore, d m denotes the maximum tunneling distance at which the tunneling effect occurs. In other words, the percolation threshold and the formation of conductive networks occur when the tunneling distance is smaller than d m [31].
Thus, a high d m value permits distant nanotubes to participate in the conductive networks, producing a low percolation threshold. In summary, our suggested equation properly predicts the inverse effects of both the ϕ f and d m parameters on the percolation threshold. Figure 6b also reveals the effects of the u and t parameters on the percolation threshold at average ϕ f = 0.01, R = 10 nm, l = 10 µm, and d m = 7 nm. The d m value was reported in the literature as ranging from 1.8 to 10 nm [31,42]; therefore, we used an average value of d m = 7 nm for our calculations. The highest percolation threshold is observed at high u and low t values, but ϕ p falls significantly as u decreases and t grows. The most desirable ϕ p , of ~0, is obtained at u < 1.5 or t > 17 nm, which demonstrates the benefits of low CNT waviness and a thick interphase for the percolation level. Less waviness, i.e., more straightness, corresponds to a larger effective length of nanotubes. Hence, a low percolation threshold is expected in this condition because longer nanotubes are more likely to form networks. In addition, a thick interphase around nanotubes shortens the distance between adjacent nanotubes, raising the probability of percolating and networking. In fact, the interphase areas can establish conductive networks before the definite linking of nanotubes [56]. Accordingly, the positive effects of low CNT waviness and a thick interphase on the percolation threshold confirm our new equation. The influences of the R and l parameters on the percolation threshold are also observed in Figure 6c at average ϕ f = 0.01, t = 10 nm, u = 1.3, and d m = 7 nm. A high percolation threshold of 0.06 is observed at R = 20 nm and l = 5 µm, but the percolation threshold declines to ~0 at R < 15 nm or l > 10 µm. As a result, a small R value and a large l value significantly lower the percolation threshold, demonstrating the positive effects of thin and long nanotubes on the percolation threshold. Previous studies have reported identical influences of these parameters on the geometric percolation threshold [36,38], which confirms that the tunneling effect does not change the effects of the CNT dimensions on the percolation threshold. Thin and long nanotubes yield a high aspect ratio, which moves the percolation threshold to smaller filler concentrations. In fact, slim and long nanotubes increase the effective number and surface area of nanofillers in a nanocomposite, decreasing the particle-particle distance, owing to more inter-contacts between them and a thick interphase. As a result, the percolating and networking of thinner and longer nanotubes are easier than with thicker and shorter ones. In summary, the dependencies of the percolation threshold on the R and l parameters are reasonable, which demonstrates the correctness of the developed equation. Conclusions We simulated the electrical conductivity, tunneling distance, and percolation threshold of PCNT at various CNT concentrations. Good agreement between experimental results and estimations (deviation less than 5%) confirmed the predictability of the presented model. CNT length and characteristic tunneling length directly influenced the conductivity. High CNT waviness and a thin interphase generally resulted in an insulated nanocomposite, whereas less waviness and a thick interphase produced high conductivity. A high concentration of thin CNTs also caused high conductivity.
The nanocomposite conductivity changed from 0 to 0.9 S/m at different values of all the studied parameters using the developed equation. A high filler volume fraction, long and thin CNTs, low CNT waviness, and a thick interphase produced a short tunneling distance, which is desirable for tunneling conductivity. The developed equation yielded minimum and maximum tunneling distances of 0 and 18 nm, respectively. Furthermore, the highest filler concentration, the longest tunneling distance, the least CNT waviness, the thickest interphase, and the highest aspect ratio caused the lowest percolation threshold, which advantageously controls the conductivity. Our calculations indicated the percolation threshold to be in the range of 0 to 0.06. Among the studied parameters, the concentration and dimensions of the CNTs had the most significant effects on the percolation threshold, tunneling distance, and conductivity of nanocomposites.
6,199.2
2020-01-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Estimating Scalp Moisture in a Hat Using Wearable Sensors Hair quality is easily affected by the scalp moisture content, and hair loss and dandruff will occur when the scalp surface becomes dry. Therefore, it is essential to monitor scalp moisture content constantly. In this study, we developed a hat-shaped device equipped with wearable sensors that can continuously collect scalp data in daily life for estimating scalp moisture with machine learning. We established four machine learning models, two based on learning with non-time-series data and two based on learning with time-series data collected by the hat-shaped device. Learning data were obtained in a specially designed space with a controlled environmental temperature and humidity. The inter-subject evaluation showed a Mean Absolute Error (MAE) of 8.50 using Support Vector Machine (SVM) with 5-fold cross-validation with 15 subjects. Moreover, the intra-subject evaluation showed an average MAE of 3.29 in all subjects using Random Forest (RF). The contribution of this study is the use of a hat-shaped device with inexpensive wearable sensors attached to estimate scalp moisture content, which saves individuals from purchasing a high-priced moisture meter or a professional scalp analyzer. Introduction Hair and scalp health are essential parts of human health. Because the scalp is more delicate than the other skin on the human body, improper or no scalp care may lead to health issues, varying from hair loss to diseases according to severity. Practically, most people only pay attention to the protection of their hair. When people choose shampoo, conditioner, and other hair care products, they tend to focus on whether these products are suitable for their hair rather than whether these products are suitable for their scalp. Due to the interdependence between the scalp and hair, the hair blocks the ultraviolet radiation that is harmful to the scalp and helps the scalp moisturize, and a healthy scalp can produce healthy hair fibers [1]. Once problems such as baldness and scalp damage occur, these problems are more difficult to treat than hair damage; thus, scalp care is most important. Although some scalp care products are explicitly intended to prevent scalp health problems, it is difficult to say whether these products are effective because few are supported by corresponding medical literature [2]. An unbalanced diet and a disordered lifestyle could also cause scalp health conditions to change constantly. People cannot solve such a problem with scalp care products. However, we could roughly know the scalp condition by monitoring data such as scalp temperature and moisture content, and because human hair and scalp are easily affected by environmental temperature and humidity, it is also necessary to monitor environmental data. First, optimal scalp temperature is generally considered to be approximately 34 °C [3]. The moisture content of the skin's stratum corneum is roughly 10-30% [4], and the scalp moisture is higher than that, for example, 35-50%. If the scalp temperature is lower than this optimum, blood circulation to the scalp is reduced, insufficient oxygen is delivered, toxins accumulate around the scalp, and hair growth is impeded [5]. If the scalp temperature is too high, itching and pain will occur, and the scalp may develop a parasitic infestation and inflammation [6]. Hair loss and dry scalp will occur if scalp moisture is less than 10% [7]. Moreover, scalp moisture is easily affected by environmental humidity [8].
The most suitable environmental humidity for the skin is around 60%, which is the same for the scalp. High-humidity environments mean that mold and bacteria are likely to grow, sebum secretion becomes active, dandruff is produced, and the scalp takes on a bad odor. In addition, when the environmental humidity falls below 40%, static electricity will be generated and damage the scalp surface. Although a temperature sensor or thermistor could be used to sense scalp temperature continuously, there are no simple sensors or devices for sensing scalp moisture accurately, and specialized equipment is expensive and unsuitable for routine measurement because of its large size. Therefore, in this study, we developed a hat-shaped device equipped with wearable and environmental sensors to estimate scalp moisture content in a hat environment. We used the hat-shaped device in a specially designed space to collect scalp data and a mobile moisture meter to collect ground truth of scalp moisture under different environmental conditions to train four machine learning models: Support Vector Machine (SVM), Random Forest (RF), Neural Network (NN) with one-dimensional convolution layers (Conv1d), and NN with Gate Recurrent Unit (GRU). SVM, RF, and NN are typical machine learning models. Machine learning models have usually been programmed to recognize a specific pattern, which cannot be directly observed by people, after being trained by a data set. Machine learning models can learn some features from the data set and make inferences based on their algorithm, which means machine learning models can estimate previously unobserved data after training. The estimation results of scalp moisture were also evaluated under inter-subject and intra-subject. Finally, we experimented with estimating scalp moisture in a daily environment and obtained an application method with estimation results. Human Thermal Model Human thermal models describe the heat exchange and balance of the human body. Still, the complexity of each model is quite different. Among these models, Gagge et al. first proposed the 2-node model in 1971, which assumed a human body as a two-layer structure of skin layer and deep layer and gives biological data generated from heat exchange, such as body temperature and sweat rate, as a mathematical expression [9]. The general structure of the 2-node model is shown in Figure 1. The 2-node model's segmentation of human body parts is straightforward; however, in human thermal models developed subsequently, the number of human body segments has increased, and the expression of heat exchange in various parts of the human body has become more detailed. For example, the 25-node model proposed by Stolwijk [11,12]. Furthermore, some models are called biothermal models, which can be researched for therapies for some diseases or studying the habits of animals and plants if the functional objects of the human thermal models are extended to specific objects. Wakamatsu et al. proposed a treatment for brain hypothermia through a patient's biothermal model [13]. Men-Chi et al. predicted the transient temperature of diseased wall tissue by building a biothermal model of atherosclerotic patients to help doctors improve treatment procedures [14]. Separately, Luecke et al. studied how California sea lions behaved in water through biothermal models [15]. 
Romero Measurement of Scalp Moisture Content As the measurement of scalp moisture is included in the scope of the measurement of skin moisture, there have been few studies on the measurement of scalp moisture alone. Generally, skin moisture measurement is conducted by a moisture sensor. Previously, moisture sensors were developed based on traditional humidity sensors, such as interdigital capacitance [17]. Recently, new moisture sensors for skin moisture measurement have been continuously developed. Lu et al. developed an integrated, flexible, and small sensor system that can be fixed on a human finger to measure moisture and temperature in real time [18]. Mondal et al. developed an anodized aluminum oxide (AAO)-assisted MoS 2 honeycomb resistive humidity sensor with higher sensitivity [19]. These sensors are difficult to apply to the measurement of scalp moisture because it is difficult to keep the electrode part close to the scalp due to the obstruction of hair, so specialized equipment for scalp moisture must replace electrodes for probes. Some specialized devices can be used to measure scalp moisture. Still, most are expensive and difficult for individuals to purchase and use in daily life, such as transepidermal water transpiration meters or highly sensitive stratum corneum thickness and moisture meters. Thus, this paper proposes an estimation method for scalp moisture to replace the measurement of scalp moisture in daily life. Wearable Sensors in Clothing Wearable sensors are ubiquitous for data sensing in the clothing environment. Many studies have shown that placing sensors in clothes, hats, shoes, etc., can monitor the wearer's behavior patterns and improve their daily life. For example, Farringdon et al. made a jacket that uses fabric stretch sensors to measure upper limb and body activity [20]. Jayasinghe et al. built inertial sensors into various clothes to analyze and classify the wearer's behavior patterns [21]. Pham et al. put a wireless accelerometer in shoes and built a CNN that uses the accelerometer data to predict the seven kinds of daily activities of wearers [22]. Li et al. put optical fiber-Bragg-grating-based sensors into functional textiles and obtained the wearer's body temperature in real time through a weighted coefficient model constructed based on the body surface temperature where the textiles are located for health care [23]. Shahnaz et al. developed a smart hat that uses sonar sensors to detect obstacles on a straight path and a three-axis acceleration sensor to detect the behavior of older people wearing the smart hat to prevent them from falling [24]. Chang et al. installed a camera on a hat of a young child to recognize the objects that the child had seen and output it into audio so that the child could learn quickly in daily life [25]. In addition, smart shoes that can generate electricity have been proposed, which can provide new energy solutions [26]. Sensor Technology toward Smart Wearable System With the complexity of the application scenarios of wearable sensors, the miniaturization of wearable sensors is a trend, which means wearable sensors must be portable and flexible enough nowadays. The developed biosensors for wearable systems have achieved some advancements in recent years. Wang et al. reported a wearable electrochemical biosensor to analyze sweat in physical exercise and at rest [27]. The biosensor has monitored the amino acid levels of the wearer to assess the risk of metabolic syndrome. Ferro et al. 
created a submillimeter high-power supercapacitor and a biomolecule probe, which provided a great possibility for the miniaturization of biosensors [28]. Wang et al. proposed a flexible and low-cost humidity sensor with fast responses and successfully applied it to detect human breathing and provide an electrical safety warning for bare hands and wet gloves [29]. Yang et al. proposed a wearable sweat sensor that could detect vitamin C and uric acid based on Metal-Organic Frameworks (MOFs). The sweat sensor can be attached to the skin due to good air permeability [30]. Additionally, Nawaz et al. claimed that Organic Electrochemical Transistors (OECTs) are suitable for manufacturing biosensors [31]. Because of electrolyte gating and aqueous stability, OECTs could be operated within a living organism. Proposed Method This study aimed to help people's scalp care by sensing scalp data in daily life. Thus, we estimated the scalp moisture content, an essential indicator of scalp health, based on machine learning using wearable and environmental sensors attached to a hat. First, we obtained scalp and environmental data in a specially designed space with controlled environmental temperature and humidity. Second, we trained the machine learning models as pre-trained models using the obtained data. When new scalp and environmental data obtained by the hat-shaped device are input to the pre-trained models, scalp moisture content would be estimated as an output. We selected some features for training the machine learning models by referring to the variables of the 2-node model. In the 2-node model, the heat exchange inside the human body is determined from three kinds of data: individual differences (height, weight, age, and sex), environmental conditions, and biometric information. We did not consider individual differences in this study. Environmental conditions can be regarded as environmental temperature and humidity. Because the two nodes of the 2-node model are the skin and the body core, we adopted the scalp surface and body core temperature as biometric information for estimating scalp moisture. We also added heart rate to the machine learning features to express the activity of the human body. For collecting the data mentioned above, we used different sensors. For aggregating these sensors, we developed a hat-shaped device with wearable and environmental sensors attached, as shown in Figure 2. The hat-shaped device is not only used to collect data but also to estimate scalp moisture content in real time based on a trained machine-learning model. In the hat-shaped device, we fixed an NTC thermistor for scalp surface temperature measurement and a DHT22 for the internal environmental temperature and humidity measurement in the hat. We mounted an Arduino Nano, Bluetooth module RN-42, mobile battery for power supply, NTC thermistor for core body temperature measurement, pulse sensor for heartbeats measurement, and DHT22 for external environmental temperature and humidity measurement outside the hat. We measured scalp surface temperature, core body temperature, heartbeats, internal hat temperature, internal hat humidity, external hat temperature, and external hat humidity. The corresponding relationship between sensors and data is shown in Table 1. Moreover, we also developed an Android application mainly to facilitate data acquisition from experimental participants. The application could store the data from the hat-shaped device in a small local database on a mobile phone with SQLite3. 
It could also control environmental control equipment with REST API and record correct machine learning data. A screenshot of it is shown in Figure 3. The interface of the application is roughly composed of six parts. The top toggle button is used to connect the Bluetooth module RN-42. The four toggle buttons below it are used to control the experimental environment. The following two buttons are used to start and stop the hat-shaped device sensor. Next is the countdown. The editable textbox and button are used to record the ground truth of scalp moisture. The bottom space is for showing the application log massage. The usage of the application will be described in Section 4 with the experimental process. Experiment Because changes in environmental temperature and humidity would affect the water content of the stratum corneum [8], we obtained the biometric data from experimental participants and environmental data around experimental participants to train and evaluate the estimation results of the machine learning models while regulating the current environmental temperature and humidity. We collected 15 participants' data in a specially designed space. Experimental Environment Our experimental environment is shown in Figure 4. The specially designed space is a 1.5 m × 1.5 m × 2.0 m pipe-type booth containing environmental control equipment (heater, cooler, humidifier, and dehumidifier). We used the equipment and a central cooler to modify the environmental temperature and humidity in the booth. SwitchBot hub mini could record the infrared signals. We registered the infrared signals of the remote controllers of the heater, cooler, and humidifier in the SwitchBot hub mini for controlling them on and off via the Android application mentioned in Section 3 with the REST API function. Moreover, we set two SwitchBot bots to the position adjacent to the switch button of the dehumidifier and central cooler to turn them on and off via the Android application. The four toggle buttons below the "CONNECT" button in Figure 3 correspond to the switches of the cooler, heater, humidifier, and dehumidifier. The relationship diagram of the environmental control equipment is shown in Figure 5. During the experiment, we acquired biometric and environmental data by letting the participants wear the hat-shaped device while working in a seated position in the pipe-type booth. When the participants were wearing the hat-shaped device, we stuck the tip of an NTC thermistor to their scalps. We also inserted and fixed the tip of another NTC thermistor in the participants' right ear canal and attached a pulse sensor to their little fingers. Then, we acquired scalp surface temperature, core body temperature, and heartbeats while modifying the temperature and humidity in the booth. Acquisition of Ground Truth of Scalp Moisture Generally speaking, skin sensors are rarely used to measure scalp moisture, and the measurement is inaccurate because it is vulnerable to disturbances, such as air humidity and sweat. As mentioned in Section 2.2, special measurement instruments are also unsuitable for daily life because of their high cost and large size. In this study, the ground truth of scalp moisture was obtained using the Mobile moisture HP19-M developed by Courage+Khazaka Inc. Naturally, in skin measurement, the relative water content can be obtained using the electrode from a water meter attached to the skin, but in the case of the scalp, the electrode cannot be attached to the scalp because the hair blocks it. 
However, the moisture in the forehead close to the scalp is roughly the same as in the scalp. Therefore, the upper forehead moisture was measured in the experiment and used as the ground truth for machine learning. Experimental Procedure Before starting the experiment, participants were told to operate the environmental control equipment according to the given orders, as shown in Figure 6. There are four orders corresponding to different operation sequences of environmental control equipment. We disrupted the sequence of environmental temperature and humidity changes to obtain more data under more environmental conditions, allowing the machine learning model to cope with a variety of environmental modes. For example, in Operation Order 1, the participant should first measure the data under room temperature and humidity for 15 min (no environmental control equipment working). In the second 15 min, he/she turns on the cooler and dehumidifier for measurement. In the third 15 min, he/she turns off the previous equipment and turns on the heater and humidifier. In the fourth 15 min, he/she turns the humidifier off and the dehumidifier on. In the last 15 min, he/she turns the previous equipment off and turns the cooler and humidifier on until the end of the experiment. The other orders follow roughly the same pattern. The switches of environmental control equipment were all completed by operating the Android application. During each 15 min interval, participants would take five times measurements, each lasting 3 min. When starting the measurement, the participants would press the "START SENSING" button in the Android application. The countdown below the "START SENSING" button would automatically begin, and there would be a vibration prompt when the countdown ends. Participants would then press the "STOP SENSING" button, and the countdown reset. After each measurement, the first author of this paper would use the HP19-M mentioned in Section 4.2 to measure the participant's forehead moisture three times and ask them to fill in the median value of the three measurements of scalp moisture in the editable textbox in the Android application. Every time the moisture was measured, the first author would gently wipe the participant's forehead with tissue to prevent the sweat produced in the experiment from affecting the measurement. During the experiment, the first author was in another pipe-type booth next to the participants, instructing them to switch the environmental control equipment. In addition, the operation orders were always displayed on a laptop, which was kept on a table for the participants to check at any time. The ground truth measurement in the experiment is demonstrated in Figure 7. Machine Learning and Data Preprocessing We used SVM, RF, NN with Conv1d, and NN with GRU for estimating scalp moisture under 5-fold cross-validation. Both Conv1d and GRU can handle time-series data. All models performed a grid search before estimation. Grid search is a technique for optimizing machine learning model hyper-parameters, and it can significantly increase the model estimation accuracy [32]. When training SVM and RF, we used Synthetic Minority Over-Sampling Technique for Regression with Gaussian Noise (SMOGN) to oversample the learning data to avoid insufficient samples under some environmental conditions [33]. When training the NN with Conv1d and NN with GRU, we used early stopping to inhibit overfitting [34]. 
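As a rough illustration of this training setup, the sketch below runs a grid search for the SVM and RF regressors with scikit-learn and reports MAE and RMSE (the evaluation metrics described next) under 5-fold cross-validation. The hyper-parameter grids and the synthetic X and y stand-ins are illustrative assumptions; the SMOGN oversampling and the Conv1d/GRU networks used in the study are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

# Stand-in data: 375 samples x 21 features, scalp moisture in percent.
rng = np.random.default_rng(0)
X = rng.normal(size=(375, 21))
y = rng.uniform(20, 60, size=375)

models = {
    "SVM": (SVR(), {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]}),
    "RF": (RandomForestRegressor(random_state=0),
           {"n_estimators": [100, 300], "max_depth": [None, 10]}),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, (estimator, grid) in models.items():
    maes, rmses = [], []
    for train_idx, test_idx in cv.split(X):
        # Grid search over the assumed hyper-parameter grid on the training fold.
        search = GridSearchCV(estimator, grid, cv=3,
                              scoring="neg_mean_absolute_error")
        search.fit(X[train_idx], y[train_idx])
        pred = search.predict(X[test_idx])
        maes.append(mean_absolute_error(y[test_idx], pred))
        rmses.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    print(f"{name}: MAE = {np.mean(maes):.2f}, RMSE = {np.mean(rmses):.2f}")
```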
Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were used to evaluate the estimated results. MAE is the mean of the absolute errors of the difference between the ground truth of scalp moisture and the estimated scalp moisture values. RMSE is the root of the mean of the square of the difference between the ground truth of scalp moisture and the estimated scalp moisture values. The SVM and RF were implemented in Scikit-learn [35], and NN with Conv1d and NN with GRU were implemented in TensorFlow [36]. Moreover, because the magnitude difference between the input and output data of the models might cause significant errors in the estimated scalp moisture, we normalized the input data. Because we used many different sensors, we hoped that the influence of the different physical quantities could be reduced while training the model. Therefore, we chose z-score normalization, which is given by Equation (1) [37]: X_norm = (X − µ)/σ, where X is the raw data, µ is the mean of the raw data, and σ is the standard deviation of the raw data. X_norm is the normalized data. Furthermore, we expanded the features of the input sensor data while training the models. Because each feature of the sensor data is related to time, we added the mean and variance of the ten sets of data (approximately 20 s) before a certain time point of the same feature value to the current moment data set. The input sensor data set was thus increased from 7 to 21 dimensions. We obtained a total of 375 sets of data from 15 experimental participants. For each fold of cross-validation, 300 sets were used as training data, and 75 sets were used as testing data. When training the machine learning models based on time-series data, because each set of data had a time length of 80 (approximately 160 s), for each fold of cross-validation, data of shape 300 × 80 × 21 were used for training, and data of shape 75 × 80 × 21 were used for testing.
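A minimal sketch of this preprocessing step is given below, assuming the seven sensor channels are held in a pandas DataFrame. The column names, the handling of the first ten samples, and the exclusion of the current sample from the window are illustrative choices, not details taken from the study.

```python
import pandas as pd

SENSOR_COLS = [
    "scalp_temp", "core_temp", "heart_rate",
    "hat_temp_in", "hat_hum_in", "hat_temp_out", "hat_hum_out",
]  # assumed names for the seven sensor channels

def zscore(df: pd.DataFrame) -> pd.DataFrame:
    """Equation (1): subtract the mean and divide by the standard deviation."""
    return (df - df.mean()) / df.std()

def expand_features(df: pd.DataFrame, window: int = 10) -> pd.DataFrame:
    """Append the rolling mean and variance of the previous `window` samples
    (about 20 s at a 2 s record interval), growing 7 features to 21."""
    out = df.copy()
    for col in SENSOR_COLS:
        past = df[col].shift(1).rolling(window)   # exclude the current sample
        out[f"{col}_mean"] = past.mean()
        out[f"{col}_var"] = past.var()
    return out.dropna()                           # drop rows without a full history

# Usage: normalized = zscore(raw_df[SENSOR_COLS]); features = expand_features(normalized)
```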
The RMSEs of NN with Conv1d and GRU are higher than those of both SVM and RF. It seems that NN with Conv1d and GRU are more susceptible to outliers. Although we collected the data from 15 participants with 25 samples per person, this may be insufficient to train NN with Conv1d and GRU because they have deeper structures and require more learning data than SVM and RF. Therefore, we plan to increase the number of participants or the number of experiments per participant in the future. Linear Correlation between Features of Learning Data and Scalp Moisture Although we are unsure about the relationship between the data collected from the hat-shaped device and scalp moisture, we investigated the linear correlation between the SVM and RF features and the scalp moisture via Principal Component Analysis (PCA), which could explain the importance of the features of the learning data. PCA could reduce the dimensionality of the learning data and minimize information loss [38]. Because time-series data cannot be analyzed with PCA, we only show the results for the features used in SVM and RF. The correlation coefficients of each feature against the scalp moisture content are shown in Table 3. The features of SVM and RF we chose are the average values of the instantaneous, mean, and variance of each sensor's data obtained within 3 min. The correlation coefficient ranges from −1 to 1. The judgment of linear correlation is as follows: when the coefficient is between −0.1 and 0.1, there is no linear correlation; when the coefficient is between 0.1 and 0.5 or between −0.5 and −0.1, there is a weak linear correlation; and when the coefficient is above 0.5 or below −0.5, there is a strong linear correlation. We do not discuss positive or negative correlations here. In Table 3, we can see that the average instantaneous and mean core body temperature are strongly linearly correlated with scalp moisture. The remaining averages of the instantaneous and mean features, except for external hat humidity, are weakly linearly correlated. In addition, the variance of most features is uncorrelated. Although the above results are only for the linear situation, we can confirm that the variables we selected regarding the 2-node model are related to scalp moisture. Estimation Results of Intra-Subject Evaluation We used the four machine learning models to construct personal models of each participant to conduct the intra-subject evaluation. We still used 5-fold cross-validation for each personal model. Table 4 shows the average MAE and RMSE of the 5-fold cross-validation of the 4 models for each experimental participant and the overall average of the 15 participants. Likewise, we can see from Table 4 that the results of SVM and RF are generally better than those of NN with Conv1d and GRU. The average MAE and RMSE of RF across the 15 participants, which are 3.29 and 3.97, are smaller than those of the SVM and than the estimation results in the inter-subject evaluation. Therefore, RF performs better in estimating the individual scalp moisture content, and SVM has good performance in both inter-subject and intra-subject evaluations. The MAE and RMSE of NN with Conv1d are larger than those in Section 5.1, and it could not estimate the scalp moisture correctly. NN with GRU has a smaller MAE for some participants and a larger MAE for the rest.
Due to the further reduction in the available data for personal models (only 25 samples per participant), it became more challenging to train NN with Conv1d and GRU, so we did not obtain good results with these two models. The maximum range of moisture content of the scalp is 0-99%, and in general, the difference between the maximum and minimum moisture content of the stratum corneum of the skin should be 15-20% [4]. Although the best MAE result (8.59) obtained in Section 5.1 is somewhat large relative to the above range, the average result (3.29) of the MAE of RF obtained in this section is acceptable. Estimation Results in the One-Day Experiment We conducted a one-day experiment to verify whether the proposed method can estimate the scalp moisture content in daily life. The experiment lasted from 3:00 p.m. until 9:30 p.m. JST. The experimental participant was the first author. The participant wore a hat-shaped device and used the Android application to collect data. The data record interval was 2 s. The ground truth of scalp moisture was measured 3 times every 15 min. The median value of the three scalp moisture measurements was recorded in the Android application. We measured learning data and ground truth in many situations, such as shopping in a supermarket, walking in a seaside park, sitting on a train, stopping at an underground train station, and walking along the street. The one-day itinerary of the experimental participant was roughly as follows. First, he made some preparations at home, went to the supermarket on foot, took the train to the seaside park for a walk, and then took the train home. After returning home, he engaged in some leisure activities. We did not plan the experimental participant's itinerary, and his behavior during the experiment was unconstrained. We used the personal model trained in Section 5.3 to estimate scalp moisture content. The estimation results are shown in Table 5, and the estimated curves from the four models are shown in Figure 9. From Table 5, we can see that the MAE of SVM is the smallest, followed by NN with GRU and RF, and the estimation accuracy of NN with Conv1d is the worst. Although the MAE of SVM is better than the average MAE in Section 5.1, we can see from Figure 9 that the estimated curve of SVM is a straight line, which does not fit the ground truth of scalp moisture very well. The same occurred for RF. By contrast, although NN with GRU had a significant deviation, the changing trend of its estimated curve fitted the ground truth of scalp moisture very well. Combined with the results in Section 5.3, we believe that if we could acquire more training data, NN with GRU would perform better in estimating scalp moisture in daily life. To further investigate why the estimation result is not very good, we analyzed the changes in internal hat temperature and humidity, external hat temperature and humidity, and the ground truth of scalp moisture during the one-day experiment; these data are shown in Figures 10 and 11. The two figures show that the temperature and humidity outside the hat changed significantly. For example, because the participant was sitting on a train beside a door, every time the train stopped at the platform and the door was opened, there were apparent changes in temperature and humidity. When the participant was walking along the seaside, the relative humidity of the environment was as high as 90%.
Other such occurrences in daily life would also lead to changes in environmental temperature and humidity, such as stepping out onto a balcony or picking up takeout. These situations are challenging to simulate in the experiment of Section 4, so it was difficult to collect learning data that reflect sudden changes in environmental conditions; this is the main factor preventing accurate estimation of scalp moisture in daily life. Therefore, we must expand the simulatable temperature and humidity range in the pipe-type booth and gather more experimental data under different environmental conditions. Measurement of Core Body Temperature Theoretically, core body temperature is approximately 37 °C [39]. In the experiment of Section 4, we placed an NTC thermistor in the participants' ears to obtain core body temperature. Still, the measured value was lower than the standard value because it was difficult to fix the NTC thermistor firmly and have it reach the eardrum. We think the NTC thermistor only acquired the ear canal's temperature, not the body's core temperature. Because it is dangerous for the experimental participants to be active, such as running or walking, with NTC thermistors in their ears, safer measurement methods are necessary. We might instead consider a way of fixing an infrared temperature sensor in place or estimating core body temperature indirectly. Ground Truth of Scalp Moisture Some ground truth of scalp moisture obtained in the experiments of Section 4 exceeded 90%. This is because we asked the participants to constantly change the temperature and humidity of the environment to obtain learning data under different environmental conditions; some sweated a lot when they switched to a hot environment. Although we wiped the participants' foreheads before measuring the scalp moisture, it did not completely prevent this from happening. Thus, we think that some of the ground truth of scalp moisture may have deviations, which could lead to a decrease in estimation accuracy. Application Even considering that we did not obtain ideal core body temperature and scalp moisture data, as discussed in Section 6, some machine learning models performed sufficiently well. Therefore, we present a cloud-service-based method for applying well-performing machine learning models to estimate scalp moisture in real time. The approach is shown in Figure 12. Specifically, we deployed a pre-trained personal machine learning model on a virtual machine in Azure and used MLflow to publish an online service that continuously estimates scalp moisture. MLflow is an open-source platform to manage the machine learning lifecycle, including experimentation, reproducibility, deployment, and a central model registry [40]. The service published by MLflow could exchange learning data and estimation results between the Android application and the cloud virtual machine through the REST API. Additionally, the deployed machine learning model could run in a Docker container, and we could mount multiple containers and deploy multiple personal models simultaneously; a minimal sketch of this deployment flow is given after the Conclusions below. Conclusions In this paper, we proposed a hat-shaped device equipped with wearable and environmental sensors to estimate scalp moisture content. We estimated the scalp moisture of fifteen experimental participants through four machine-learning models based on scalp surface temperature, core body temperature, internal hat temperature and humidity, external hat temperature and humidity, and heartbeat obtained from the hat-shaped device.
SVM had the smallest MAE in the inter-subject evaluation, and RF had the smallest average MAE in the intra-subject evaluation. In summary, SVM performs well in both inter-subject and intra-subject evaluations. For the one-day experiment, although the MAE of NN with GRU was slightly larger, the trend of the estimated curve was good. Therefore, we need to improve the experimental conditions and obtain more data to estimate scalp moisture content accurately in daily life. We also presented a cloud-service-based method for applying the scalp moisture estimation results. Estimating scalp moisture content is feasible using machine learning models trained on the learning data obtained from the hat-shaped device. Although some data values are biased, the scalp moisture estimation accuracy will be improved if these biases can be further reduced. As mentioned, one future task of this study is to obtain more data to improve the accuracy of scalp moisture estimation, which still requires many experiments. In addition, in daily life, it is too dangerous to insert the tip of the NTC thermistor into the ear canal to obtain core body temperature. Thus, we must consider other acquisition methods or estimate scalp moisture content without core body temperature.
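Returning to the cloud-service deployment described in the Application section above, the following is a minimal sketch of logging and serving a personal model with MLflow; the model choice, port and Azure setup are illustrative assumptions, and the exact REST payload format depends on the MLflow version.

# Minimal sketch of the cloud deployment flow described above, assuming a
# scikit-learn personal model; names, ports and the Azure VM setup are illustrative.
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor().fit(np.random.rand(100, 21), np.random.rand(100))

with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, artifact_path="scalp_moisture_model")
    model_uri = f"runs:/{run.info.run_id}/scalp_moisture_model"

# On the Azure virtual machine, the logged model can then be served as a REST
# endpoint (optionally inside a Docker container), e.g.:
#   mlflow models serve -m <model_uri> --port 5000
# The Android application would POST sensor feature vectors to
#   http://<vm-address>:5000/invocations
# and receive the estimated scalp moisture in the JSON response.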
7,960.4
2023-05-01T00:00:00.000
[ "Computer Science" ]
Peptide Binding Sites of Connexin Proteins: Intercellular gap junction (GJ) contacts formed by the coupling of connexin (Cx) hemichannels (HCs) embedded into the plasma membranes of neighboring cells play a significant role in the development, signaling and malfunctions of mammalian tissues. Understanding and targeting GJ functions, however, calls for finding valid Cx subtype-specific inhibitors. We address the lack of information about binding interactions between the GJ-interface-forming extracellular EL1 and EL2 loops and peptide mimetics designed to specifically inhibit Cx43HC coupling to Cx43GJ. Here, we explore active spots at the GJ interface using known peptide inhibitors that mimic various segments of EL1 and EL2. Binding interactions of these peptide inhibitors and of the non-peptide inhibitor quinine have been modelled using blind docking molecular mechanics (MM). The neuron-specific Cx36HC and astrocyte-specific Cx43HC subtypes were modelled with a template derived from the high-resolution structure of Cx26GJ. GJ-coupled and free Cx36HC and Cx43HC models were obtained by dissection of GJs (GJ-coupled) followed by 50 ns molecular dynamics (free). Molecular mechanics (MM) calculations were performed by the docking of inhibitors, explicitly the designed Cx43 EL1 or EL2 loop sequence mimetics (GAP26, P5 or P180–195, GAP27, Peptide5, respectively) and the Cx36 subtype-specific quinine, into the model structures. In order to explore specific binding interactions between inhibitors and CxHC subtypes, MM/Generalized Born Surface Area (MM/GBSA) ∆G bind values for representative conformers of peptide mimetics and quinine were evaluated by mapping the binding surface of Cx36HC and Cx43HC for all inhibitors. Quinine specifically contacts Cx36 EL1 residues V54-C55-N56-T57-L58, P60 and N63. Blocking the vestibule by the side of Cx36HC entry, quinine explicitly interacts with the non-conserved V54, L58, N63 residues of Cx36 EL1. In addition, our work challenges the predicted specificity of peptide mimetics, showing that the docking site of peptides is unrelated to the location of the sequence they mimic. Binding features, such as unaffected EL2 residues and the lack of Cx43 subtype-specificity of peptide mimetics, suggest critical roles for peptide stringency and dimension, possibly pertaining to the Cx subtype-specificity of peptide inhibitors. Introduction Gap junctions (GJs) formed by the coupling of various connexin (Cx) subtype hemichannels (CxHCs, connexons) maintain adhesion and conduction between adjacent cells [1][2][3]. They are recognized as critical players in the development and disease of mammalian tissues [4][5][6] (and references cited therein). All Cx subtypes contain two extracellular loops (EL1 and EL2) which are supposed to participate in GJ formation [7]. Previously, Warner et al. [8] conjectured a link between HC coupling to GJ and conserved amino acid (AA) motifs, with QPG of EL1 being among the residues that possibly intervene in HC coupling to GJ. Significantly, the consensus sequence of the conserved AAs of EL1, DEQSxFxCNTxQPGCxNVCYDxx, highlights residues that are fully identical (bold) or identical in at least 50% of the some twenty human Cx proteins [9]. One third of the EL1 AAs (x) are less conserved, providing an opportunity for subtype-specific inhibitor design. An exposed tetrapeptide sequence within the EL2 sequence has also been emphasized concerning the specificity of inter-connexon interaction [10,11].
As a result, several EL1 or EL2 mimicking peptides targeted to inhibit the coupling of HC to GJ have been developed [12][13][14][15]. Unexpectedly, GJ inhibitor peptide mimetics do block CxGJ-facilitated intercellular communications, but in unanticipated ways [16]. We and others [2] have sought to develop a more detailed understanding of HC coupling to GJ, first made apparent at the molecular level by the discovery of the X-ray structure of Cx26 GJ at 3.5 Å resolution [17]. We began this study by identifying the contact area between the peptide inhibitors and GJ-coupled and free Cx36/Cx43 connexon models using an iterative blind docking approach that did not require existing experimental binding data [33]. By blind docking peptide sequence mimetics, we also explored the valid binding area. Calculations of the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) ∆G bind values were performed [34] on the 30 best scoring poses of the blind docking trials to fit CxHC peptide inhibitors and quinine. This method has proved to be a powerful tool to predict binding affinities and identify the correct binding poses for protein-peptide complexes [35]. Figure 1. Plot of Cx43 protomer explaining sequence identity of peptide inhibitors GAP26, P5, GAP27, P180-195 and Peptide5. The membrane-embedded residues, including four transmembrane (TM) helices, are recognized using the "positioning proteins in membrane" (PPM) server by the "orientation of proteins in membrane" (OPM) database [31]. AA colour code: aromatic-green; hydrophobic-gray; basic-blue; acidic-red; polar neutral-orange; Cys-lemon. Figure was generated by Protter [32]. Homology Modelling of Homomeric GJs Formed by Cx36 or Cx43 Homomeric GJ models built up exclusively by Cx36 or Cx43 protomers were constructed using the X-ray structure (2zw3) of homologous Cx26 GJ [17] as a template and the Swiss-Model server facility [36]. This initial PDB structure includes a whole GJ, explicitly chains A-B-C-D-E-F-G-H-I-J-K-L, shaping the extracellular (interface) and transmembrane (TM) regions, except the large intracellular region, due to its potentially disordered nature. Sequences of cytosolic residues, without experimentally determined coordinates in the crystal structure, were utilized to connect individual TM helices intracellularly using the built-in protocol of the Swiss-Model server. Our model GJ structures enfold two hexameric HCs that are formed by apposed protomer chains A-B-C-D-E-F coupled with chains G-H-I-J-K-L. Since there are no significant insertions or deletions among Cx subtypes in the extracellular and TM regions, the built-in automated alignment of Swiss-Model was used for creating individual homology models.
The modelling server puts on hydrogen atoms and arranges amino acid (AA) sidechains so that no clashes appear in the structure. Otherwise, the alpha carbon backbone of homomeric GJs formed by Cx36 or Cx43 resembles the Cx26 structure, reflecting the 6-fold symmetry. Due to these preparatory steps, homology models were ready for the separation of the A-F chains from the whole GJ (A-L). Hence, homomeric Cx36GJ and Cx43GJ models were used to cut off solo connexons, i.e., Cx36HC or Cx43HC. To this end, G-L protomer chains of both Cx36 and Cx43 GJs were removed by means of Schrödinger's Maestro module [37]. The remaining A-F chains, representing the GJ-coupled model of Cx36HC and Cx43HC, are characterized by their extracellular front (Figure 2A) and top (Figure 2B) views. For the latter, the z-axis of the coordinate system was set up to point towards the channel. EL1 and EL2 loops of each chain are depicted using the "positioning proteins in membrane" (PPM) server by the "orientation of proteins in membrane" (OPM) database [31], as detailed in Section 2.
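The dissection of a single connexon from the full GJ was done in Schrödinger's Maestro; purely as an illustrative alternative, the same chain removal can be sketched with Biopython, where the file names are assumptions.

# Illustrative alternative to the Maestro step described above (not the authors'
# workflow): removing protomer chains G-L from a full GJ structure to keep the
# A-F hemichannel, using Biopython. File names are assumptions.
from Bio.PDB import PDBParser, PDBIO

parser = PDBParser(QUIET=True)
structure = parser.get_structure("gj", "cx43_gj_model.pdb")

model = structure[0]
for chain_id in list("GHIJKL"):
    if chain_id in [c.id for c in model]:
        model.detach_child(chain_id)   # drop the apposed connexon

io = PDBIO()
io.set_structure(structure)
io.save("cx43_hc_AF.pdb")              # GJ-coupled hemichannel (chains A-F)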
Since this structure represents the conformation of connexins in the full GJ form and peptides are expected to interact with the hemichannel form, we applied 50 ns molecular dynamics (MD) to allow the protein to adopt a free hemichannel-like form that no longer reflects the prior presence of the apposed connexon (G-L chains). Molecular dynamics (MD) calculations were performed using the Desmond software package obtained from DE Shaw Research [37]. Cx36HC and Cx43HC transmembrane regions were defined according to Section 2.2. These HCs were then prepared via the Protein Preparation Wizard, and membranes were added using the System Builder menu by placing the membrane on the pre-aligned structure obtained from PPM. The model was relaxed before simulation and molecular dynamics was run at 300 K for a total simulation time of 50 ns at the supercomputer facility of the Governmental Agency for IT Development (KIFU), Hungary. The resulting free Cx36HC and Cx43HC model structures (Figure 2C,D) were used to dock mimetic peptides. Determination of the Position of TM Regions Successful prediction of inhibitor binding at extracellular CxHC areas necessitates the realistic arrangement of membrane bilayers. Explicitly, this concerns the genuine arrangement of the extracellular loops EL1 and EL2 of CxHC subtypes, which critically depends on the valid position of the membrane bilayer. Initially, the determination of the position of TM regions was focused on the primary sequence of the Cx43 protein. Following Delvaeye et al. [15], we also sampled the sequence-based prediction of TM regions based on Uniprot's TM assignment of 14-36, 77-99, 155-177, 209-231. These data were based on the prediction of the Transmembrane Protein Topology with a Hidden Markov Model (TMHMM) server [38], although the prediction of the first TM region of Cx43 was mistakenly identified as 14-36, and is currently under correction (Uniprot-personal communication). When TM regions were assigned according to 3D structures of our CxHC models, a significantly different TM1 region, 21-46, was obtained using the "positioning proteins in membrane" (PPM) server by the "orientation of proteins in membrane" (OPM) database [31]. To clarify the issue of real membrane boundaries determining the extracellular EL1/EL2 domains, a series of sequence-based predictions of the TM regions from the CCTOP server [39] were weighed against the 3D structure-based predictions of the TM regions from the PPM server by the OPM database [31], in addition to an earlier TMDET server [40]. The procedure of the 3D structure-based assessments of the TM regions via the PPM and OPM approach seems to provide more detailed information about the membrane-embedded amino acids (AAs). This way, the membrane boundaries were determined, providing the consensus Cx36 membrane-embedded amino acids (AAs). TM domains, characterized by two shorter and two longer TM helices in succession together with the embedded residues near the N-terminal, are recognized. It is worth mentioning that structural motifs of CxHCs [2,17,41,42] (also this work) may reveal similar mechanistic clues such as the ball-and-chain inactivation in the K+ channel [43]. In fact, the ball-and-chain mechanism was further confirmed by the recently determined 3D structure of Cx26 [44].
Topological Arrangements of Cx43 EL1 and EL2 Sequences Identical with EL1-Mimetic GAP26 and EL2-Mimetic GAP27 or P180-195 The question arises as to whether the peptide mimetics designed to specifically inhibit Cx43HC coupling to Cx43GJ shall act along the GJ-interface surface. Figure 3 shows that AA residues of GAP26 (blue), GAP27 (red) and P180-195 (green) shape extracellular Cx43HC interfaces. Explicitly, the AAs corresponding to mimetic peptides GAP26, GAP27 and P180-195 are principally located at interfaces between inner EL1 and outer EL2 or at the peripheral boundary interface of EL2 (Figure 3). The arrangement illustrates that permutations of the vertical inter-loop interface joined with the horizontal loop-periphery interface are beyond the most exposed EL1 loop sequences, lying on the front of the CxHC coupling reaction. These findings conclusively suggest that the location of interfaces identified by matching AA sequences of designed peptide mimetics argues against the notion that mimetics could directly inhibit connexon coupling to GJ. Notably, EL1 sequences contiguous to channel-forming TM helices take the shape of a vestibule by the side of HC entry. Figure 3. Residues of the free Cx43HC model corresponding to three commonly used peptide mimetic inhibitors derived from different EL1 and EL2 sequences, GAP26 (blue), GAP27 (red) and P180-195 (green), in front (A) and top (B) views with the z-axis pointing towards the channel. P180-195 labeling was removed from (B) for clarity. Peptide residues are shown in surf representation on the extracellular region of Cx43. Other residues of EL1 and EL2, not corresponding to peptide mimetic inhibitors, are shown in gray surf. The membrane-embedded residues, including four TM helices recognized using the PPM server by the OPM database [31], are shown in the transparent light gray cartoon.
Validation of Blind Docking Procedure via Optimization of MM/GBSA ∆G bind Values Docking calculations were performed using the Schrödinger Small-Molecules Drug Discovery Suite 2020-1 software package [45]. The in silico calculations were performed based on the recommendations of Tubert-Brohman et al. [46] for peptide docking with increased accuracy. The Glide SP-Peptide mode was used for docking with subsequent MM/GBSA ∆G bind ranking calculations [34]. The structures of the connexins Cx36 and Cx43 were prepared using the Protein Preparation Wizard program, while the quinine structure was prepared using the Ligprep module. Two binding area definitions were used around the EL1-EL2 loop regions, and docking grids suitable for peptide docking were generated (Figure 4). The ligand diameter midpoint box sizes were increased from the default to values between 30 and 40 Angströms to cover the whole binding site region. During the Glide SP-Peptide mode docking calculations, sampling was enhanced by a factor of 2 and the expanded sampling option was used as well. Because of the increased volume of the binding site region, instead of the default number 1000, the 3000 best poses were used for energy minimization to assure an exhaustive search of the possible binding poses. The Schrödinger peptide docking protocol uses an initial Macromodel conformational search instead of the Confgen conformational search performed by the Ligand docking protocol. To model the peptide docking procedure, a Macromodel conformational search was performed on all ligand structures. The resulting conformers were clustered into five structurally different clusters. Based on visual inspection, only representative structures for the first three clusters were taken into account during docking calculations with the "Canonicalize input conformation" Glide option turned off. The MM/GBSA ∆G bind calculations were performed on the best 30 poses. The resulting poses from the two docking grids were combined and analyzed together.
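The following is an illustrative post-processing sketch, not part of the Schrödinger workflow, of how pose tables exported from the two docking grids could be combined and the best MM/GBSA ∆G bind value per ligand and receptor reported from the 30 best-scoring poses; the file and column names are assumptions.

# Illustrative post-processing sketch (not Schrödinger's API): combine the pose
# tables exported from the two docking grids, keep the 30 best-scoring poses per
# ligand/receptor pair, and report the best MM/GBSA dG_bind value for each.
import pandas as pd

poses = pd.concat([pd.read_csv("grid1_poses.csv"), pd.read_csv("grid2_poses.csv")])
# expected columns (assumed): ligand, receptor, docking_score, mmgbsa_dg_bind

top30 = (poses.sort_values("docking_score")
              .groupby(["ligand", "receptor"], as_index=False)
              .head(30))

best = (top30.groupby(["ligand", "receptor"], as_index=False)["mmgbsa_dg_bind"]
             .min())          # most negative dG_bind = strongest predicted binding
print(best)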
Mapping Binding Interactions of Inhibitory Peptide Mimetics and Quinine in Model CxHC Structures MM/GBSA ∆G bind values obtained for the best 30 blind docking runs (see previous paragraph) for quinine and each peptide mimetic inhibitor have been evaluated. The best MM/GBSA ∆G bind values obtained from docking quinine to Cx36 or Cx43 GJ-coupled models indicate that quinine prefers binding to Cx36 over Cx43 (Figure 5A-C). However, docking quinine into the free Cx36HC and Cx43HC structures did not show subtype-specificity (Figure 5A,D,E). In this structure, quinine was not able to dock onto the same surface identified in docking to the GJ-coupled Cx36 structure, and instead preferred docking to the outer surface of the extracellular region (Figure 5D,E). Filtering the docking results for poses on the inner surface still did not reveal any subtype-specific interaction (Figure 5A, bottom). These data suggest that quinine shall exert its subtype-specificity by entering the GJ from the cytosol. Altogether, the claimed subtype-specificity of the neuronal-type Cx36GJ inhibitor quinine [6,[27][28][29][30] has been substantiated by molecular modelling of binding at the GJ-coupled, but not in the free, Cx36HC versus Cx43HC model structures (Figure 5). Bulky quinuclidine and quinoline moieties present four stereo-centers and give the impression of stiffness and conformational restraint, leaving the quinine conformation basically unaltered when fitting in the GJ-coupled Cx36HC vestibule. How does quinine get in the vestibule? We may consider steric interactions between non-conserved EL1 residues V54 (A chain) or L58 (B chain) and quinuclidine or quinoline moieties, respectively, forcing quinine to enter the vestibule. In that way, quinine may contact non-conserved N63 (B chain), and conserved C62 (B chain) within 4 Angströms, enabling polar/redox interactions to occur (Figure 5B). By contrast, the matching non-conserved vestibular residues of Cx43 EL1 R53 (A chain), R53 (B chain), Q57 (B chain) and E62 (B chain) are not contacting quinine (Figure 5C). The findings from docking quinine to the extracellular EL1-EL2 interface of GJ-coupled models of Cx36/Cx43 subtypes indicate that quinine preferentially binds to Cx36HC versus Cx43HC. By contrast, quinine does not distinguish between Cx36HC and Cx43HC when docked to the free CxHCs (Figure 5). The "quinine paradox" highlights that quinine subtype-specificity shall depend on conformations acquired during GJ formation from connexon subtypes. In addition, the GJ-coupled state should be open, allowing the diffusion of quinine from the cytosol to the extracellular vestibule.
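As an illustration of the 4 Angström contact analysis described above (the actual analysis was carried out with Schrödinger tools), the following is a Biopython sketch under assumed file, chain and residue names.

# Illustrative sketch (not the authors' workflow): listing protein residues
# within 4 Angstroms of a docked quinine pose using Biopython's NeighborSearch.
# The file name and the ligand residue name "QUI" are assumptions.
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("cplx", "cx36_quinine_pose.pdb")
model = structure[0]

ligand_atoms = [a for a in model.get_atoms()
                if a.get_parent().get_resname() == "QUI"]   # docked quinine
protein_atoms = [a for a in model.get_atoms()
                 if a.get_parent().get_resname() != "QUI"]

search = NeighborSearch(protein_atoms)
contacts = set()
for atom in ligand_atoms:
    for near in search.search(atom.coord, 4.0):             # 4 A cutoff
        res = near.get_parent()
        contacts.add((res.get_parent().id, res.get_resname(), res.id[1]))

for chain_id, resname, resnum in sorted(contacts):
    print(chain_id, resname, resnum)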
The best MM/GBSA ∆G bind scores of peptide mimetic inhibitors after docking to the free Cx36HC or Cx43HC models indicate that most peptides bind to both Cx36 and Cx43 subtypes (Figure 6A), although the Cx43 EL2 sequence mimetic P180-195 seems to be rather Cx36 subtype-specific. Importantly, however, binding surfaces of peptide mimetic inhibitors do not correspond to the sequence mimicked by the particular peptides (Figure 6B,C). In fact, the possible appearance of novel binding hotspots for Cx43 peptide mimetic inhibitors has already been predicted just by the reduced accessibility of the pertinent EL1-EL2 regions (see Section 2.3, Figure 3). Peptides mimicking the EL1 loop (GAP26, P5) or the EL2 loop (GAP27, P180-195, peptide 5) all bind at the inner EL1 surface and the EL1-EL2 interface (Figure 6B,C), irrespective of the derivation of their sequences. Indeed, the subtype specificity of these peptides is also challenged by the fact that the protein segments they mimic are widely shared by other Cx subtypes, as previously introduced. Consequently, these structural and docking data suggest that the rationale behind the design of peptidomimetics may not be valid for connexin gap junction proteins.
Our present understanding is that the particular reduction in the extracellular conformational freedom of connexons during GJ formation increases the connexon subtype-specificity of quinine. The principle predicts the "quinine paradox" (see above) but does not explain the lack of subtype-specificity in the case of the EL1/EL2 sequence mimetic peptides. Checking the position of EL1/EL2 sequences mimicked by peptide mimetics reveals, however, that the GAP26/GAP27 matching segments are buried in the connexon structure both in the GJ-coupled and the free Cx43HC (Figure 3). Of note, the P180-195 matching EL2 segment comprising three basic (K, R, H) and one acidic (D) AAs (cf. Figure 1) may lead to a partially protonated peripheral surface at physiological pH, explaining the relatively weak P180-195 interaction in the Cx43HC setting (Figure 6A). Instead, peptide mimetics seem to dock to the more variable EL1 surface (Figure 6), thus gaining flexible contact areas for binding. Peptide binding to these surface hotspots may be size-dependent but not particularly subtype structure-specific. Conclusions The validation of connexon binding interactions suggests that the design principle of peptide mimetics based on selected primary EL1/EL2 sequences may fail in predicting the mechanism of inhibitory action. Instead, we put forward the new rationale of modelling 3D binding hotspots. Of note, quinoline moieties of the antimalarial therapeutics quinine and hydroxychloroquine may substantiate a search for medications to treat COVID-19 via furthering the awareness of a possible relationship between connexons and COVID-19. To this end, future studies should be set up for both in silico molecular docking of additional antimalarial drugs to the Cx36HC connexon subtype and testing hits against COVID-19.
6,389.6
2020-07-14T00:00:00.000
[ "Chemistry" ]
EEG/ERP evidence of possible hyperexcitability in older adults with elevated beta-amyloid Although growing evidence links beta-amyloid (Aβ) and neuronal hyperexcitability in preclinical mouse models of Alzheimer's disease (AD), a similar association in humans is yet to be established. The first aim of the study was to determine the association between elevated Aβ (Aβ+) and cognitive processes measured by the P3 event-related potential (ERP) in cognitively normal (CN) older adults. The second aim was to compare the event-related power between CNAβ+ and CNAβ−. Seventeen CNAβ+ participants (age: 73 ± 5, 11 females, Montreal Cognitive Assessment [MoCA] score 26 ± 2) and 17 CNAβ− participants group-matched for age, sex, and MoCA completed a working memory task (n-back with n = 0, 1, 2) while wearing a 256-channel electro-encephalography net. P3 peak amplitude and latency of the target, nontarget and task difference effect (nontarget−target), and event-related power in the delta, theta, alpha, and beta bands, extracted from Fz, Cz, and Pz, were compared between groups using linear mixed models. P3 amplitude of the task difference effect at Fz and event-related power in the delta band were considered main outcomes. Correlations of mean Aβ standard uptake value ratios (SUVR) obtained using positron emission tomography with P3 amplitude and latency of the task difference effect were analyzed using the Pearson correlation coefficient r. The P3 peak amplitude of the task difference effect at Fz was lower in the CNAβ+ group (P = 0.048). Similarly, power was lower in the delta band for nontargets at Fz in the CNAβ+ participants (P = 0.04). The CNAβ+ participants also demonstrated higher theta and alpha power in channels at Cz and Pz, but no changes in P3 ERP. Strong correlations were found between the mean Aβ SUVR and the latency of the task difference effect in the 1-back (r = −0.69; P = 0.003) and 2-back (r = −0.69; P = 0.004) tests at channel Fz in the CNAβ+ group. Our data suggest that elevated amyloid in cognitively normal older adults is associated with neuronal hyperexcitability. The decreased P3 task difference likely reflects early impairments in working memory processes. Further research is warranted to determine the validity of ERP in predicting clinical, neurobiological, and functional manifestations of AD. Introduction Alzheimer's disease (AD) is increasingly viewed as a disconnection syndrome leading to reduced communication between brain areas [1,2]. Emerging evidence shows that the reduced neurotransmission is caused by the disturbance of the synaptic excitation/inhibition balance in the brain [1,2]. Even in the preclinical phase when no cognitive impairments are apparent [3], beta-amyloid (Aβ) oligomers and Aβ plaques show associations with this excitation/inhibition imbalance and altered activity of local neuronal circuits and large-scale networks [4]. Preclinical mouse models of AD support the notion that this imbalance causes hyperactivity in hippocampal and cortical neurons and reductions of slow-wave oscillations, even before the appearance of Aβ plaques [4].
Such hyperactivity shifts the normal excitation/inhibition balance towards neuronal hyperexcitability, mediated through both increased excitatory synaptic glutamatergic tone and decreased GABAergic inhibition [4]. This relative neuronal hyperexcitability in turn leads to excitotoxicity [5] and amplification of synaptic release of Aβ [6], ultimately leading to further neurodegeneration and neuronal silencing mediated by concomitant tau accumulation [7]. Previous studies have explained this hyperexcitability as a physiological compensation for the increased Aβ burden in preclinical AD [8][9][10][11], wherein the accumulation of Aβ deposits results in neural recruitment up to a certain threshold, after which the compensatory mechanisms fail. The hyperexcitability is then followed by hypoexcitability due to functional neuronal silencing in clinically diagnosed AD [7]. Electro-encephalography (EEG) offers insights into the postsynaptic activity of pyramidal cells and may therefore be useful for evaluating the impact of Aβ deposits on neuronal excitability in older adults across the spectrum of AD [12]. A systematic review of published studies has shown consistent evidence of hypoexcitability in AD, expressed as reduced power in the high-frequency bands, and lower amplitude and larger latency of event-related potentials (ERP) [12]. The associations between Aβ and neuronal excitability in mild cognitive impairment (MCI) and preclinical AD are less clear. One resting-state EEG study including older adults with subjective memory impairments has found a non-linear relationship between Aβ and delta power in those individuals who showed signs of neurodegeneration, but not in those with a normal-appearing brain [11]. Another study has shown that older adults with increased Aβ load and subjective cognitive impairments exhibit greater connectivity in the alpha band and reduced connectivity in the beta band [13]. Studies evaluating excitability under cognitive load in cognitively normal (CN) older adults are even more limited. In a previous study, the event-related spectral power in the alpha and beta bands, extracted while doing a working memory task (2-back), was higher in older adults with unknown Aβ status who showed deterioration in an 18-month follow-up assessment compared to CN participants who remained stable [14]. Although previous work suggests neuronal hyperexcitability in preclinical AD, possible changes in the event-related power in cognitively normal, amyloid-elevated (CNAβ+) older adults are yet to be established. Previous research suggests that the changes in spectral frequency due to increased amyloid burden reflect initial compensatory processes to maintain normal cognitive function [8,9,11]. However, it is unclear how neuronal hyperexcitability affects the efficiency of cognitive processing. ERPs offer unique insights into the neural processes of working memory under cognitive load. The P3 (or P300) is a positive ERP that appears at around 300 ms after stimulus onset. The amplitude of P3 is generally considered as a measure of resource allocation, particularly during working memory tests [15]. Higher cognitive demands result in decreased P3 amplitudes and longer latencies [16]. The attenuated P3 amplitude with increased cognitive demand is explained by the reallocation of resources away from the stimulus discrimination task towards processes that are more responsible for the higher demands posed on working memory, such as information storage and updating [17].
The P3 component can be isolated by discriminating the frequent nontarget from the infrequent target. This task difference effect reflects frontal lobe activity that is sensitive to the attentional demands induced by the task [18]. Larger task difference effects reflect more efficient discrimination ability in the stimulus evaluation process. The main aim of this study was to compare the physiological response during working memory tasks of incremental cognitive demand between CN older adults with and without increased Aβ load. We hypothesized that CNAβ+ participants would show decreased P3 amplitude of the task difference effect compared to CN nonelevated (CNAβ−) participants. In a previous study [19], we established the reliability of the P3 ERP of the task difference effect at Fz in older adults with and without cognitive impairments. Therefore, we predesignated the P3 amplitude of the task difference effect at channel Fz as the main outcome variable, but also calculated the amplitude and latency of nontarget and target responses at channel Fz and additional midline channels Cz and Pz. The second aim of this study was to compare the event-related power between CNAβ+ and CNAβ− participants. Due to the early breakdown of slow-wave frequency bands shown in animals with preclinical AD [4], we expect lower event-related power in the delta band in CNAβ+. Since hypoexcitability in clinically diagnosed AD manifests as decreased power in the higher frequency bands, we expect increased power in the alpha and beta bands to reflect hyperexcitability in CNAβ+. Finally, we explored the association between Aβ uptake and P3 peak amplitude and latency of the task difference effect. Participants All participants were recruited from the University of Kansas Alzheimer's Disease Center between May 30, 2018 and July 20, 2020. Participants were excluded if they (1) were currently taking steroids, benzodiazepines, or neuroleptics; (2) had a history of any substance abuse; (3) had a history of a neurological disorder; or (4) had any contra-indications to positron emission tomography (PET) or EEG. The inclusion criteria were (1) age of 65 years or older; (2) understanding all instructions in English; (3) having given informed consent; and (4) a previously administered amyloid PET scan of the brain. The cerebral amyloid burden was assessed using PET images, obtained on a GE Discovery ST-16 PET/CT scanner after administration of intravenous Florbetapir 18 F-AV45 (370 MBq) following a previously published protocol [20]. To determine the Aβ status, three experienced raters interpreted all PET images independently and without reference to any clinical information, as previously described [21]. The final status was determined by the majority of raters as Aβ− versus Aβ+, using a process that combined both visual and quantitative information [22,23]. The median (Q1-Q3) time between the PET scan and EEG assessment was 1111 (794-1675) days. Demographic and clinical information We recorded information of age, sex, education, race, and ethnicity of participants. The participants also completed the Montreal Cognitive Assessment (MoCA) as a general screen of cognitive functions, which was carried out by a member of the research team who was blinded to the group allocation [24]. 
Normal cognition was confirmed after a clinical assessment performed around the time of the PET scan at the University of Kansas Alzheimer's Disease Center, which included the Clinical Dementia Rating [25] and Uniform Data Set Neuropsychological Battery [26]. All participants reported being right-hand dominant. N-back test In the n-back test, EEG was recorded while the participants were shown a series of letters and instructed to press a button if the current stimulus was the same as the item presented n positions back (Fig. 1). The cognitive load increases with increasing n, but the perceptual and motor demands remain the same. In this study, the 0-back, 1-back, and 2-back tests were administered (Fig. 1). The 0-back test was used as the control condition [27,28]. The 1-back test requires the participant to passively store and update information in working memory. The 2-back test requires constant switching from the focus of attention to short-term memory [27]. Higher levels of difficulty require continuous mental effort to update information of new stimuli while maintaining representations of recently presented stimuli [29]. During the test, participants sat in a comfortable chair at 26 inches in front of a computer screen with the center of the screen at eye level. White letters appeared on the black screen. Participants completed a practice trial of 3 targets and 7 nontargets prior to each test. These practice sessions were repeated until the participants felt comfortable with the instructions. The actual test consisted of 60 trials that required a response by pressing the left mouse button (target, 33.3%) with their right index finger and 120 trials for which a response was not required (nontarget, 66.7%). Each letter was presented for 500 ms on the computer screen followed by a blank interstimulus interval of 1700 ms, with a random jitter of ± 50 ms. The allowed maximum response time was 2150 ms. Fig. 1 Design of the n-back test. ISI, interstimulus interval. The total task time was about 400 s. The number of correct responses (accuracy) and response times in the correct response trials were taken as the main behavioral outcome measures. P3 ERP and event-related power Continuous EEG was acquired using a Magstim EGI high-density system from 256 scalp electrodes, digitized at 1000 Hz. Data were online referenced to Cz and filtered using a 30 Hz low-pass filter and a 0.5 Hz high-pass filter in the EGI software. Although Kappenman and Luck recommend 0.10 Hz as the high-pass filter for EEG systems in P3 ERP studies [30], we used 0.5 Hz to account for the minimum high-pass filter threshold of 0.3 Hz set by the EGI system and to minimize the roll-off effect. All other EEG processing was done in EEGLab [31] and in ERPLab [32]. Recordings from electrodes around the face were first removed, leaving 183 electrode channels in the processing pipeline. Bad channels were removed through automatic identification and visual inspection of the EEG data. Various artifacts unrelated to cognitive functions, including ocular and muscular movement or cardiovascular signals, were identified and removed using independent component analysis. The stimulus-locked ERPs were extracted from the n-back tests, segmented into epochs of 100 ms before to 1000 ms after the stimulus onset, and baseline-corrected using the prestimulus interval. Epochs of incorrect and missed responses were removed from the analyses. Signals from bad electrodes were then interpolated using surrounding electrode data.
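As an illustration of the epoching and baseline-correction step described above (performed in EEGLab in the actual study), the following is a schematic numpy sketch; the array shapes, channel count and event sample indices are assumptions.

# Schematic sketch of epoch extraction and baseline correction, not the
# published EEGLab/ERPLab pipeline.
import numpy as np

fs = 1000                                        # sampling rate, Hz
eeg = np.random.randn(183, 600_000)              # channels x samples (continuous data)
event_samples = np.arange(5000, 590_000, 2200)   # stimulus onsets (placeholder)

pre, post = int(0.1 * fs), int(1.0 * fs)         # -100 ms to +1000 ms around onset
epochs = np.stack([eeg[:, s - pre:s + post] for s in event_samples])

baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
epochs -= baseline                               # baseline-correct with prestimulus mean
print(epochs.shape)                              # (n_trials, 183, 1100)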
Scalp locations and measurement windows for the P3 component were determined based on their spatial extent and latency after inspection of grand average waveforms. P3 peak amplitude of the task difference effect was considered the main outcome variable. The task difference effect was calculated by subtracting the average ERP elicited by targets from the average ERP elicited by nontargets (nontarget-target) for each participant. We also calculated P3 peak latency of the task difference effect as well as P3 peak amplitude and latency of the targets and nontargets. The P3-component time-window was established between 250 and 650 ms for all three tests. The average event-related power was identified in four frequency bands: delta (2-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (12-30 Hz) [33]; a schematic sketch of this extraction is given below. Because of the involvement of the prefrontal cortex in working memory, we analyzed P3 ERP from Fz, but also from Cz and Pz sites. Cz was interpolated using the surrounding five channels after re-referencing offline to the linked mastoids. No participants were excluded from analyses due to artifacts. Data analysis Descriptive analysis including mean (standard deviation), median (Q1-Q3), and frequency count of participants' general demographics, performance measures, and ERP data was performed as appropriate. Unpaired t-tests, Median tests, and Chi-square tests were used to compare descriptive variables and performance in cognitive tests. We fitted linear mixed models to determine the effect of Aβ on P3 and event-related power at channel Fz. We used a random intercept term with a subject-specific coefficient to adjust for correlation between measures within subjects. Group (CNAβ+ and CNAβ−) and n-back difficulty (0, 1, 2) were entered as main effects. Interaction effects of group × n-back were also examined. Bonferroni correction was applied for pairwise comparisons. Residual analysis was used to validate model assumptions. Variables were log-transformed when residuals were not normally distributed. We entered age, sex, education, and MoCA as potential covariates in a separate linear mixed model. These analyses were repeated for channels Cz and Pz. In addition, linear mixed models were employed to investigate the main effects of group and condition (n-back) on the average event-related power in the delta, theta, alpha, and beta bands, and on performance in the n-back tests (response time and accuracy). Correlations of the mean Aβ standard uptake value ratio (SUVR) and the SUVR of six predefined regions (anterior cingulate, posterior cingulate, precuneus, inferior medial frontal, lateral temporal, and superior parietal cortex) with the P3 peak amplitude and latency of the task difference (nontarget-target) in each n-back test at channels Fz, Cz, and Pz were analyzed with the Pearson r correlation coefficient. P < 0.05 was considered significant. Analyses were performed using SAS 9.4 and SAS Enterprise Guide 8.2 software. We first analyzed differences in the accuracy and response time in the n-back test (Table 1). The linear mixed models showed no main group effects on response time (P = 0.36) and accuracy (P = 0.91). P3 grand average waveforms The grand average peak P3 amplitudes of the task difference effect (nontarget-target) of the two groups for each n-back condition at channels Fz, Cz, and Pz are shown in Additional file 1: Table S1.
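Returning to the P3 and event-related power extraction described above, the following is a schematic sketch (not the published EEGLab/ERPLab and SAS pipeline) of computing the task difference effect, its P3 peak amplitude and latency in the 250-650 ms window, and band power; the array contents and Welch parameters are placeholder assumptions.

# Schematic sketch of task-difference P3 peak extraction and band power.
import numpy as np
from scipy.signal import welch

fs = 1000
times = np.arange(-100, 1000)                      # ms, matching the epoch window
target_erp = np.random.randn(len(times))           # average ERP at Fz, targets
nontarget_erp = np.random.randn(len(times))        # average ERP at Fz, nontargets

diff_wave = nontarget_erp - target_erp             # task difference effect
win = (times >= 250) & (times <= 650)              # P3 window
p3_amp = diff_wave[win].max()
p3_lat = times[win][diff_wave[win].argmax()]       # ms

freqs, psd = welch(diff_wave, fs=fs, nperseg=512)
bands = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}
power = {name: psd[(freqs >= lo) & (freqs < hi)].mean()
         for name, (lo, hi) in bands.items()}
print(p3_amp, p3_lat, power)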
Figure 2 shows that the task difference effect of the peak amplitude at channel Fz was lower in CNAβ+ compared to CNAβ− (P = 0.048, P = 0.05 after adjusting for age, sex, and MoCA scores). No other effects were found for peak amplitude. Additional file 1: Table S2 shows that the P3 latency of the task difference effect at channel Fz was sensitive to changes in cognitive demand (non-adjusted P = 0.047; adjusted P = 0.05). The grand average waveforms of the targets and nontargets at channel Fz of both groups are depicted in Fig. 3. Linear mixed model analysis revealed shorter P3 latency for nontargets (non-adjusted P = 0.006; adjusted P = 0.006) at channel Fz in CNAβ+ (Additional file 1: Table S2). No other effects were found at channels Cz and Pz, except for the peak latency of the nontargets at channel Cz that produced significant group effects (non-adjusted P = 0.04; adjusted P = 0.04). P3 event-related power Power in each of the frequency bands for each of the three n-back conditions at channels Fz, Cz, and Pz is detailed in Additional file 1: Table S3. At channel Fz, the CNAβ+ participants exhibited lower power in the delta band for nontargets (unadjusted P = 0.04; adjusted P = 0.08, with age [P = 0.01] and MOCA scores [P = 0.007] contributing significantly to the model), compared to the CNAβ− participants. At channel Cz, the CNAβ+ participants exhibited higher power in the theta band for the task difference effect (unadjusted P = 0.05; adjusted P = 0.09). In addition, higher power was observed in the alpha band for nontargets (unadjusted P = 0.05; adjusted P = 0.03), targets (unadjusted P = 0.04; adjusted P = 0.03), and the task difference effect (unadjusted P = 0.03; adjusted P = 0.09, with age [ P = 0.02] contributing significantly to the model). At channel Pz, the CNAβ+ participants exhibited higher power in the theta band for nontargets (unadjusted P = 0.05; adjusted P = 0.07) and task difference effect (unadjusted P = 0.03; adjusted P = 0.11). Likewise, higher power in the alpha band was observed in the CNAβ+ group (unadjusted P = 0.03; adjusted P = 0.03). Analyses of the beta band did not show significant effects. Correlation between amyloid and P3 ERP The correlation table shows stronger correlation of SUVR with ERP latency than with amplitude of the task difference effect (Fig. 4). Absolute Pearson r correlation coefficient of 0.53 and higher indicates significant correlation (P < 0.05). The mean Aβ SUVR of the Overall, the magnitude of correlations between Aβ SUVR and P3 peak amplitude and latency was smaller at channels Cz and Pz than at Fz in CNAβ+. Aβ SUVR also correlated with the P3 peak amplitude of several n-back conditions in the CNAβ− group (Fig. 3b). SUVR in the superior parietal cortex correlated negatively with P3 peak amplitude of the 0-back test at channel Fz (r = − 0.69; P = 0.007) and of the 1-back test at channel Cz (r = − 0.69; P = 0.006). SUVR of the posterior cingulate cortex correlated with P3 peak amplitude of the 1-back test at Pz (r = − 0.58; P = 0.03). The mean SUVR correlated with P3 peak latency of the 2-back test at Fz (r = − 0.56; P = 0.04) and Cz (r = − 0.62; P = 0.01) and of the 1-back test at Pz (r = − 0.69; P = 0.006). Similar magnitudes of correlation were observed for subregions anterior cingulate cortex, inferior medial frontal lobe, posterior cingulate cortex, and precuneus. Discussion The goal of this study was to compare neuronal excitability during working memory of incremental cognitive demand between CNAβ+ and CNAβ− older adults. 
We demonstrated differences in the P3 ERP (decreased peak P3 ERP of the task difference) as well as changes in the event-related power (lower power in the lowfrequency bands [delta] and higher power in the midrange-frequency bands [theta, alpha]) in CNAβ+ adults, compared with CNAβ−. Cognitive load was not associated with the differences in P3 ERP amplitude between the two groups. In addition, we found strong correlations between Aβ deposits in cortical brain regions and P3 ERP. These findings point towards evidence of hyperexcitability in CNAβ+. However, this hyperexcitability did not appear to affect behavioral performance as no differences were found in accuracy and response times on the n-back test. Our study demonstrated lower delta event-related power in the frontal midline channel, along with an increase in theta and alpha event-related power in the central and parietal midline channels in CNAβ+. These results confirm the preclinical AD animal model studies showing that hyperexcitability is related to early breakdown of low-frequency waves [4]. The increased event-related power in alpha and theta frequencies in CNAβ+ contrasts the changes in event-related power found in older adults with cognitive impairments. While an increase has been found in absolute theta power [34][35][36], the event-related theta power was significantly lower in response to cognitive load in MCI and AD compared to controls, reflecting hypoexcitability [36]. The combined lower event-related delta power and higher eventrelated alpha and theta power suggest that CNAβ+ older adults may exhibit neuronal hyperexcitability. A previous study investigating resting-state spectral power has classified CN older adults with subjective memory complaints according to their Aβ burden (+ or −) and associated neurodegeneration (+ or −) into four respective categories, and found a U-shaped distribution in delta power and an inverse U-shaped distribution in gamma power, most pronounced in CN individuals with signs of neurodegeneration [11]. In addition, the presence of neurodegeneration is associated with a decrease in lower-frequency waves (delta) and an increase in higher-frequency waves (beta and gamma) in the fronto-central regions [11]. Yet, there are no associations between Aβ load and spectral power in the absence of neurodegeneration. The similarities in spectral power in the different frequency bands between the previous study [11] and ours imply that our group of CNAβ+ participants may have shown early signs of neurodegeneration. However, we cannot confirm this assumption as we did not formally assess neurodegeneration. Another potential explanation is that changes in power may appear earlier in the disease process (i.e., in Aβ+ with no neurodegeneration) under cognitive load as opposed to the resting state. Although no interaction effects of group by n-back were found, visual inspection of the P3 waveforms showed that the differences between CNAβ+ and CNAβ− were most obvious under highest cognitive load. However, our study may have been underpowered to elicit these differences statistically. Although older adults with elevated amyloid may exhibit neuronal hyperexcitability, these compensatory processes do not result in more efficient neural processes. Our results showed that the CNAβ+ group exhibited a smaller P3 amplitude of the task difference effect, suggesting less efficient stimulus processing compared to CNAβ−. 
The absence of a clear effect of task difficulty on P3 amplitude and latency in the 2-back test in elevated amyloid also implies a lack of appropriate reallocation of cognitive resources away from stimulus evaluation. These findings, along with the non-significant differences in behavioral outcomes, suggest that the hyperexcitability is a non-functional compensation on neural level due to increased Aβ deposition, and may result in less efficient cognitive processing of working memory. Our results suggest a direct link between average and regional Aβ burden and electrophysiological activity, particularly in the frontal cortex in CNAβ+. This is consistent with animal studies that found hyperactive neurons exclusively around the Aβ plaques [5], suggesting that Aβ exerts toxic effects on surrounding neurons and synapses, thereby disturbing their function and perhaps leading to dementia [37]. In particular, soluble Aβ oligomers have been shown to affect neuronal excitability in animal models and in vitro in humans [38]. However, no causal inferences can be made from our results. Longitudinal studies are required to identify the effect of Aβ burden on the relative postsynaptic excitation, and the role of P3 as a biomarker of pathophysiological, clinical, and functional decline. Future studies should investigate whether a relative increase in excitatory neurotransmitters, particularly glutamate, drives the link between Aβ burden and P3 ERP in preclinical AD. If confirmed, EEG metrics may be used as endpoints for mechanistic studies evaluating hypotheses related to autophagy [39], mitophagy [40], and selective neuronal vulnerability of AD [41], and for translational intervention studies aiming to reduce Aβ burden with pharmacological treatment [42], behavioral interventions (e.g., exercise), etc. [43] Limitations of this study include the relatively small sample size and the long interval between PET scan and EEG testing. We cannot rule out the possibility that some CNAβ− participants might have converted to CNAβ+. The projected conversion rate from Aβ− to Aβ+ is about 4% per year [44], showing stability of cortical Aβ in the vast majority of older adults. However, we plan to conduct a future study where the PET scan and EEG assessment are conducted close in time. In addition, two participants in the CNAβ+ group scored below 26 on MoCA, which may indicate a change in cognitive status since their comprehensive cognitive assessment. Therefore, our results should be interpreted with caution. We also corrected for multiplicity by design. To account for the multiplicity, we designated in advance of the study a single linear mixed model with the task difference effect of P3 ERP as our primary result. All other tests were designated as secondary and presented in full to provide complete transparency. However, we did implement standard multiplicity adjustments within our linear mixed models. We chose the n-back test to test our hypotheses as working memory is regarded a core cognitive function sensitive to aging and early neurodegeneration, upon which higher-order cognitive skills, such as attention, decision making, and planning are built [45]. Our results are therefore unique to working memory and cannot be generalized to other domains of cognitive functions that are relevant to AD. Conclusion Older adults with normal cognition and elevated Aβ show neuronal hyperexcitability under cognitive load. This hyperexcitability affects cognitive processes indexed by the P3 ERP. 
Future studies are required to elucidate the causal relationship between Aβ deposition and neuronal excitability.
5,982.2
2022-02-09T00:00:00.000
[ "Psychology", "Biology" ]
The Behaviour of Some Acylthiosemicarbazides in the Reaction with α-Halogenated Esters ŞTEFANIA-FELICIA BARBUCEANU1*, GABRIELA LAURA ALMAJAN1, IOANA SARAMET1, CONSTANTIN DRAGHICI2, CRISTIAN ENACHE3 1 University of Medicine and Pharmacy “Carol Davila”, Department of Organic Chemistry, 6 Traian Vuia Str., 020956, Bucharest, Romania 2 Centre of Organic Chemistry “Costin D. Neniţescu”, Romanian Academy, 202B Splaiul Independenţei, 060023, Bucharest, Romania 3 Central Laboratory for Fito-Sanitary Quarantine, 11 Şos. Afumaţi, Bucharest, Romania The spectral data and the elemental analysis of the products obtained ruled out the formation of thiazolidin-4-ones and instead indicated the formation of compounds belonging to the class of 2-amino-substituted 1,3,4-oxadiazoles. Experimental part The melting points of the obtained compounds were determined with a Böetius apparatus and are not corrected. The IR spectra were recorded in KBr pellets with an FTS-135 BIORAD Fourier-transform spectrophotometer in the range 4000-400 cm-1, and the UV spectra with a SPECORD 40 Analytik Jena spectrophotometer within the range 200-600 nm. The NMR spectra were recorded with a Varian Gemini 300BB apparatus, at 300 MHz for 1H-NMR and 75 MHz for 13C-NMR. DMSO-d6 with a minimum deuteration grade of 99% was used as solvent, and tetramethylsilane (TMS) as internal standard. The mass spectra were recorded with a Varian 1200 L triple quadrupole mass spectrometer coupled with a high-performance liquid chromatograph equipped with a Varian ProStar 240 pump and a Varian ProStar 410 automatic injector. Ions were obtained using an electrospray interface (ESI) or atmospheric pressure chemical ionization (APCI). The solvent used was DMSO; the liquid chromatography was performed on a Hypersil Gold (Thermo) column with pre-column, and the mobile phase was 30% water and 70% methanol. Synthesis of 5-[4-(4-X-phenylsulfonyl)phenyl]-2-(2/3-methoxyphenylamino)-1,3,4-oxadiazoles 4,5a-c 1 mmol of the appropriate thiosemicarbazide 2 or 3 and 1.1 mmol of ethyl chloro- or bromoacetate were refluxed in 50 mL of absolute ethanol in the presence of 4 mmol of anhydrous sodium acetate for 11 h. The reaction mixture was cooled, diluted with water and allowed to stand overnight. The resulting precipitate was filtered, washed with water and, finally, with ethyl ether. The obtained compounds were recrystallized from ethanol. The 1H-NMR and 13C-NMR spectral data are presented in tables 1 and 2. 5-[4-(4-Bromophenylsulfonyl)phenyl]-2-(3-methoxyphenylamino)-1,3,4-oxadiazole: the 1H-NMR and 13C-NMR spectral data are presented in tables 1 and 2. For the confirmation of the structure of these compounds, the 1,3,4-oxadiazole 5a was also synthesized through the cyclodesulfurization reaction of acylthiosemicarbazide 3a with mercury(II) acetate, according to the reaction: Results and discussions The formation of compounds of the 1,3,4-oxadiazole class from acylthiosemicarbazides with ethyl chloro- or bromoacetate and anhydrous sodium acetate was also observed by other authors [18][19][20].
The transformation of acylthiosemicarbazides into 1,3,4-oxadiazoles or thiazolidin-4-ones is influenced by the nature of the substituent linked to the nitrogen atom in position 4. If this substituent is of aryl type, 1,3,4-oxadiazoles are obtained predominantly. If the substituent is a less bulky alkyl radical, thiazolidin-4-ones may be obtained [18][19]. There are cases in which, although the substituent is an aryl radical, the products obtained are thiazolidin-4-ones, or cases in which, although the substituent is a bulky alkyl radical, 1,3,4-oxadiazoles are obtained [20]. The reaction mechanism implies, in the first stage, the formation of an intermediate alkylated at the sulphur atom of the thiol tautomeric form of the acylthiosemicarbazides. The intramolecular cyclization of this intermediate may take place either with the elimination of one molecule of ethanol by nucleophilic attack of the NH group at the carbon atom of the ester carbonyl group, or with the elimination of one molecule of ethyl mercaptoacetate by nucleophilic attack of the OH group at the carbon atom linked to the alkylated sulphur atom [21] (fig. 2). In this case, the cyclization with elimination of ethanol is not possible because of the low nucleophilicity of the aryl NH group; consequently, the reaction of acylthiosemicarbazides 2,3a-c with ethyl chloro- or bromoacetate in the presence of anhydrous sodium acetate follows pathway (b), the final products being the 1,3,4-oxadiazoles 4,5a-c. The IR Spectra In the IR spectra of compounds 4,5a-c the absorption band due to the stretching vibration of the C=O group of thiosemicarbazides 2 and 3 (1675-1702 cm-1) is not found. Besides, the absence of the absorption band due to the vibration of the C=O group of the thiazolidin-4-one nucleus leads to the conclusion that the reaction products do not contain a carbonyl group. The synthesized compounds present one single absorption band at 3305-3438 cm-1, characteristic of the NH stretching vibration, in contrast with the thiosemicarbazides 2,3a-c, which present two or three bands in the range 3157-3443 cm-1. In the region 1624-1627 cm-1 a band of high intensity appears, due to the C=N stretching vibration, which is characteristic of 2-amino-substituted 1,3,4-oxadiazoles [22]. The NMR Spectra In the 1H-NMR spectra of compounds 4,5a-c two subspectra are present, characteristic of the diarylsulfone fragment [14,16] and of the 1,3,4-oxadiazole ring. In the 1H-NMR spectra of these new compounds the signal attributed to the two protons of the methylene group of the thiazolidin-4-one is not present, which proves that these compounds have not been obtained. The 13C-NMR spectra of compounds 4,5a-c do not present any signal characteristic of the C=O carbon atoms (-CONH-N= or C=O of the thiazolidinone nucleus) of the thiazolidin-4-one molecule. Besides, the signal corresponding to the S-CH2 carbon atom of the thiazolidin-4-ones, at ~33 ppm, is not present either [19]. One can notice the appearance of the signal attributed to the quaternary carbon atom C-2 of the 1,3,4-oxadiazole nucleus at 159.93-161.18 ppm, while the carbon signal attributed to the thionic group (~181 ppm) is no longer present. Instead of the signal corresponding to the amidic carbonyl group of the thiosemicarbazides, which appears at ~164 ppm, one may observe a new signal at ~157 ppm that belongs to the carbon atom C-5 of the 1,3,4-oxadiazole ring [23,24] (table 2).
The mass spectra of compounds 4b, 4c and 5a confirm the formation of these 2-amino-substituted 1,3,4-oxadiazoles. Conclusions The purpose of this paper was to obtain new compounds of the 1,3-thiazolidin-4-one class through the reaction of acylthiosemicarbazides with ethyl chloro- or bromoacetate and anhydrous sodium acetate. The spectral data and the elemental analysis rule out the formation of these compounds and instead indicate a structure of the 2-R-amino-1,3,4-oxadiazole type, confirmed also by the alternative synthesis of oxadiazole 5a from thiosemicarbazide 3a and mercury(II) acetate.
1,510.6
2008-04-09T00:00:00.000
[ "Chemistry" ]
Online Boosting Algorithm Based on Two-Phase SVM Training We describe and analyze a simple and effective two-step online boosting algorithm that allows us to utilize highly effective gradient descent-based methods developed for online SVM training without the need to fine-tune the kernel parameters, and we show its efficiency by several experiments. Our method is similar to AdaBoost in that it trains additional classifiers according to the weights provided by previously trained classifiers, but unlike AdaBoost, we utilize hinge loss rather than exponential loss and modify the algorithm for the online setting, allowing for a varying number of classifiers. We show that our theoretical convergence bounds are similar to those of earlier algorithms, while allowing for greater flexibility. Our approach may also easily incorporate additional nonlinearity in the form of Mercer kernels, although our experiments show that this is not necessary for most situations. The pre-training of the additional classifiers in our algorithm allows for greater accuracy while reducing the times associated with the usual kernel-based approaches. We compare our algorithm to other online training algorithms, and we show that, for most cases with unknown kernel parameters, our algorithm outperforms the other algorithms both in runtime and convergence speed. Online Training Methods. During recent years, there has been an increasing amount of interest in the area of online learning methods. Such methods are useful not only for settings where a limited number of samples is being fed sequentially into a training system, but also for systems where the amount of training data is too large to fit into memory. In particular, several methods for online boosting and online support vector machine (SVM) training have been proposed. However, those methods have several limitations. The online boosting algorithms, such as [1] or [2], are usually limited to a fixed number of classifiers, while the online SVM training methods employing Mercer kernels [3,4] may not converge well in cases where an inappropriate kernel was chosen. Furthermore, kernel-based SVMs usually have significant storage and computational requirements due to the large number of kernel expansion terms. In this paper, we exploit the similarity between the mathematical descriptions of boosting and support vector machines to create a two-stage online boosting algorithm with a variable number of classifiers that can exhibit greater flexibility than kernel-based SVMs while having smaller computational costs. Our method uses Pegasos, a Stochastic Gradient Descent- (SGD-) based SVM training method introduced in [4], to produce both a set of weak classifiers and the boosting weights for combining them into a strong classifier. Using this kind of training algorithm allows us to utilize the solid theoretical background and well-defined convergence rate of SGD algorithms, as well as the increased accuracy of the classifier produced by boosting. We utilize a simplified variant of Pegasos with the parameter k set to 1 on both stages. Our algorithm uses three parameters: λ H and λ, corresponding to the regularization parameters in the SGD algorithm, and an additional parameter r that defines how long each additional classifier is trained and may be defined in several ways.
The resulting algorithm is simple and has low computational and storage costs, which allow it to be easily incorporated into real-time applications. 1.2. Related Work. Our algorithm is closely related to algorithms used for SVM training and boosting. These algorithms, taken together, can be separated into several classes: 1.2.1. Support Vector Machines. Support vector machines are a class of linear or kernel-based binary classifiers that attempt to maximize the minimal distance (margin) between each member of the class and the separating surface. Such maximization was shown to give near-optimal levels of generalization error [5]. In most cases, when dealing with kernels, the task of learning a support vector machine is cast as a constrained quadratic programming problem. Methods that deal with such a problem usually need access to all labeled samples at once and require about O(m²) operations, where m is the number of samples. Several approaches to the solution exist, such as interior point methods [6] and decomposition methods, such as SMO [7]. Also, recently, interest in gradient-based methods for solving the primal SVM problem has risen drastically. These methods exhibit a convergence rate independent of the number of samples, which is particularly useful for large datasets, and only need one or several samples for a single iteration, which lends itself well to the online setting. The main method that is used as a basis for our algorithm is Pegasos [4], which is a modified SGD method with an added projection step, although more generic algorithms such as NORMA [3] can easily be used in its place. Boosting Algorithms. Boosting is a meta-algorithm for supervised learning that combines several weak classifiers, that is, classifiers that can label examples only slightly better than random guessing, into a single strong classifier with arbitrary classification accuracy. One of the most successful and well-known examples is AdaBoost [8] and its variants, like LPBoost. The convergence properties of AdaBoost have been carefully analyzed [9], and it has been used for such problems as text recognition, filtering, face recognition, and feature selection. Boosting algorithms that employ a linear combination of weak classifiers to form the confidence function of the strong classifier were shown to be closely related to the primal formulation of support vector machines [10]. As in the case of SVMs, many boosting methods were designed for the offline setting, where all of the training examples are given a priori, and share the same set of problems when dealing with larger datasets. Recently, there have been several successful approaches [1,2] designed to shift boosting to the online setting; however, they assume that the number of weak classifiers in the boosting process stays the same, which limits the flexibility of their methods. Outline of the Paper. The remainder of this article is organized as follows. In Section 2, we give the description of our algorithm. In Section 3, we compare it to several existing online training methods in terms of computational cost and flexibility. Finally, we present our conclusion and further directions of research.
Algorithm Description 2.1. Overview of SVM and Pegasos Algorithm. Support vector machines, developed by Cortes and Vapnik [11], are a useful classification tool that attempts to construct a hyperplane separating the data with the largest possible distance to any point of either class (see Figure 1). The original formulation assumed the data to be linearly separable, with no noise. Later, a loss term was added to account for noisy data. The most widely used loss function for SVM training is the hinge loss l(f(x), y) = max(0, ρ − y f(x)), (1) where ρ is called a margin parameter, y is the label of a sample, and f is the classification function of the SVM, sometimes also used as a confidence measure. For linear SVMs, f is a simple inner product of the input and weight vectors, while for SVMs using the kernel trick for nonlinear classification f(x) = Σ_i α_i k(x_i, x) + b, (2) where k(•, •) is a kernel function satisfying Mercer's condition and b is a bias term. Several methods exist for solving both the primal and the dual formulations of the SVM optimization problem. In this article, we employ the primal estimated subgradient solver (Pegasos) method, as described in [4], with some modifications. The primal form of the SVM problem is a minimization problem with an added regularization term. Formally, for a reproducing kernel Hilbert space H with kernel k : R^n × R^n → R, and a set of vectors X ∈ R^n with corresponding labels y : X → {−1; 1}, find the function f such that f = argmin_{f ∈ H} [(λ/2)‖f‖_H² + (1/m) Σ_{x ∈ X} l(f(x), y(x))]. (3) Algorithm 1: The Pegasos algorithm with weighted samples. In the linear case, the function f can be represented as f(x) = ⟨α, x⟩, where ⟨•, •⟩ denotes an inner product, and the equation becomes α = argmin_α [(λ/2)‖α‖² + (1/m) Σ_{x ∈ X} max(0, ρ − y(x)⟨α, x⟩)]. (4) In our work, we also incorporate a bias term b by substituting x with x_e = {x, 1}. In this case, α ∈ R^{n+1}. To simplify notation, we assume that all input vectors have been extended in such a way, and simply use x. In the online setting, only one of the sample-label pairs is available at a time. There exist several algorithms to deal with such a problem that have well-established convergence bounds. Amongst them, methods using stochastic gradient descent (or, in the case of the hinge-loss function, subgradient descent) are the most prominent. In this paper, we choose the Pegasos algorithm for its rapid convergence and simplicity of implementation. Here, we only present the algorithm with our modification for weighting samples; for a detailed analysis please refer to the original article [4]. On each iteration, the Pegasos algorithm is given the iteration number t, the regularization coefficient λ (regulating how "soft" the resulting margin is, i.e., whether priority is given to a low error rate over the training set or to a larger margin between classes), and a sample vector x with weight w and label y, and it updates the vector α as shown in Algorithm 1. The only difference from [4] is the addition of the weighting term w, the significance of which will be explained below. The number of simultaneous input samples k, used in the original Pegasos algorithm, is taken to be 1, reducing Pegasos to an SGD algorithm with an additional projection step. Overview of AdaBoost. AdaBoost [8] is an algorithm that iteratively adds weighted weak classifiers h_i to obtain a strong classifier H. Each additional classifier is selected from the pool of available classifiers to minimize the weighted error rate over the training samples. This error rate was also used to calculate the weight of h_i in H and to update the training sample weights in such a way that the next classifier would favor samples misclassified by the current strong classifier and ignore the ones classified correctly.
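Returning to the weighted single-sample update of Algorithm 1 described in Section 2.1 above, a rough sketch of one Pegasos iteration with margin parameter ρ = 1 is given below. It follows the standard Pegasos learning-rate schedule and projection step, with the sample weight w as the modification discussed above; variable names are illustrative and this is not the authors' exact implementation.

import numpy as np

def pegasos_step(alpha, x, y, w, t, lam):
    """One weighted Pegasos iteration (k = 1): a subgradient step on the hinge loss
    followed by projection onto the ball of radius 1/sqrt(lam)."""
    eta = 1.0 / (lam * t)                      # learning rate at iteration t (t >= 1)
    if y * np.dot(alpha, x) < 1.0:             # margin violated: loss subgradient contributes
        alpha = (1.0 - eta * lam) * alpha + eta * w * y * x
    else:                                      # only the regularization term contributes
        alpha = (1.0 - eta * lam) * alpha
    norm = np.linalg.norm(alpha)               # projection step of Pegasos
    if norm > 1.0 / np.sqrt(lam):
        alpha = alpha / (norm * np.sqrt(lam))
    return alpha

# usage (illustrative): alpha = pegasos_step(alpha, x_t, y_t, w=1.0, t=t, lam=0.02)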
Like most learning algorithms of that time, AdaBoost was designed for offline (batch) training, with the error rate estimated over all available samples. There are, therefore, several difficulties involved in employing boosting as an online training algorithm, several of which were mentioned in [1]. Their solution was to use a limited number of meta-classifiers, called selectors, each selecting from a pool a single weak classifier with the least estimated error, combined into H. Both the weak classifiers and the selectors were updated each iteration. The limited number of selectors and features allowed for a simplified online algorithm. Their algorithm, however, had two potential problems: one being that the limited number of selectors limited the flexibility of the classifier, the second being that the effect constant updates have on the error rate of the weak classifiers was never addressed. In our work, we note that the confidence function of the strong classifier in AdaBoost can be expressed in the same form as the objective function f of an SVM: F(h(x)) = ⟨β, h(x)⟩, where h(x) is the vector of weak classifier outputs (extended with a constant bias element) and β is the vector of boosting coefficients. Using this notation, the algorithm is initialized with the following data: a pool containing a single weak classifier with zero weights (T = 1) and boosting coefficients β set to zero. Assuming a simplified cutoff criterion for weak classifier training, a single iteration of the algorithm takes the form illustrated in Algorithm 2 (Input: iteration numbers i, i_w, x, y, λ and λ_H, vectors …). First, the input vector is classified by each of the weak classifiers already added to the pool, forming the vector h of classifier outputs. This pool has T_i classifiers in it, starting from a single classifier on the first iteration, and an additional bias expansion term, as described in Section 2.1. Next, the objective function F and the loss function L for the strong classifier are calculated, given the boosting weights β. The loss function is then used as a weight for training the weights of the latest weak classifier α_T with the Pegasos algorithm. It can be seen that, similarly to the AdaBoost family of boosting algorithms, our algorithm increases the weights of samples misclassified by the current strong classifier and decreases the weights of samples classified correctly. In fact, due to the use of the hinge-loss function for weight calculation, samples classified with high confidence by the already existing classifiers have no effect on the weights α_T. This allows the creation of classifiers that compensate for the errors introduced by the previous ones, and eventual convergence to the true distribution of the samples. After the weak classifier has been updated, the boosting weights are adjusted, using the vector h as input. Use of the Pegasos algorithm guarantees that, eventually, the weights converge arbitrarily close to the optimum described by (4). As the last step of an iteration, we calculate the cutoff parameter for adding a new classifier. If the preset threshold is reached (in this case, when the number of training iterations i_w reaches r), the value of T is increased, and a new classifier is initialized with zero weights.
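Purely as an illustration of this two-phase iteration (reusing the pegasos_step function from the sketch above), the per-sample logic might look roughly as follows. The cutoff rule, the handling of the bias element, and all variable names are assumptions made for the sketch, not the authors' exact Algorithm 2.

import numpy as np

def online_boosting_step(classifiers, beta, x, y, i, i_w, T, lam, lam_H, r):
    """One iteration of the two-phase scheme: weight the sample by the strong-classifier
    hinge loss, train the newest weak classifier, then update the boosting weights.
    Iteration counters i and i_w are assumed to start at 1 and 0, respectively."""
    # Outputs of the weak classifiers already in the pool, plus a constant bias element.
    # Note: np.sign returns 0 for an untrained (all-zero) classifier; a real
    # implementation would map this to +/-1.
    h = np.array([np.sign(np.dot(a, x)) for a in classifiers] + [1.0])

    # Strong-classifier confidence F and hinge loss L, used as the sample weight
    F = np.dot(beta, h)
    L = max(0.0, 1.0 - y * F)

    # Phase 1: weighted Pegasos update of the latest weak classifier only
    classifiers[T - 1] = pegasos_step(classifiers[T - 1], x, y, w=L, t=i_w + 1, lam=lam)

    # Phase 2: Pegasos update of the boosting weights, with h as the input vector
    beta = pegasos_step(beta, h, y, w=1.0, t=i, lam=lam_H)

    # Cutoff rule: after r updates, add a new weak classifier initialized with zero weights
    i_w += 1
    if i_w >= r:
        classifiers.append(np.zeros_like(x))
        beta = np.insert(beta, T, 0.0)   # new coefficient placed before the bias element
        T, i_w = T + 1, 0
    return classifiers, beta, T, i_w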
Each iteration of the algorithm produces a strong classifier that can be used to get the class of the vector x: H(x) = sign(⟨β, h(x)⟩). There are several key differences between our algorithm and SVMs using Mercer kernels as described in [3,4]. The first and possibly most important one is that the number of weak classifiers T is much less than the number of input samples i. While the algorithm described in [3] allows truncation of the kernel expansion coefficients, this is only applicable to SGD algorithms with a constant learning rate, and it results in an accuracy penalty. The second difference is the update of the vector β, which allows changes in the weights different from the exponential decay of [3]. As our experiments in Section 3 show, these differences allow our algorithm to achieve better accuracy while being less resource intensive. It is also important to note that while the parameters λ, λ_H, and r somewhat affect the convergence rate, they are independent of the form of the class-separating surface. That is, unlike the kernel methods, where the convergence rate and resulting accuracy both depend heavily on the type and parameters of the selected kernel, our method converges similarly to AdaBoost, with the resulting accuracy depending mainly on the regularization parameters and the resulting number of weak classifiers T. This is shown in Section 3, where our algorithm is run with the same set of parameters on datasets with different variable distributions and often outperforms even the kernel-based algorithms with ideal kernel settings. Discussion on the Proposed Method. The process of convergence is illustrated in Figure 2 for the case of a two-dimensional dataset, where the data are classified according to whether they lie inside the unit circle about the origin. The parameters used for this illustration are λ = 0.02, λ_H = 0.02, r = 50. As can be seen, the original separation is quite bad, since a linear classifier cannot converge with such a data distribution; however, as new weak classifiers are added, the separation achieved by the strong classifier becomes closer and closer to the true data distribution. Each phase of our training algorithm tries to solve the primal formulation of the SVM (3), using the Pegasos algorithm, which has a convergence rate of O(R²/(λε)), where R is a bound on the norm of the input vector and ε is the desired accuracy. It is easy to see that for the second phase R = T, T being the number of added classifiers, with each classifier producing output h_i ∈ {−1; 1}, so the convergence rate slowly decreases as additional classifiers are added. To combat this, certain classifiers with low weights may be removed from the pool. This has the small additional effect of increasing the ability of the algorithm to adapt to a changing classification target. The changing weights of the sample vectors in the first phase, as well as the increasing size of the classifier pool, can easily be recast as a moving classification goal, as described in [3]. According to [3], movement of the target introduces an accuracy penalty that is approximately linear in the total distance traveled by the target, which suggests that in our case the convergence should be slower than the convergence of an SVM training algorithm using kernel parameters appropriate for the sample distribution. This is an additional reason why we only update a single weak classifier, rather than all of them, since otherwise the drift would be proportional to the number of classifiers added, significantly penalizing the convergence rate.
However, the bounds mentioned in [3] are not tight, and it is not clear how they are altered by the additional projection step in Pegasos. Adding to this the fact that in most real-time applications the exact form of the data distribution is not known, we have decided to compare algorithm performance using experimental data. Possible Extensions. In this section, we discuss several possible extensions of our algorithm. For example, as can easily be noticed, while the parameter T is much smaller than the number of kernel expansion terms, it still grows with additional samples, which may lead to loss of effectiveness and overfitting. There are several possible extensions that may help avoid this. One way is to remove classifiers α_i for which the condition |β_i| < ε has held for several iterations in a row. This would also allow the algorithm to adapt better to the case of a changing distribution. Another way is to increase the parameter r depending on T, or to choose a different cutoff algorithm altogether. Also, to increase adaptability to changing input conditions, it is possible to change the calculation of the learning rate of the Pegasos algorithm, for example, stopping its decay at a certain threshold. However, such experiments are outside the scope of this paper. Experiments 3.1. Description. We compare our algorithm to both Pegasos [4] and NORMA [3], implemented in MATLAB for both the linear and the kernel-based case. The experiments are run on an AMD Phenom X4 965, with only one core being used for calculations. We perform several experiments, aiming to compare generalization error and convergence rates over different datasets, as well as the ability of the algorithm to adapt to a distribution with changing parameters (flexibility). We use several artificial datasets with known distributions and separation properties, and the Forest Covertype dataset (separating class 5 from the other classes), originally used in [12] and also used for a comparison of convergence speed in [4]. The artificial datasets are generated according to the following distributions. (1) High-dimensional linearly separable data (Linear). A random hyperplane is created in 50-dimensional space. Data points are generated randomly on both the positive and negative sides of the hyperplane. Data points too close to the hyperplane are filtered out. (3) Bayes-separable data (Bayesian). This dataset is generated as described in [3], that is, in such a way that the data are clearly separable using an ideal Bayesian classifier for known class distributions. (4) Bayes-separable data with moving distribution (Drifting). As in (3), but the parameters of the distribution are changed slightly each iteration, simulating target movement. This experiment estimates the ability of the algorithms to adapt to gradual changes in the data distribution. (5) Bayes-separable data with switching distribution (Switching). Once again, a dataset generated according to the description in [3], with the distribution changed drastically every 1000 iterations. This experiment shows the ability of the algorithms to completely relearn a distribution. For each distribution, we measure the decrease of the estimated error rate over the training dataset (the estimated error being simply the number of misclassified training samples divided by the number of iterations), and the resulting error rate over the testing dataset (generated without noise in the case of a noisy distribution). In the case of the dataset with a changing distribution, the distribution at the last iteration is used for testing.
For all experiments, the parameters of our algorithm were fixed, with λ = 0.02, λ_H = 0.03, and the cutoff parameter r = 150. Pegasos and NORMA used the parameter λ = 0.02, and either a linear kernel or a Gaussian RBF kernel with γ = 0.01, which is the same value of γ used for generating the Bayes-separable datasets. Experimental Results. The graphs of the estimated error rate are shown in Figure 3, while the resulting error rate on the test datasets is shown in Table 1. It can be seen that for linearly separable problems our algorithm performs on par with the Pegasos algorithm, with a slight increase in the error rate possibly due to overfitting. For kernel-based methods, however, our algorithm usually outperforms both NORMA and Pegasos, unless the exact kernel parameters are used, and even then (see Figure 3(b)) our algorithm performs slightly better in the long run. It is interesting to note that, for the switching dataset, NORMA actually outperforms Pegasos by a considerable margin, indicating that the Pegasos algorithm is more sensitive to rapid changes in the classification target, most likely due to the rapid decay of the learning rate with time, while our algorithm was largely able to compensate, demonstrating the stability of the method to changing conditions. For the Covertype dataset, linear classifiers work best and approach the error rates indicated in the paper [12], with Pegasos and our algorithm giving virtually the same results. It is also important to note that, compared to kernel-based SVM algorithms, our algorithm is much more efficient both in terms of computing and storage requirements, since the number of weak classifiers, each requiring only a single inner product calculation, is much lower than the number of kernel expansion terms produced by both NORMA and Pegasos for the same accuracy levels. For example, in the test shown in Figure 3(c), the resulting number of kernel expansion terms was over 5000 after 10000 iterations (for both NORMA and Pegasos), while the number of weak classifiers generated by our method was only 67; that is, both the memory and computational requirements (per classification) were lower by a factor of around 75. Conclusion and Future Work We have shown how the combination of boosting and online SVM training creates an algorithm that outperforms standard training algorithms when the kernel parameters are not known and, in general, allows for the creation of more efficient classifiers than simple kernel expansion. The drawbacks of our algorithm include the fact that it is not as efficient in the case of linearly separable problems, and that it inherits from Pegasos some of the sensitivity to rapid changes in the target function. In the future, we plan to study the application of our algorithm to various image and signal processing tasks, in particular to the object tracking problem, to compare its effectiveness to methods based on various online AdaBoost modifications, like the ones described in [1]. Figure 1: Illustration for a support vector machine. Figure 2: Illustration of the convergence process: (a) the first linear classifier trained (i = 50, T = 1); (b) several additional weak classifiers added (i = 250, T = 5); (c) the data separation corresponding to the set of classifiers in (b), where the background color shows the class generated by the classifier and the form and color of the data points show the actual label y; (d) the data separation achieved after convergence (i = 5000, T = 100).
y: The label provided for x, y ∈ {−1; 1}. h: A vector that consists of the outputs provided by a set of T weak classifiers, h_i(x) ∈ {−1; 1}. That means that an SVM training algorithm can be applied for training the weights β, with the same guarantees for convergence rate and generalization error bound shown in [5]. 2.3. Resulting Algorithm. Using the above similarity, we separate the boosting process into two phases: the training of each successive weak classifier, and the adjustment of the boosting weights that combine the weak classifiers into a strong one. Both phases can then be cast as a primal SVM problem, the same as (4), with the difference that in phase 1 (weak classifier training) each sample is weighted according to the result provided by the current strong classifier. To reduce the amount of calculation, we have chosen the loss function L of the strong classifier H as the weight, although other weighting solutions are possible. Also, unlike [1], we only train the last weak classifier of the set rather than all of them.
x: An input sample vector. A single vector is provided for each iteration of the algorithm. Sample vectors are assumed to be extended with an additional constant element representing the bias term.
f_t = f_t(x): Objective function of the t-th weak classifier.
F = F(h): Objective function of the strong classifier.
L: A loss function for the combined strong classifier. In this work we use the hinge-loss function (see (1)).
h_t: Output value of a weak classifier, h_t = sign(f_t).
β: Boosting coefficients, used for combining several weak classifiers into a stronger one. With the bias term, the number of elements in β is T + 1.
H: The combined strong classifier.
Table 1: Error rate over the testing dataset. O: our algorithm, PL: linear Pegasos, NL: linear NORMA, PG: Pegasos using a Gaussian RBF kernel with γ = 0.01, NG: NORMA with the same kernel.
5,567.8
2012-08-14T00:00:00.000
[ "Computer Science", "Mathematics" ]
Smart meters as a key component of modern measuring infrastructure providing observability and state estimation of low-voltage distribution networks The paper presents a solution to the problem of organization of a system for collecting and transmitting information about measurements from smart meters necessary for the state estimation of a low-voltage distribution network. The problems of providing the sufficiency of measurements for the observability of the network and the influence of errors in the information about load connection to phases on the quality of the observability are considered. The results of allocation of smart meters and the state estimation of the real distribution network are given. Introduction Efficiency and reliability of energy use can be ensured by using information about the current load condition coming from the measuring infrastructure to the control centre of the electric power system (EPS) in real time. Measurement data are processed by state estimation (SE) algorithms that allow to refine the measurements and to calculate all the variables of the EPS state required for monitoring and control. A necessary condition for ensuring the existence of a solution to the problem of SE according to measurements is the EPS observability. The quality of observability, which determines the influence of measurement errors on the solution obtained during SE, depends on a set and an allocation of measurements. It can be improved by increasing the number of measurements. Methods of SE have found a wide application in high and ultra-high voltage networks, through which electricity is transferred from power plants to distribution networks (DNs) directly connected with consumers. For the SE in such networks Phasor Measurement Units (PMUs) are used allowing simultaneous and synchronized vector measurements in remote nodes using GPS or GLONASS. PMU measures the voltage vector in the node and the current vectors in adjacent branches [1]. The results of the SE based on PMU can provide information about the power entering the primary medium-voltage (MV) DN. Neither total loads of the transformers nor, especially, the loads of the secondary low-voltage (LV) DN can be obtained. Preparation of information for the state estimation of the secondary distribution network In the traditional DN, only the measurements at primary distribution substations on the MV side are available to the dispatcher, as well as measurements of critical loads [3]. Obviously, such measurements are not enough either for the SE of DNs or for the calculation of the load flow. Passive DNs gradually become active ones with a difficultly predictable state owing to introduction of renewable generation sources, energy storage and active consumers controlling their consumption depending on tariffs. There is a need to equip DNs with measuring devices and conduct the SE of DN on their basis. The SE of the primary DN is necessary for solving such problems as estimation of energy losses, reactive power and voltage regulation, network reconfiguration and load forecasting. In the secondary DN information about load condition is used for voltage and power flow control, also for calculations of power losses, short-circuit currents and reliability indicators [2]. The SE of the secondary DN can be the basis for determining voltages and powers entering the secondary DN in the nodes of connection of transformers to the primary DN. It is actually equivalent nodal powers of the primary DN. 
Such information will also be the basis for the SE of the primary DN that has no measuring devices. Nowadays there is an automated commercial electricity metering system in secondary DNs, the main component of which are electronic meters that measure energy consumption over a certain period of time (a day, a week, a month, a season, a year). Owing to that, it is possible to use the information about average loads to calculate the load flow in the DN and to determine the energy losses [4]. Since there is no information in the reports of automated commercial electricity metering systems about the distribution of loads by phases, the estimate of energy losses will depend on the way the loads are distributed between the phases. This is shown by calculations of the three-phase load flow for a 24-node low-voltage feeder of a real DN, Fig. 1a. Two variants of specifying the feeder loads were investigated. In the first, symmetrical, variant the nodal loads were assumed to be three-phase symmetrical: the load of a node, single-phase or three-phase, was divided equally between the phases. When calculating the load flow in this case, the current in the neutral wire and the energy loss in it are zero. In the second, asymmetrical, load variant, three-phase nodal loads were divided equally between the phases and each successive single-phase nodal load was assigned to a new phase of the feeder. With this simulation the feeder loads become unbalanced, and when calculating the load flow, currents and energy losses appear not only in the neutral wire but also in the earth. The results of calculating the hourly energy losses in phases a, b, c, in the neutral wire n, and the total losses d for the symmetrical (1) and asymmetrical (2) variants are shown in the diagram, Fig. 1b. The total energy losses in the phases for the symmetrical load equal 0.1336 kWh, which is less than the losses for the asymmetrical load, 0.1639 kWh, the latter also including the energy losses in the neutral wire and in the ground. It will be shown below that the problem of phase identification also arises when measurements from SMs are used for the SE. To determine the phase, a method is proposed in [5] consisting of synchronized measurements, over a long period of time, of the voltage magnitudes in a control node with known phases nearest to the power source and in the SM connection node. The phase is determined from the maximum values of the cross-correlation coefficients between the two vectors containing the measured voltage magnitudes. The effectiveness of the proposed approach can be illustrated for the fragment of the DN [6] presented in Fig. 2a. Node 2, which is the nearest to the power supply node, is taken as the control node with known phases. Nodes 2, 9, 10, and 11 have three-phase loads; nodes 16 (phase B) and 24 (phase C) are generator nodes; nodes 5, 29, 17, and 25 have single-phase loads; and node 18 has a two-phase load. For all the nodes, the information about voltage magnitude for each hour of the day was obtained from the test load measurements using the three-phase SE program. Further, for the voltage vectors of phases A, B, C of node 2 and each of the measured voltage vectors (for example, in node 5), the standard MATLAB function corrcoef was used to calculate the cross-correlation coefficients AA, BA, CA. The coefficients recorded in the table in Fig. 2b suggest that the SM is installed in phase A. The phase of an SM can thus be identified from the maximum values of the correlation coefficients.
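A minimal sketch of this correlation-based phase identification, using numpy.corrcoef in place of MATLAB's corrcoef, is given below; the hourly voltage-magnitude profiles are hypothetical placeholders, not data from the studied network.

# Illustrative sketch of the correlation-based phase identification described above.
# v_ref holds hourly voltage-magnitude profiles of the control node (known phases);
# v_meter is the hourly profile recorded by the smart meter. Placeholder data only.
import numpy as np

def identify_phase(v_ref, v_meter):
    """Return the phase whose reference voltage profile correlates best with the meter's."""
    scores = {phase: np.corrcoef(profile, v_meter)[0, 1]
              for phase, profile in v_ref.items()}
    return max(scores, key=scores.get), scores

hours = np.arange(24)
v_ref = {"A": 230 + 2 * np.sin(hours / 3.0),
         "B": 230 + 2 * np.sin(hours / 3.0 + 2.1),
         "C": 230 + 2 * np.sin(hours / 3.0 + 4.2)}
v_meter = v_ref["A"] + np.random.normal(0, 0.1, 24)   # meter actually connected to phase A

phase, scores = identify_phase(v_ref, v_meter)
print(phase, scores)   # expected: "A" with the largest correlation coefficient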
For example, comparing the correlation coefficients AA (0.9984), AB (0.7757), and AC (0.9115) for the node 5 allows affirming meter is in phase A. Additional problems associated with the automated commercial electricity metering system are non-synchronization of measurements, erroneous data about network topology, erroneous data about connection of feeders to secondary transformers. The use of such information is problematic for the calculation of the load flow, and even more so for the SE. The transition to the concept of smart grid including the use of new electronic devices, such as SMs, and the organization of a universal communication infrastructure for the collection and processing of measurement information, allows the organization of the functioning of the DN in a dialogue format. The control object (a DN dispatcher) does not just send one-way commands, but receives from the control object a message about the possibility of their implementation. As a result, the command can be corrected or cancelled. That will avoid losses and ensure the reliability of the DN operation [7]. To realize the two-way interaction, SMs have been developed, their measurements will form the basis of the three-phase SE of the secondary DN, and, as noted above, the SE of the primary DN. Smart meters and Advanced Metering Infrastructure The appearance of three-phase and single-phase SMs installed in the load and/or generator nodes of LV DN allows obtaining the measurements necessary for the calculation of the load flow and the SE [8]. The transmission and processing of measurements from SMs is possible thanks to the innovative communication system containing various communication technologies -Advanced Metering Infrastructure (AMI). AMI collects and analyzes data from SMs, providing two-way communication with a dispatching office, as well as between consumers and utility services. The appearance of AMI and its earlier modifications is the first step towards the transition to digital control systems for DNs. [8,9] Various communication tools are used for AMI organization [9]. In Russia and Europe, the most widely spread wire technology is PLC, associated with the transmission of measured information through the power lines [10], its drawbacks include a bandwidth limitation and long-term responses to incoming messages from SMs. The most common wireless communications are ZigBee [11] and cloud technologies [12]. In Kazakhstan and Belarus, cellular is used for communication between SMs [13]. Wireless technologies are considered to be less expensive, but the range and correctness of signal transmission is limited, the data transfer rate is low, and the penetration and avoidance capacity of the obstacles is weak. In addition, wireless technologies can interfere with other devices. Typically, the PLC is used to transfer measurements to data concentrators located in primary substations. Communication between meters of a single feeder or a section of the DN is carried out via wireless channels. AMI is constructed hierarchically in accordance with the information levels of the computer network (HAN, NAN, WAN, LAN), Fig.3 [7.8]. The review of papers on problems of AMI organization, allocation of SMs in the LV DN and their use for the SE [2,14], allowed to establish the following facts. At the initial stage of the study [15] of the SE problem, measurements from SMs were considered as synchronized vector measurements of nodal voltages and current injections [16,17]. 
It was believed that to determine the location of a particular SM it is possible to use the information attached to it, including the name of the substation, the node number, and the feeder and phase numbers [18]. However, more recent publications [7,9,19,20] present conflicting information on the problem of synchronization of measurements from SMs and note the absence of a specialized standard that determines the sampling rate for SMs. Thus, measurements from SMs are non-synchronized. These are the same electronic meters that are connected to the automated commercial electricity metering systems, performing an averaged measurement of energy consumption over a certain time interval (15, 30, 60 min). In addition, SMs of several manufacturers can measure the voltage magnitude and the active and reactive power injections and/or the active and reactive current injections in the node where the SM is installed. They can also measure the output power delivered to the supply network in DNs with renewable generation sources. Allocation of measurements to provide observability and state estimation in distribution networks To arrange the measurements by the criterion of topological observability for each phase of the LV DN, it is sufficient to know the phase topology and the nodes which are load and/or generator ones. Transit nodes, which are adjacent to two branches, and hanging nodes without nodal power are not included in the equivalent circuit of the phase. The set of SMs including voltage magnitude and current injection measurements in a node can be determined with the developed program for allocating measurements that ensure the observability of DNs. The algorithms used in the program [21] are based on the algorithms for choosing the set and locations of PMU measurements proposed in [22] to ensure observability under various load conditions. In contrast to PMUs, SMs measure not vectors but magnitudes of voltage. SMs must be installed in all load and/or generator nodes to ensure the observability of the LV DN [6]. Measurements are not installed in transit nodes. If it is not possible to allocate measurements at all load and/or generator nodes, a minimal set of SMs can be used to control the nodal voltages. That is especially important in networks with renewable generation. The minimal set of SMs can be obtained using the algorithm for selecting the minimum set of measurements from single-channelled PMUs, as shown in [21]. Additional constraints were introduced in the algorithm to prohibit the installation of measuring devices in transit nodes adjacent to more than two branches and the installation of more than one measuring device in a node. After checking the chosen set of SMs in the topological observability analysis program, such measurements can be used for the SE of each phase of the LV DN. To ensure the observability of the DN with the minimum set of SMs, direct-axis voltage components equal to the measured voltage magnitudes can be specified as the measurements from SMs. Quadrature-axis voltage components are assumed to be zero. Such a replacement is allowable because the quadrature-axis voltage components are almost equal to zero. That makes it possible to provide observability, specifically voltage control of only the direct-axis voltage components, in all nodes of the DN, as shown in [15]. A similar replacement is also used when SMs are allocated in all load and/or generator nodes. A linear SE algorithm based on the extended Hachtel matrix can be used for the SE of each phase in three-phase four-wire DNs with asymmetrical phase loads, as shown in [23].
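The linear SE mentioned above relies on an extended Hachtel-matrix formulation [23]. As a rough, generic illustration of linear state estimation (not the Hachtel method itself), a plain weighted-least-squares estimator for a linear measurement model is sketched below; the measurement matrix and numerical values are invented toy values, not data from the studied network.

# Generic weighted-least-squares state estimation for a linear measurement model
# z = H x + e. Only an illustration of the idea behind linear SE; the paper uses
# an extended Hachtel-matrix formulation. The numbers below are invented.
import numpy as np

def wls_state_estimation(H, z, sigma):
    """Return the WLS estimate x_hat = (H^T W H)^-1 H^T W z with W = diag(1/sigma^2)."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    A = H.T @ W @ H
    return np.linalg.solve(A, H.T @ W @ z)

# Toy example: two state variables observed through three measurements
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
z = np.array([0.98, 0.95, 0.04])          # measured values (per unit, invented)
sigma = np.array([0.004, 0.004, 0.003])   # measurement standard deviations (invented)
print(wls_state_estimation(H, z, sigma))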
The advantage of this method is the elimination of measurements of zero current injections from the objective function and the use of an additional constraint on zero equalization of the weighted residues. The illustration of the algorithms for choosing the set of measurements and state estimation of the secondary distribution network A real LV DN 0.4kV, which belongs to a regional state unitary power company "Oblkommunenergo" (Irkutsk, Russia) is used to illustrate the developed algorithms. It feeds consumers of the Iskra village. The power supply to the Iskra village is provided by four transformer substations, which are supplied from the primary substation, 110/6kV, owned by "Irkutsk Electric Grid Company". Four 0.4kV feeders of substation "Iskra-Novaya" were taken for the calculations, Fig.4: feeder No.1 -34 nodes, feeder No.2 -21 nodes, feeder No.3 -45 nodes, feeder No.4 -56 nodes. The scheme includes 157 nodes taking into account the power node, 60 nodes are transit. "Oblkommunenergo" gave necessary information for the study about the network topology, brands of electric wires and line lengths; a list of consumers. During the preparation of the scheme for calculating the state, problems were found that are typical for the most real DNs in Russia. The geographical maps (2GIS, Yandex and Google) available on the Internet contain conflicting information about house numbering and street names, and in the given list of consumers not all addresses are presented in an understandable form and correspond to the map data. Despite the existence of the list of consumers, there is no information on their belonging to a particular phase. When analyzing and processing the provided data on energy consumption for various periods of time, it was found that consumption readings are either mis-transmitted by users or contain errors that occur when recording information, which can even lead to negative consumption. Since the exact consumption is unknown, two versions of the initial data were generated for the state analysis. In variant No.1, the loads were set in accordance with the available limit in Russia for the power consumption for domestic needs of 15kW. This load was accepted for three-phase consumers. Singlephase loads were taken equal to 5kW. The reactive component of the nodal power was adopted on the basis of the reactive power factor normalized for utility networks (tgφ = 0.2). The voltage on the low side of the transformer substation was set at 0.38kV. A single-phase state for the DN was calculated, which showed that, for example, in feeder No.1 in 32 nodes voltage deviation exceeds 10%, which is confirmed by the estimates of "Oblkommunenergo" employees. At the moment, the company has already developed a technical task for the network reconstruction of the Iskra village and the development of the project has begun, including the addition of a substation and the reconfiguration of the entire network. In the version of state No.2 to simulate the possible overvoltages in the network, which can be further determined by the SE, the values of the node load powers were reduced by 20%, the voltage on the low side of the transformer substation was taken at 0.425kV, and for a number of the most remote feeder nodes reactive power sources were installed. The allocation of SMs according to the program [21] was made for each feeder independently. 
When determining the minimum set of measurements, it becomes possible to edit the allocation of the SMs selected by the program: for example, to replace the meter installation in the garage allocated in the scheme as a load node (node 43, Fig.4) by installing the SM in the residential building (node 44, Fig.4). For the DN of Fig.4, the minimum measurement set is obtained, including 64 SMs, and a full set is determined with SMs in all 96 load nodes. As measurements for the SE, injections of active and reactive currents and voltage magnitudes were used, while errors of 0.4V and 0.3A were introduced into the test values. Fig.5 compares the direct-axis components of the nodal voltages of the test state of feeder No.1 with the estimates obtained from the linear SE program for the two sets of SMs for the initial data No.1 (Fig.5a) and No.2 (Fig.5b). In the first case, there are unacceptable voltage drops in all nodes of the feeder; in the second case, overvoltages. The analysis of the SE results confirmed the conclusion obtained earlier that even the minimum set of SMs makes it possible to obtain estimates of nodal voltages with acceptable accuracy and to reveal their deviations. The maximum error of the estimates in comparison with the test state data was 0.14593% for case No.1 and 0.09117% for case No.2. At this stage of the study, a minimum number of SMs can be proposed for the four feeders of the LV DN departing from the "Iskra-Novaya" substation of "Oblkommunenergo". Such a set, including 64 SMs installed in the circled nodes, Fig.6, is sufficient to control voltage levels in the LV DN of the Iskra village.
Conclusion
The problems that arise while preparing the information for the SE of the secondary DN are analysed, namely:
1. the choice of measuring devices and the associated set of measured variables;
2. the identification of the phases of consumers' connection;
3. the ability to synchronize measurements;
4. errors in information about the topology of the DN and the connection of feeders to transformers.
Methods for constructing a modern measurement infrastructure, including an active DN, a communication network, and measurements of smart meters that are used not only for collecting information about energy consumption but also for the SE in a secondary DN, are considered. The problem of providing the observability of all variables of the secondary DN based on SMs, as well as the observability of only the magnitudes of the node voltages, is considered. The proposed approaches were applied for choosing the set of SMs and for the SE of the DN managed by the regional state unitary power company "Oblkommunenergo", Irkutsk, Russia. Recommendations for the allocation of measurements and the organization of a system for the collection and transmission of measured data for the planning and further development of this network can be given to "Oblkommunenergo" based on the results of the study. The most important points are the need to install smart meters in all load nodes, to ensure the synchronization of such measurements, and to know the phase connection of consumers. The work was done in the framework of project III.17.4.2 of the program of fundamental research of SB RAS, registration number AAAA-A17-117030310438-1.
4,607.4
2018-11-01T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Sequence variation and immunogenicity of the Mycoplasma genitalium MgpB and MgpC adherence proteins during persistent infection of men with non-gonococcal urethritis Mycoplasma genitalium is a sexually transmitted bacterial pathogen that infects men and women. Antigenic variation of MgpB and MgpC, the immunodominant adherence proteins of M. genitalium, is thought to contribute to immune evasion and chronic infection. We investigated the evolution of mgpB and mgpC sequences in men with non-gonococcal urethritis persistently infected with M. genitalium, including two men with anti-M. genitalium antibodies at enrollment and two that developed antibodies during follow-up. Each of the four patients was persistently infected with a different strain type and each patient produced antibodies targeting MgpB and MgpC. Amino acid sequence evolution in the variable regions of MgpB and MgpC occurred in all four patients with changes observed in single and multiple variable regions over time. Using the available crystal structure of MgpC of the G37 type strain we found that predicted conformational B cell epitopes localize predominantly to the variable region of MgpC, amino acids that changed during patient infection lie in these epitopes, and variant amino acids are in close proximity to the conserved sialic acid binding pocket. These findings support the hypothesis that sequence variation functions to avoid specific antibodies thereby contributing to persistence in the genital tract. Introduction M. genitalium is increasingly recognized as a sexually transmitted pathogen in men as a frequent cause of acute and chronic non-chlamydial, non-gonococcal urethritis (NGU) [1]. In women, M. genitalium is associated with cervicitis, pelvic inflammatory disease (PID), endometritis [2][3][4], tubal factor infertility [5,6], preterm birth, and spontaneous abortion [7]. Importantly, M. genitalium infection increases cervical shedding of HIV [8] as well as the risk of acquiring and transmitting HIV [9,10]. The prevalence of M. genitalium ranges from 1.3-3.9% in population-based studies, to 20.5% in high risk settings [11,12]. Similar to Chlamydia trachomatis [13], many infected men and women are unaware of their M. genitalium-positive status [12,14,15]. Treatment is complicated by inherent resistance to cell wall targeting antibiotics, and high rates of acquired azithromycin resistance (40-100% of strains in some settings) [16]. The efficacy of moxifloxacin, used to treat azithromycin-resistant infections, is declining and dual resistance is increasingly reported [17]. M. genitalium can persist for months, and potentially years, in infected individuals [10,18,19] despite the presence of specific antibodies in genital exudates of infected women [20] and in the sera of infected men [21]. These data suggest that M. genitalium evades the local and systemic immune response, allowing greater opportunity for transmission to others and ascension to the upper reproductive tract in women. The MgpB and MgpC adherence proteins, also known as P140/MG191 and P110/MG192, respectively, are immunodominant targets of host antibodies [5,[20][21][22][23]. MgpB and MgpC localize to the M. genitalium terminal organelle, forming a complex [24] required for adherence to host cells and inanimate surfaces, and motility [25][26][27][28]. Recently the structure of the MgpC protein was determined and a sialic acid binding pocket was identified [29]. The mgpB and mgpC genes, expressed from a single locus on the M. 
genitalium chromosome, consist of conserved sequences interspersed with variable regions [22,23,30]. MgpB and MgpC expression is affected by both antigenic and phase variation. Antigenic variants express MgpB and MgpC proteins with variant amino acids while phase variants do not express either MgpB or MgpC and are non-adherent. Antigenic variation is accomplished through segmental, reciprocal recombination between individual variable regions in mgpBC and archived variable sequences present in nine MgPars located throughout the chromosome [22,23,31,32]. No MgpB or MgpC protein is expressed from the MgPars as only the variable sequences of mgpB and mgpC are present, the adjacent variable regions have different reading frames, and the variable sequences are often separated by short AT-rich regions encoding multiple stop codons. Phase variants arise in vitro by multiple mechanisms including recombination between mgpBC and the MgPars, point mutations, and deletions. [25,33,34]. Phase variants generated by recombination fall into at least six classes and can be reversible or irreversible depending on the number of recombination partners involved [33]. Deletion of recA results in the near-total loss of antigenic and phase variation implicating recombination as the mechanism that generates the majority of these variants [25,33]. Antigenic and phase variation may represent immune evasion strategies allowing M. genitalium to escape binding by specific host antibodies and persist in the genital tract. In order to understand the extent of antigenic variation during infection, we assessed sequence changes in both mgpB and mgpC in four men with NGU with persistent M. genitalium infection [35]. Among these men, two were positive for anti-M. genitalium serum antibodies specific for MgpB and MgpC at enrollment and two developed MgpB/C-specific antibodies during observation. We assessed sequence variation simultaneously in all four variable regions: region B, EF, and G of mgpB, and region KLM of mgpC, and found that variation was both rapid and extensive and was localized to conformational B cell epitopes predicted for MgpC. Our results are consistent with a model in which the immune system selects for variants in multiple regions of MgpB and MgpC simultaneously during persistent infection. Finally, by mapping these sequence changes onto the published crystal structure of MgpC [29], we found that variant amino acids are located near the sialic acid binding pocket of MgpC, suggesting that antigenic variation may protect M. genitalium from adherence-inhibiting antibodies. Patient specimens The M. genitalium isolates in this study (Table 1) were obtained from urine specimens collected between January 2007 and July 2011 in a double-blinded, randomized trial comparing the effectiveness of azithromycin and doxycycline for men with NGU at the Public Health-Seattle & King County STD Clinic in Seattle, WA [35]. In this study, M. genitalium PCR-positive patients were randomly assigned to receive azithromycin or doxycycline upon enrollment (Visit 1). Patients returning at Visit 2 with signs or symptoms of urethritis were prescribed the alternate antibiotic and M. genitalium PCR status was again determined. Patients that were M. genitalium-positive at Visit 3, after azithromycin and doxycycline treatment, were prescribed moxifloxacin. At each time point, persistent infection was determined by PCR and M. genitalium isolates were recovered in cocultures with Vero cells (see below). 
Among the four patients whose cultured strains were analyzed in the current study, M. genitalium infection persisted after doxycycline treatment, consistent with the known poor efficacy of this antibiotic for eradication of M. genitalium infection [36]. Three men (Patients 10378, 10467, and 10477) were also treated with azithromycin per study protocol, however, it was later determined that each of these patients had been infected with an azithromycin resistant strain (MIC � 8 μg/ml, Totten et al. in preparation) at enrollment. The M. genitalium strain that infected Patient 10366 is sensitive to azithromycin, however, azithromycin treatment was initiated after collection of the Visit 3 specimens. For the present study, we analyzed M. genitalium isolates cultured from patient specimens obtained at Visit 1 and Visit 3. Immunoblots Antibody reactivity of patient sera to whole cell lysates of wild type M. genitalium strain G37 was determined as previously described [38] using a 1:1,000 dilution of patient serum, followed by a 1:7,500 dilution of peroxidase-conjugated goat anti-human IgG (whole molecule; Sigma-Aldrich, St. Louis, MO) secondary antibody and chemiluminescent detection (ECL, GE Healthcare, Chicago, IL). Culture of M. genitalium from urine M. genitalium isolates (Table 1) were recovered from processed patient urine by coculture with Vero cells [39]. Briefly, patient urine (2 ml) was centrifuged at 16,000 x g for 15 minutes, the supernatant was discarded, and the cell pellet was resuspended in 0.4 ml mycoplasma transport medium and frozen at -80˚C. At the time of culture, 1 x 10 5 Vero cells (obtained from the American Type Culture Collection) were seeded in 25 cm 2 flasks in 5 ml Eagle's Minimal Essential Medium (EMEM; ATCC, Manassas, VA) supplemented with 10% fetal bovine serum and 100 U/ml penicillin. After overnight incubation at 37˚C in 5% CO 2 , the culture medium was removed, adherent Vero cells were washed with PBS, and fresh EMEM containing 10% FBS, 6% yeast dialysate, 100 U/ml penicillin, 50 μg/ml polymyxin B, and 50 μg/ml colistin in a total volume of 8.5 ml was added. Flasks were inoculated with thawed, processed urine (100 μl) and incubated at 37˚C in 5% CO 2 for four weeks. Vero cells grew to form a confluent monolayer after two weeks and then detached from the plastic by week three. To confirm growth of M. genitalium from patient specimens, aliquots from these cocultures were collected weekly and DNA was isolated with the MasterPure DNA isolation kit (Lucigen, Middleton, WI). M. genitalium DNA was quantitated by qPCR as previously described [40] confirming growth by an increase in genomes over time. PCR amplification and sequencing of mgpBC variable and strain typing regions DNA isolated from M. genitalium strains cocultured with Vero cells for three weeks, corresponding to late log phase, served as template for amplification of the mgpB strain typing region (Fig 1) with primers ModPetF and 1415R (Table 2). PCR products amplified after 30 cycles with Platinum PCR SuperMix High Fidelity (Invitrogen, Carlsbad, CA) were cloned into pCR2.1-Topo (Invitrogen) and several plasmid clones were sequenced to determine the M. genitalium strain type sequence and verify that a single strain type was detected at Visit 1 and Visit 3 ( Table 1). Variable regions in mgpB and mgpC were similarly PCR-amplified using the primers indicated in Fig 1 and Table 2, cloned, and sequenced from multiple plasmids. 
Each variable region was PCR-amplified twice, using a different primer pair for each reaction. Sequences were aligned using MultAlin (http://multalin.toulouse.inra.fr/multalin/) [41], Highlighter (https://www.hiv.lanl.gov/content/sequence/HIGHLIGHT/highlighter_top.html) [42], and ElimDupes (https://www.hiv.lanl.gov/content/sequence/elimdupesv2/elimdupes.html). Unique sequences have been deposited in GenBank (Accession numbers MT439353 - MT439593, Table 1). Sequences of the conserved regions of mgpB for these patient isolates have been published previously [27]. (Fig 1 caption: Small arrows indicate the primers used to PCR amplify each region indicated by grey lines. Each variable region was amplified with two different primer pairs. PCR products were cloned and sequenced from individual plasmids to assess sequence changes over time in infected men. Strain types were determined by sequencing the region indicated by "ST". Numbers indicate base pairs relative to the start codon of mgpB and mgpC.) To measure variation during Vero coculture, one strain (MEGA 1166) was subcultured twice, for a total of ten weeks of in vitro growth, then variation in mgpB region B was assessed by PCR and sequencing as described above.
Epitope analysis
Epitopes within MgpC (amino acids 23-938) of the M. genitalium type strain G37 were predicted using DiscoTope 2.0 [45] and the published crystal structure, PDB 5MZ9 [29]. For simplicity, our analyses include only those epitopes with a score greater than -1.0, corresponding to 30% sensitivity and 85% specificity. The default threshold of -3.7 corresponds to 47% sensitivity and 70% specificity. To predict epitopes in patient variants, the KLM region of G37 MgpC was replaced with sequences specific for isolates 1491 and 1534, modeled with iTasser [46], and then analyzed using DiscoTope 2.0. PyMol was used to manipulate models of predicted protein structures and generate images. Linear epitopes were predicted using Bepipred 2.0 [47].
Ethics statement
The M. genitalium cultures and sera analyzed in this study were obtained from men enrolled in our Seattle-based treatment trial [35]. This study was approved by the University of Washington Institutional Review Board and all enrollees gave written informed consent.
Results
The goal of this study was to determine the extent of sequence variation in mgpB and mgpC over time in men with NGU who were persistently infected with M. genitalium. Previous studies of antigenic variation of M. genitalium from clinical specimens have been limited to analyzing a single variable region and/or by a low number of cloned sequences analyzed [22,23,32,43], which would underestimate the diversity and complexity of MgpB and MgpC variation. An appreciation of the extent of variation occurring during infection is necessary to understand how antigenic variation contributes to persistence and immune evasion. Here we analyzed sequence variation in all four variable regions (B, EF, G, and KLM) of the mgpBC expression site in M. genitalium cultured from the urine of four men with NGU during persistent infection spanning 28 to 50 days.
Identification of suitable specimens
The M. genitalium isolates sequenced in this study were cultured from men enrolled in our study comparing the efficacy of doxycycline and azithromycin for the treatment of NGU [35]. Urine specimens were collected from men at multiple time points (up to four clinic visits) spanning 28 to 50 days, and confirmed M. genitalium-positive by PCR [48].
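The stringent-cutoff filtering described above is a one-line operation; the sketch below assumes a simple list of (residue, score) pairs exported from the DiscoTope server, which is an assumption for illustration rather than the server's actual output format.

```python
# Minimal sketch (not the authors' pipeline): keep only residues whose DiscoTope 2.0
# score exceeds the stringent cutoff used in this study (-1.0).
STRINGENT_CUTOFF = -1.0   # ~30% sensitivity, 85% specificity, per the text above

def epitope_residues(discotope_scores, cutoff=STRINGENT_CUTOFF):
    """discotope_scores: iterable of (residue_number, score) pairs."""
    return [res for res, score in discotope_scores if score > cutoff]

scores = [(101, -2.4), (465, 0.8), (466, 1.3), (467, -0.2), (901, -5.1)]
print(epitope_residues(scores))   # -> [465, 466, 467]
```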
Strain typing [37] was used to determine that a single strain was detected at all time points. Patient sera collected at Visit 1 and Visit 3 were assayed by immunoblot to identify patients with anti-MgpB and anti-MgpC antibody reactivity. Four patients were chosen for further analysis including two (patients 10378 and 10467) with increased MgpB and MgpC reactivity between clinic visits, and two others (patients 10366 and 10477) that reacted with MgpB and MgpC at both time points (Fig 2). Immunoblot reactivity is assumed to primarily reflect the binding of patient antibodies to conserved sequences in MgpB and MgpC as variable region sequences differ between patient isolates and the G37 type strain used as antigen. The paired Visit 1 and Visit 3 M. genitalium isolates cultured from these four PCR-positive men were strain typed to confirm that the Vero-cultured isolates were identical to the strain present in patient specimens (Table 1). These isolates were then analyzed for sequence variation over time. Analysis of sequence diversity in M. genitalium infected men To assess gene variation in M. genitalium isolates, each of the three variable regions within mgpB (regions B, EF, and G) and the single variable region within mgpC (region KLM) was PCR amplified from Visit 1 and Visit 3 cultures (Fig 1). Amplicons were cloned, and then 10 plasmids were sequenced to identify regions that varied between time points. If sequence changes were observed in a particular variable region, then an additional 25-30 plasmids were sequenced. These variable regions were then amplified with a second primer pair targeting the same region (Fig 1 and Table 2), again cloning and sequencing 35-40 plasmids. Similar sequences were obtained using both primer pairs suggesting that each reaction amplified representative sequences. For example, 18 variant sequences were identified among 78 clones of region B from Patient 10366 Visit 1. Of the 6 variant sequences found in more than two plasmid clones, all were amplified by both primer pairs, representing 83% of total sequences obtained. Using this strategy approximately 75 cloned sequences were analyzed per variable region for each time point in all four patients. The predicted amino acid sequences for each variable region were aligned to assess sequence changes between time points. Sequence variation between time points was observed in all four patients analyzed in at least one variable region. Variable region sequences were unique to the isolates from each patient (i.e., no sequence was identical between different patients), emphasizing the diversity of M. genitalium strains circulating in a single geographic area. A comparison of the Visit 1 and Visit 3 sequences revealed the loss of specific amino acid sequences over time, consistent with immune selection against specific epitopes. Figs 3 and 4 show the results of these analyses in graphic form in which each individual cloned sequence is compared to the predominant sequence present at Visit 1. M. genitalium isolates cultured from Patient 10366 varied between time points in all four regions of mgpB and mgpC (Fig 3). A mixture of variant region B, EF, and KLM sequences, with a single region G sequence, was present at Visit 1. However, by Visit 3 novel sequences predominated in all four variable regions (Fig 4). 
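The clone-tallying step described above (collapsing sequenced plasmids into unique variants and asking which were recovered with both primer pairs) can be sketched as follows; the sequences and primer-pair labels are hypothetical and only illustrate the bookkeeping.

```python
# Minimal sketch of the variant tallying: collapse cloned reads into unique variants,
# then find variants seen in more than two clones and check whether each of those was
# recovered with both primer pairs. Input data are hypothetical.
from collections import Counter, defaultdict

def tally_variants(clones):
    """clones: list of (sequence, primer_pair) tuples, one per sequenced plasmid."""
    counts = Counter(seq for seq, _ in clones)
    primer_sets = defaultdict(set)
    for seq, pair in clones:
        primer_sets[seq].add(pair)
    frequent = {seq for seq, n in counts.items() if n > 2}
    both_pairs = {seq for seq in frequent if len(primer_sets[seq]) == 2}
    return counts, frequent, both_pairs

clones = [("ATGAAA", 1), ("ATGAAA", 2), ("ATGAAA", 1), ("ATGCGA", 1), ("ATGCGA", 2),
          ("ATGCGA", 2), ("ATGTTT", 1)]
counts, frequent, both = tally_variants(clones)
print(len(counts), "unique variants;", len(frequent), "seen in >2 clones;",
      len(both), "of those seen with both primer pairs")
```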
In general, a single variant sequence predominated at Visit 3 for all four patients analyzed, although patients differed in whether they were infected with a variety of variants (e.g., Patient 10366 regions B, EF, and KLM) or a single predominant sequence (e.g., Patients 10378, 10467, and 10477). Interestingly, in Patient 10378, the Visit 3 culture was a mixture of the predominant Visit 1 sequence and a novel sequence (Fig 4). This may indicate "selection in process", i.e., that effective antibodies have recently appeared and are actively selecting against an epitope formed by the amino acids that have changed between time points. Close inspection of variant sequences (Figs 5 and 6) revealed that changes occurred in clusters of amino acids, consistent with the well-described mechanism of segmental recombination between mgpBC and the MgPars [22,23]. As individual variant clusters arose independently of each other (for example, amino acids "DTSG.T" appeared independently of amino acids "KSG" in region B of Patient 10366, Fig 5), we assumed that they represent independent recombination events.
Stability of sequences in Vero coculture
As recovery of M. genitalium from patient specimens requires three weeks of in vitro coculture with Vero cells, we considered the possibility that the gene variation we observed occurred during in vitro culture rather than during patient infection. To address this issue, we serially passaged M. genitalium from Patient 10366 at Visit 1 (MEGA 1166) for an additional seven weeks in Vero cells and then compared the variants present in MgpB region B to the same unpassaged culture. As shown in Fig 7 (upper panel), we found that the number of different variants present, and the sequences of these variants, changed very little during these seven weeks of in vitro growth (Fig 7, lower panel). These results contrast with the extensive sequence evolution that occurred during 33 days of urethral infection and support our conclusion that diversification of sequences is a consequence of growth in vivo.
MgpC variant amino acids lie within predicted conformational B cell epitopes
The recent description of the MgpC crystal structure [29] afforded the opportunity to predict the location of conformational B cell epitopes. As shown in Fig 8A, DiscoTope [45] analysis predicted numerous conformational epitopes within full-length MgpC, with higher scoring epitopes concentrated in variable region KLM: 152 amino acids in full-length MgpC have a DiscoTope score greater than -1.0 (corresponding to 30% sensitivity and 85% specificity), 133 (87.5%) of which are located in KLM. The locations of predicted conformational epitopes were mapped onto the crystal structure of G37 MgpC as shown in Fig 9. Epitopes predicted within the conserved region of MgpC are marked in red, while epitopes in variable region KLM that cluster together on the MgpC surface are indicated with various colors. The amino acids implicated in sialic acid binding [29] are located within, or adjacent to, predicted epitopes (magenta in Fig 9). This analysis shows that most conformational epitopes are located on the so-called "crown" of MgpC, which consists primarily of variable region KLM, as previously noted [29], and are located on the surface of the MgpC molecule where they could be targeted by host antibodies.
Variation observed during patient infection localizes to predicted epitopes
We hypothesized that if variation in MgpC region KLM represents an immune evasion mechanism, then sequence changes should affect predicted epitopes.
We aligned the KLM sequences obtained from two patients and identified amino acids that changed between time points (Fig 8B). In Patient 10366, 32 amino acids varied between early and late time points in region KLM, 22 (66%) of which corresponded to epitopes predicted by DiscoTope with a stringent cutoff of -1.0 (indicated as peaks in Fig 8B, red line). Similarly, 49 (67%) of the 73 amino acids that varied in Patient 10477 were located in epitopes (Fig 8B, green line). We next determined whether sequence variation in vivo changed the amino acid sequence of an existing epitope, or whether a pre-existing epitope was changed to a non-epitope. For this analysis, we replaced the G37 MgpC region KLM sequence with the patient-specific variant sequences, generated structural models with iTasser, and predicted conformational epitopes with DiscoTope. For simplicity, sequences from a single patient (Patient 10477: Visit 1 MEGA 1491 vs Visit 3 MEGA 1534) were analyzed, as a single unique sequence predominated at each time point (Fig 10). The epitopes predicted using the patient-specific models were similar to those of G37 in score (not shown) and location (compare Fig 10 to Fig 9) despite 17% amino acid sequence differences. Patient-specific epitopes localized predominantly to the crown of MgpC, and the amino acids that changed between time points (indicated in black in Fig 10) were embedded in these epitopes. Interestingly, the variant amino acids localized to different faces of the MgpC crown, suggesting that several epitopes changed during the course of infection, possibly avoiding binding by antibodies of multiple specificities.
Discussion
In this study, we assessed sequence variation in the variable regions of mgpB (regions B, EF, and G) and mgpC (region KLM) of M. genitalium cultured from longitudinal specimens (spanning 30 to 58 days) from persistently infected men. Strain typing confirmed that a single strain type was detected in each patient at all time points, and immunoblots indicated antibody reactivity to the MgpB and MgpC proteins in the sera of all four men. Evidence of extensive variation was observed in these patients in one or multiple mgpBC variable regions in vivo, with little variation during in vitro culture. The extent and diversity of variation was greater than previously appreciated from other studies of M. genitalium antigenic variation in patient specimens. Most variable amino acids in MgpC mapped to predicted conformational B cell epitopes, supporting a role for antigenic variation as a mechanism to avoid the biologic effect of specific antibodies in order to persist in vivo. Previous studies have been instrumental in establishing that the gene variation predicted from in vitro studies occurs in vivo. For example, we previously observed mgpB and mgpC gene variation in M. genitalium-infected women [22,23], and Ma et al. [32,43] documented variation in several M. genitalium-infected men and women. However, each of these previous studies assessed a limited number of sequences (5 to 18 cloned sequences per specimen) and only one or two variable regions in these patients. Fookes et al. [50] assessed changes in the entire mgpBC operon by whole-genome sequencing of isolates obtained 79 days apart; however, as single-colony cloned strains were sequenced, the full breadth of variation within the infecting population could not be assessed.
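The overlap computed above (the fraction of changed amino acids that fall within predicted epitope residues) is a simple set intersection; the residue numbers in the sketch below are hypothetical.

```python
# Minimal sketch: what fraction of the amino acids that changed between visits
# fall within residues predicted as epitopes (DiscoTope score > -1.0)?
def epitope_overlap(variant_positions, epitope_positions):
    variant_positions, epitope_positions = set(variant_positions), set(epitope_positions)
    in_epitopes = variant_positions & epitope_positions
    percent = round(100.0 * len(in_epitopes) / len(variant_positions), 1)
    return len(in_epitopes), len(variant_positions), percent

changed = [455, 456, 470, 471, 472, 503]          # residues differing between visits
epitopes = [450, 455, 456, 470, 471, 500, 501]    # residues with score > -1.0
print(epitope_overlap(changed, epitopes))          # -> (4, 6, 66.7)
```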
Our study provides a comprehensive assessment of variation across all variable regions in both MgpB and MgpC in men who were each persistently infected with a single strain. This approach allows an appreciation of the extent and frequency of variation, necessary to understand how antigenic variation relates to immune evasion and pathogenesis. Interestingly, we found that some variable regions did not change between time points in three patients: few changes were observed in regions B, G, and KLM in Patient 10378, in regions EF, G, and KLM in Patient 10467, and in regions B and G in Patient 10477. Our model of antibody-mediated immune selection would predict that the sera of these patients would react poorly to these non-variant regions of MgpB and MgpC, a prediction we intend to test in future experiments. We compared the number of recombination events observed at the expression site, calculated by identifying clusters of amino acid changes between time points and assuming that a unique cluster arose via a single recombination event. In this small sample set, the number of recombination events did not correlate with the length of time between specimen collection. For example, in Patient 10378 we observed 6 recombination events over 48 days of infection, whereas 68 recombination events were detected in Patient 10366 during 33 days of infection. It is tempting to speculate that the duration of antibody response is related to the number of variants observed. For example, isolates from Patients 10366 and 10477 had the most recombination events between time points analyzed (68 and 26, respectively) and both patients had anti-MgpB and MgpC antibodies at early and late time points, providing more opportunity for antibody-mediated selection. However, it is unknown when antibodies arose in Patients 10378 (48 days between visits) and 10467 (28 days). Furthermore, patient serum antibody reactivity was measured by immunoblot against whole cell lysates of M. genitalium strain G37, thereby detecting reactivity to sequences common in all MgpB and MgpC alleles, rather than unique variants present at early time points in these patients. Finally, immunoblot reactivity does not necessarily indicate biologic activity, for example, antibodies may target epitopes that are not exposed on the surface of M. genitalium. Further experiments are needed to ascertain whether these patient antibodies bind specific variants and have biologic activity. Our results showed clearly that many more variants arise during patient infection than during in vitro culture, probably due to a combination of immune selection and higher rates of recombination in vivo. The role of selection by immune factors is supported by the presence of antibodies to MgpB and MgpC, the known immunogenicity of these proteins in humans and animals, and the prediction of many high scoring conformational B cell epitopes in the variable region of MgpC. Furthermore, the fact that a single variant predominates at a given time point suggests that immune selection drives variation (ie, by selecting against one sequence followed by proliferation of a novel variant that escapes antibody selection). In concert with immune selection, we hypothesize that the rate of recombination is upregulated in vivo. Recently, an alternative sigma factor, MG428, was identified that induces expression of RecA and other recombination enzymes thereby increasing antigenic and phase variation [51][52][53]. 
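The event-counting rule used above (each contiguous cluster of changed amino acids counted as one putative recombination event) can be sketched as follows; the gap size separating clusters is an assumed, illustrative parameter rather than a value stated in the study.

```python
# Minimal sketch: group sorted changed amino-acid positions into clusters and count
# each cluster as one putative recombination event. max_gap is an assumed threshold.
def count_recombination_events(changed_positions, max_gap=3):
    positions = sorted(set(changed_positions))
    if not positions:
        return 0
    events = 1
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev > max_gap:   # a large gap starts a new cluster
            events += 1
    return events

# Two well-separated clusters of changes -> two putative recombination events.
print(count_recombination_events([101, 102, 104, 230, 231]))   # -> 2
```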
We hypothesize that specific inducing signals for the MG428 regulon, as yet unknown, will be found in the genital tract. We analyzed the MgpC protein for predicted conformational B cell epitopes using the available crystal structure [29]. We found that most conformational epitopes are located in the variable KLM region of MgpC and that sequence variation detected in M. genitalium patient isolates alters the amino acids specifically localized to these epitopes. These data support our hypothesis that the role of gene variation is avoidance of specific antibody. Interestingly, few conformational epitopes are predicted in conserved sequences of MgpC suggesting that low immunogenicity may be an additional strategy to avoid antibody targeting of these invariant yet surface exposed regions. Further studies are needed to determine if epitopes predicted in silico are indeed targeted by host antibodies, and if the MG281 antibody binding protein of M. genitalium [54] plays a role in immune evasion.
6,132
2020-10-12T00:00:00.000
[ "Biology", "Medicine" ]
CT and MR image fusion of tandem and ring applicator using rigid registration in intracavitary brachytherapy planning The purpose of this study is to find the uncertainties in the reconstruction of MR compatible ring‐tandem intracavitary applicators of high‐dose rate image‐based brachytherapy treatment planning using rigid registration of 3D MR and CT image fusion. Tandem and ring reconstruction in MR image based brachytherapy planning was done using rigid registration of CT and MR applicator geometries. Verifications of registration for applicator fusion were performed in six verification steps at three different sites of tandem ring applicator set. The first site consists of three errors at the level of ring plane in (1) cranio–caudal shift (Cranial Shift) of ring plane along tandem axis, (2) antero–posterior shift (AP Shift) perpendicular to tandem axis on the plane containing the tandem, and (3) lateral shift (Lat Shift) perpendicular to the plane containing the tandem at the level of ring plane. The other two sites are the verifications at the tip of tandem and neck of the ring. The verification at the tip of tandem consists of two errors in (1) antero–posterior shift (AP Shift) perpendicular to tandem axis on the plane containing the tandem, and (2) lateral shift (Lat Shift) perpendicular to the plane containing the tandem. The third site of verification at the neck of the ring is the error due to the rotation of ring about tandem axis. The impact of translational errors from −5 mm to 5 mm in the step of 1 mm along x‐, y‐, and z‐axis and three rotational errors about these axes from −19.1° to 19.1° in the step of 3.28° on dose‐volume histogram parameters (D2cc,D1cc,D0.1cc, and D5cc of bladder, rectum, and sigmoid, and D90 and D98 of HRCTV were also analyzed. Maximum registration errors along cranio–caudal direction was 2.2 mm (1 case), whereas the errors of 31 out of 34 cases of registration were found within 1.5 mm, and those of two cases were less than 2 mm but greater than 1.5 mm. Maximum rotational error of ring about tandem axis was 3.15° (1.1 mm). In other direction and different sites of the ring applicator set, the errors were within 1.5 mm. The impacts of registration errors on DVH parameters of bladder, rectum, and sigmoid were very sensitive to antero–posterior shift. Cranio‐caudal errors of registration also largely affected the rectum DVH parameters. Largest change of 17.95% per mm and 20.65% per mm in all the DVH parameters of all OARs and HRCTV were observed for ϕ and Ψ rotational errors as compare to other translational and rotational errors. Catheter reconstruction in MR image using rigid registration of applicator geometries of CT and MR images is a feasible technique for MR image‐based intracavitary brachytherapy planning. The applicator registration using the contours of tandem and neck of the ring of CT and MR images decreased the rotational error about tandem axis. Verification of CT MR image fusion using applicator registration which consists of six steps of verification at three different sites in ring applicator set can report all the errors due to translation and rotational shift along θ,ϕ, and Ψ. ϕ and Ψ rotational errors, which produced potential changes in DVH parameters, can be tackled using AP Shift and Lat Shift at the tip of tandem. The maximum shift was still found along the tandem axis in this technique. 
PACS number: 87.55.km
Received 10 September, 2012; accepted 3 December, 2013
I. INTRODUCTION
The introduction of computed tomography (CT) and magnetic resonance (MR) imaging compatible applicators enables us to define the target volume and organs at risk effectively for CT and MR image-based brachytherapy treatment planning. However, delineation of the target volume in CT images is still limited due to lesser soft-tissue contrast resolution.
magnetic resonance (MR) imaging preferably T 2 weighted were recommended as a superior imaging modality for target volume and organ at risk delineation. (1)(2)(3)(4) However, applicator reconstruction in magnetic resonance (MR) image is found to be difficult due to the larger thickness of slice and spacing of adjacent slices of MR image acquisition and the inabilities to visualize the proper geometry of catheters, the localization of source channel and tips of catheters in MR images. Moreover, the major challenge in MR image-based brachytherapy is the lack of availability of dummy catheters to simulate the source positioning and poor spatial resolution for the delineation of smaller dummy source size in MR image. This inability to visualize the source channel in MR images is due to weak signal response from the applicator materials, as well as from smaller size of dummy source. (5) Kirisits et al. (6)(7) defined the catheters in paratransverse MR images using the back-projection of applicator geometry reconstructed from X-ray images and Oinam et al. (8) also used the back-projection method to reconstruct the applicator geometry in CT images. Limitation of back-projection methods was the inability to digitize the anchor points which represent the dwell positions of library plans in the interslices points of CT and MR images. Recently multiplanar reformatted image reconstruction was introduced in different brachytherapy treatment planning systems. (9)(10)(11)(12)(13)(14)(15)(16) The changes in DVH parameters calculated due to interobserver variation of applicator reconstruction using the different methods of applicator reconstructions in MR image-based brachytherapy planning were also reported by different authors. (10)(11)(12)(13) Haack et al. (14) reported the interobserver reconstruction accuracy of individual catheters reconstructed using multiplanar reconstruction method and copper sulphate dummy sources in MR image-based intracavitary brachytherapy planning. Pelloski et al. (9) also used the multiplanar reconstruction method of low dose rate (LDR) brachytherapy applicators in BrachyVision TPS. If the CT data of small slice spacing is acquired, the applicator geometry can be reconstructed accurately using the information of autoradiographs. This can be utilized for the reconstruction of applicator channel in three-dimensional MR images using the registration technique of image fusion. Presently, there are a limited number of literatures which reported about the practice of applicator reconstructions in MR image-based brachytherapy using rigid registration of applicator geometries of CT and MR images. (4,15,16) So far, none of the studies reported about the accuracy of the definition of dwell positions using rigid registration of applicator geometries of CT and MR images in clinic. In this paper, we have attempted a method to minimize the errors in the definition of the applicator geometry and source position on 3D MR scan image in high-dose-rate brachytherapy treatment planning using 3D CT and MR image registration of applicator geometry. We introduced a method also to quantify and report the errors associated with 3D CT and MR image fusion of applicator geometry. A. Images acquisitions and contouring for brachytherapy treatment planning Standard Nucletron CT and MR imaging compatible ring applicator (Nucletron, Veenendal, The Netherlands) of 2.6 cm and 3 cm diameter were used in this image-based intracavitary brachytherapy implants. 
Vienna ring applicator (Nucletron) of 3.0 cm diameter was also used in this study. The lengths of the tandems used in this study are 4.0 cm and 6.0 cm. These implants were done with the help of MicroMaxx portable ultrasound system from Sonosite (Philips Ultrasound Inc., Seattle, WA) for proper positioning of tandem and ring applicator in the uterus and paracervical regions respectively. Abdominal obstetric gynaecological ultrasound probe C11e of 30 cm scan depth and 5-2 MHz ultrasound frequency was used in this image-guided brachytherapy implants. In this study, T 2 weighted turbo spin-echo MR image sequences (TR (spin-echo relaxation time) = 4000 ms, TE (spin-echo elapsed time) = 112 ms) were acquired in 1.5 Tesla MR Scanner (Siemens Magnetom; Siemens AG, Munich, Germany) using a pelvic surface coil. This produces fast spin-echo sequences with 3 mm slice thickness in different orientation of slice acquisition. As a part of EMBRACE protocol of GEC-ESTRO recommendation guideline, (1,2) four MR image sequences consist of paratransverse, parasagittal, and paracoronal images containing the tandem and the whole ring and another transverse MR image sequence were acquired. T 2 weighted transverse MR images were used for the contouring of target volumes in Oncentra MasterPlan treatment planning system (Nucletron). Target volumes and organs at risk (OARs) were contoured on transverse MR image according to GEC-ESTRO recommendation guideline (1,2) with the help of paratransverse, parasagittal, and paracoronal images containing the tandem and the whole ring. The OARs, contoured on the transverse MR images in this study, were sigmoid, bladder, and rectum. Gross target volume (GTV), and high risk and intermediate risk clinical target volumes (HRCTV and IRCTV) were also defined in this study. B. Determination of dwell position in CT and MR compatible ring applicator set Dwell position of HDR brachytherapy source inside the applicators should be coincided with the plan dwell position in the treatment planning system. The accurate definitions of dwell positions inside the applicator can be done using the autoradiography of active sources on a single film with the applicator in the same geometry. Autoradiograph of different type of applicators was taken using extended dose range KODAK EDR2 ready pack film (Eastman KODAK Company, Rochester, NY) to find out the tips and different dwell positions of radiation sources of applicators. The ring applicators and tandems used in this study were attached on this film and exposed with high-dose-rate (HDR) brachytherapy source (Micro Selectron HDR V3 machine; Nucletron) at different dwell positions, starting from the first dwell position. The time of exposure was optimally chosen so as to obtain a fine center of optical density for the source positioning, as well as the lumen of applicator and the surface of the applicator on the film. Then the same film with the attached applicators was exposed on kilovoltage X-ray of 50 kV accelerating voltage and tube current of 120 mAs for 10 to 12 times to demarcate the surface and inner air channel of the applicators. After processing this film on an automatic film processor, it was scanned with a resolution of 600 dpi by an optical density scanner VIDAR VXR-16 (VIDAR Systems Corporation, Herndon, VA) in the import workspace of ECLIPSE treatment planning system (TPS) (Varian Medical System, Palo Alto, CA) and imported as digital image communication (DICOM) format using standard mode of 1:1 scale. 
The center of the optical density due to the exposure by HDR radiation source was determined as the dwell position using the full width at half or tenth of the maximum value (FWHM or FWTM) of CT Hounsfield number depending on the symmetry of optical density profile. On the autoradiograph film of ring applicator, the offset value of the first dwell position of ring applicator from the middle point of the ridge between the entry and tip of air lumens of the ring applicator was measured ( Fig. 1). For the intrauterine tandem applicator, the positions of the first and different dwell positions were defined with respect to tip and size of the catheter (Fig. 1). Two tangents were drawn along the applicator -one at the tip and the other at the next straight portion, nearest to the curvature. Then, the distance between the tip and intersection of the two tangents were noted for the reconstruction of the same applicator in Oncentra Masterplan brachytherapy treatment planning system. C. Applicator reconstruction on 3D image of MRI The applicator geometries in 3D MR image were reconstructed according to CT image, fused on MR image using the rigid registration of applicator geometry. In this image fusion, the contours of tandem applicators were reconstructed separately in both CT and MR transverse images using pearls contouring tools of Oncentra MasterPlan for the preparation of CT and MR image fusion. The tips of the tandem applicators were excluded in the contouring of these tandems. In order to observe the outer surface of ring applicator in MR image, the ring applicator set and cotton used for packing in this intracavitary implants were soaked with Aquason 2000 sonography gel. Even then, the outer surface of the applicator is not seen in every slice. So the clearly observed points of applicator in MR images were used for the tandem applicator contouring in 3D MR image. Rectangular shape necks of the rings were also contoured in both CT and MR images. These reconstructed contours of tandem and ring applicators were fused interactively using rigid registration technique of Oncentra MasterPlan. The precise positioning of applicators fusion was performed by shifting and rotating these applicators of CT and MR 3D image dataset. Then the ring and tandem applicators were reconstructed on MR image using the help of corresponding coregistered CT images by digitizing the catheters on CT image through the spy glass tool and multiplanar reconstruction technique (Fig. 2). The first dwell positions on the applicators were determined in MR images (corresponding to coregistered CT images) with respect to the landmarks of the tips, curvature, and source channels of the ring applicators set using the offset values from the distal digitization point of these applicators from autoradiograph and the blend image between CT and MR images. D. Verification method to report the applicator registration accuracy of CT and MR images Accuracy of applicator reconstruction in CT and MR image fusion were retrospectively analyzed for 34 cases of already done intracavitary brachytherapy application. The absolute value of the deviation of the position of applicator in MR images from the corresponding points in CT images were taken as the errors of image fusion and catheter reconstruction errors. Catheter positions of CT image were taken as the baseline geometry for image fusion. CT and MR image fusion was verified using the following six verification shifts at three different sites. 
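The FWHM-based localization described above can be sketched as follows; the synthetic Gaussian profile and function name are assumptions for illustration, not the scanned film data or the planning-system tools.

```python
# Minimal sketch: locate the dwell position as the midpoint of the full width at half
# maximum (FWHM) of a 1-D optical-density profile along the source channel.
import numpy as np

def fwhm_center(positions, profile, fraction=0.5):
    """Return the midpoint of the region where the profile exceeds fraction*max.
    Use fraction=0.1 for FWTM when the profile is asymmetric, as in the text."""
    profile = np.asarray(profile, dtype=float)
    threshold = fraction * profile.max()
    above = np.where(profile >= threshold)[0]
    return 0.5 * (positions[above[0]] + positions[above[-1]])

x = np.linspace(0, 20, 401)                            # mm along the source channel
profile = np.exp(-((x - 7.1) ** 2) / (2 * 0.8 ** 2))   # synthetic peak centred at 7.1 mm
print(round(fwhm_center(x, profile), 2))               # -> ~7.1
```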
The verification of the first site is done at the level of the ring plane. It consists of three errors: (1) cranio-caudal shift (Cranial Shift) of the ring plane along the tandem axis, (2) antero-posterior shift (AP Shift) perpendicular to the tandem axis on the plane containing the tandem, and (3) lateral shift (Lat Shift) perpendicular to the plane containing the tandem. The second site is the verification at the tip of the tandem. It consists of two errors: (1) antero-posterior shift (AP Shift) perpendicular to the tandem axis on the plane containing the tandem, and (2) lateral shift (Lat Shift) perpendicular to the plane containing the tandem. The third site, at the neck of the ring, is the verification of the error due to the rotation of the ring about the tandem axis. This error can be minimized by reducing the lateral shift of the mean centre of the neck of the ring. The accuracy of applicator reconstruction in this CT and MR fusion technique was verified using the water dummy sources inserted in Vienna ring applicator set implants, as well as the titanium needle holes, in ten cases of such image fusion. The variation of the maximum deviations of the reconstructed catheters from the water dummy sources on the reformatted paratransverse, parasagittal, and paracoronal MR image planes was analyzed (Fig. 2). Rotations of the ring about the tandem were frequently observed, and alignments using the needle holes of the Vienna ring were done repeatedly after the first initial alignment of the rigid registration. Later on, the rectangular-shaped neck of the ring, observed in both CT and MR transverse images, was contoured and used while performing the rigid registration. With this method, the normal CT and MR compatible Nucletron ring applicator set began to be used in our center and the use of water dummy sources was stopped. In the case of CT and MR image fusion without water dummy sources, the projected contours of the tandem and ring applicators of the CT and MR images were analyzed on different reformatted para-axial, parasagittal, and paracoronal MR images. (Fig. 2 caption: panels (1), (2), and (3) show the verifications of the tandem and ring on paracoronal planes using a dummy source, the neck of the ring on a para-axial plane, and the projected contours of the ring applicators reconstructed on CT and MR images, respectively.) The contours of the ring applicator which are clearly visualized on the original paracoronal and parasagittal MR images were copied to the transverse MR image and edited according to the position of the ring on the transverse MR. Then the lateral, antero-posterior, and cranio-caudal shifts of the ring contour of the MR image from those of the CT image were measured, as shown in Fig. 2. Lines passing through the mean center of the tandem were also drawn on the reformatted parasagittal and paracoronal planes at the level of the tip of the tandem. The maximum deviations of these lines from those of the CT image were measured using the spy glass tool of Oncentra MasterPlan for each of the CT and MR image fusion plans. These deviations were analyzed for 24 plans of such image fusions. Similarly, the deviation of the mean center of the rectangular-shaped catheter at the neck of the ring on the reformatted para-axial MR image from that of the CT image was measured (Fig. 2).
E. Impact of image registration errors on DVH
Bladder, rectum, sigmoid, and HRCTV contours of a typical patient of the GEC ESTRO EMBRACE protocol were considered to find out the impact of registration errors on dose-volume histogram parameters in this study.
To quantify the changes in dose-volume histogram parameters due to registration errors in the applicator reconstruction of brachytherapy planning, known errors in the catheter reconstructions have to be introduced in the applicator coordinate system. The coordinate points of the catheters reconstructed in the Oncentra treatment planning system are defined in the MR image coordinate system, which is used as the primary image in image registration. In order to introduce the known errors in the applicator coordinate system, the MR image coordinate system was transformed into the applicator coordinate system using an autorotation program developed in MATLAB software version 7.7 (The MathWorks, Natick, MA), which determines the three rotational angles and the three translational shifts. The equation used in this autorotation program was

\begin{pmatrix} xSt_i \\ ySt_i \\ zSt_i \end{pmatrix} = R(\theta, \phi, \Psi) \begin{pmatrix} x_i - o_x \\ y_i - o_y \\ z_i - o_z \end{pmatrix}   (1)

where R(\theta, \phi, \Psi) is the rotation matrix as a function of θ, ϕ, and Ψ, which are the rotations about the y-, z-, and x-axes, respectively. θ is the rotation of the applicator set about the tandem axis (y). ϕ is the rotation about an axis (z) through the center of the ring on the tandem plane and perpendicular to the tandem axis. Ψ is the rotation about an axis (x) through the center of the ring on the ring plane and perpendicular to the plane containing the tandem applicator. xSt_i, ySt_i, and zSt_i are the ith coordinates of the applicator in the applicator coordinate system, and x_i, y_i, and z_i are the coordinates of the applicator, with origin (o_x, o_y, o_z), in the MR image coordinate system. After the determination of θ, ϕ, and Ψ using three orthogonal coordinate points of the applicator set within a tolerance limit of 0.3 mm, in two steps of the autorotation program consisting of a coarse rotation with 1 mm tolerance and a fine rotation with 0.3 mm tolerance, different systematic errors were introduced in six degrees of freedom (the x-, y-, z-axes and the θ, ϕ, and Ψ rotational angles) in the rigid registration. Then the inverse rotation was performed, using the already determined angles of rotation (θ, ϕ, and Ψ), to transform back into the MR image coordinate system. The equation used in the reverse rotation for the introduction of a translational error (\Delta x, \Delta y, \Delta z) is

\begin{pmatrix} x'_i \\ y'_i \\ z'_i \end{pmatrix} = R^{-1}(\theta, \phi, \Psi) \begin{pmatrix} xSt_i + \Delta x \\ ySt_i + \Delta y \\ zSt_i + \Delta z \end{pmatrix} + \begin{pmatrix} o_x \\ o_y \\ o_z \end{pmatrix}   (2)

For the introduction of a rotational error \Delta\theta about the y-axis, the following equations are used in our program:

\begin{pmatrix} xSt'_i \\ ySt'_i \\ zSt'_i \end{pmatrix} = R_y(\Delta\theta) \begin{pmatrix} xSt_i \\ ySt_i \\ zSt_i \end{pmatrix}   (3)

\begin{pmatrix} x'_i \\ y'_i \\ z'_i \end{pmatrix} = R^{-1}(\theta, \phi, \Psi) \begin{pmatrix} xSt'_i \\ ySt'_i \\ zSt'_i \end{pmatrix} + \begin{pmatrix} o_x \\ o_y \\ o_z \end{pmatrix}   (4)

The errors introduced in the applicator coordinate system ranged from -5 mm to 5 mm in steps of 1 mm along the tandem, vertical, and horizontal axes of the tandem, and the rotational errors (Δθ) ranged from -19.10° to 19.10° in steps of 3.28°. In the case of the rotational errors Δϕ and ΔΨ about the ϕ and Ψ rotational angles, the corresponding rotation matrices about the z- and x-axes were used in place of R_y(Δθ). The changes in the dose to 2 cc, 1 cc, 0.1 cc, and 5 cc volumes of bladder, rectum, and sigmoid (D2cc, D1cc, D0.1cc, and D5cc of bladder, rectum, and sigmoid), the dose to 90% and 98% of the HRCTV volume (D90 and D98), and the volume of HRCTV receiving 90% of the prescribed dose (V90) due to these errors were analyzed for 66 applicator reconstructions to validate the action limit for image registration. Figure 1 shows the autoradiographs of the intracavitary ring applicator set for source position identification. It depicts both the dwell positions of the HDR radiation source and the applicator geometries of the ring and tandem applicators.
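The transform-perturb-transform-back procedure described above can be sketched as follows. This is not the authors' MATLAB autorotation program; the function names, the identity rotation, and the example points are assumptions made for illustration.

```python
# Minimal sketch: introduce a known rotational error about the tandem (y) axis by
# mapping catheter points from MR image coordinates to applicator coordinates,
# rotating by delta_theta, and mapping back. R and the origin are assumed to come
# from a prior rigid registration.
import numpy as np

def rotation_y(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0,       1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def introduce_theta_error(points_mr, R, origin, delta_theta_deg):
    """points_mr: (N,3) catheter points in MR image coordinates; R: 3x3 rotation of
    the MR-to-applicator transform; origin: (3,) applicator origin in MR coordinates.
    Returns the perturbed points back in MR image coordinates."""
    pts_app = (points_mr - origin) @ R.T                # into the applicator frame
    pts_app = pts_app @ rotation_y(delta_theta_deg).T   # rotate about the tandem axis
    return pts_app @ R + origin                         # back to MR frame (R^-1 = R^T)

pts = np.array([[10.0, 20.0, 5.0], [12.0, 40.0, 5.0]])
print(introduce_theta_error(pts, np.eye(3), np.zeros(3), 3.28))
```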
The distances of the first dwell position from the center of the ridge between the air lumens at the tip of different ring applicators were 7.1 mm and 6.7 mm, respectively, for normal Nucletron rings of 3 cm and 2.6 cm diameter, and offset of 9.5 mm was found for Vienna Nucletron ring of 3.0 cm diameter. The center of the ridge was specially chosen for the digitization of the ring catheter to minimize the uncertainty of digitization of the tips of applicators. Similarly the distances of the first dwell positions of tandem applicators from the same centers of the ridge between the inner air lumen and the outer surface of different tandems at the tip were found as 6.9 mm for both 4 cm and 6 cm long tandems. The distance of the cranial surface of ring applicator from the source channel were found as 6.2 mm, 6.1 mm, and 6.2 mm for normal Nucletron ring of 3.0 cm and 2.6 cm diameters and Vienna Nucletron ring of 3.0 cm diameter ( Table 1). The HDR miniature source was positioned at the center of the inner air trend of normal Nucletron ring applicator of 3.0 cm diameter, whereas the position of radio-opaque dummy was shifted toward the outer wall of the inner trend by 0.11 cm. III. RESULTS & DISCUSSION Dummy markers to simulate the position of the source in 3D CT image were used by Pelloski et al. (9) in the multiplanar reconstruction of brachytherapy applicators in BrachyVision TPS (Varian Medical System). But in our experience, the determination of the tips and the source positions with dummy marker were still a challenging task and we found uncertainties due to the inability of accurate determinations of tips by the limited slice spacing in CT data acquisition and source position using dummy marker. Reconstruction of catheter was done easily using MPR method with the help of the rotation of the axes of 3D CT image, zoom in, and distance measuring tools of Oncentra MasterPlan TPS and the corresponding autoradiograph informations instead of dummy markers. In this applicator reconstruction, paracoronal and parasagittal original MR images were not used even though these images generate good clear boundary of applicator. This is due to the inability of the acquisition of the MR image along the tandem axis and the movement of patient during longer data acquisition time between two different sequences for accurate localization of source channel. In our experience, maximum errors were found along the cranio-caudal direction and the rotation of ring about tandem axis while using rigid registration of CT MR image fusion for applicator reconstruction. Haack et al. (14) also reported four variations of the interobserver reconstruction accuracy of catheters in MR image-based intracavitary brachytherapy planning using BrachyVision TPS. The variations were reported in terms of antero-posterior, lateral, and longitudinal translational shifts and rotation of the ring about tandem axis. In our experience the errors must be reported for a complete set of applicator implants and at different sites of applicator rather than for individual applicator and single site. This can be done by verifying at three different sites of this ring applicator set instead of reporting individual applicators in their study (Fig. 2). The variation of six different shifts of reconstructed catheter in MR image using coregistered CT image from the water-filled dummy sources inside the applicators of MR images for ten cases of reconstructions are depicted in Fig. 3(a). 
Table 2 also shows the statistics of the errors (Cranial Shift, AP Shift, and Lat Shift at the ring level) of the reconstructed catheter from the position of the water dummy sources at the level of the ring on the reformatted paracoronal and parasagittal planes. The maximum deviation of 2.2 mm (1 case) on the paracoronal/parasagittal planes was found along the cranial direction, as compared with the lateral shift on the paracoronal plane and the AP shift on the parasagittal plane. In the case of rotation of the ring applicator about the tandem axis, the maximum Lat Shift of the neck of the ring from the water dummy sources was 1.1 mm (3.15°). Maximum values of the AP Shift on the parasagittal plane and the Lat Shift on the paracoronal plane of the MR image from the CT image were also below 0.5 mm. The maximum error was found along the cranial direction, with averages of 1.03 mm (SD = 0.72 mm) and 0.45 mm (SD = 0.46 mm) for the verification of catheter reconstructions with and without water-filled dummy sources, respectively (Fig. 3 and Table 2). Haack et al. (14) also showed the same maximum variations in the direction of the longitudinal axis of the tandem and in the direction of ring rotation about the tandem axis. Two cases of fusion were beyond the action limit of 1.5 mm, whereas the remaining registrations were within 1.5 mm (Fig. 3(a)). This occurred due to an unnoticed shift of the tandem applicator of the MR image from the CT image along the tandem axis during applicator registration. Table 2 also shows the verification of the reconstructed catheters of 24 cases of MR image-based brachytherapy plans without water dummy sources. Only one case of catheter reconstruction had a Cranial Shift of the ring contour greater than 1.5 mm (Fig. 3(b)). This contour-based method was used to quantify the error along the cranio-caudal, lateral, and antero-posterior shifts of the ring from those of the CT image in this study. Such a procedure cannot be performed in routine practice because of the time-consuming steps of copying the contours of the ring applicator and editing them on the transverse MR images, but the gross error can be verified interactively in the ECS coordinate system of the Oncentra MasterPlan TPS without this contour. Maximum values of the AP Shift on the parasagittal plane and the Lat Shift on the paracoronal plane at the level of the ring were 0.6 mm and 0.9 mm, respectively. At the tip of the tandem, the maximum values of the AP Shift on the parasagittal plane and the Lat Shift on the paracoronal plane were both 1.0 mm, whereas the Lat Shift at the neck of the ring, resulting from the rotation of the ring about the tandem axis, was 0.7 mm. This rotation was corrected later in our applicator registration by aligning the contours of the rectangular-shaped neck of the ring (Fig. 2), which reduced the average shift to 0.27 mm from the 0.46 mm obtained with the water dummy sources (Fig. 3 and Table 2). In our center, we analyzed the impact of registration errors on DVH parameters for a patient using the autorotation program incorporating Eqs. (1) to (4) above. In this analysis, multiple reconstructions were performed for a single application by introducing different systematic errors in the applicator coordinate system (Fig. 4) to find out the effects of systematic errors on the DVH parameters. 
Figure 5 shows the variations of the DVH parameters from those of the originally reconstructed ring and tandem applicators for translational errors ranging from −5 mm to 5 mm in steps of 1 mm along the horizontal and vertical axes and the axis parallel to the tandem, and for rotational errors ranging from −19.1° to 19.1° in steps of 3.82° about the tandem y-axis, z-axis, and x-axis. In Fig. 5(a), the maximum percent changes in dose to the 0.1 cc (D0.1cc), 1 cc (D1cc), and 2 cc (D2cc) volumes of the bladder due to the introduction of errors from −5 mm to 5 mm along the x-axis, relative to the same dose-volume histogram (DVH) parameters of the originally reconstructed applicators, were 26.6%, 12.23%, and 9.19%, respectively. D0.1cc was very sensitive to the errors introduced. The percent change of dose to the 5 cc volume of the bladder (D5cc) varied from −7.78% to 5.58%. All the DVH parameters of the bladder varied linearly with respect to the systematic errors introduced along the x-, y-, and z-axes. A maximum variation of 42.54% for D0.1cc was found for the 5 mm systematic error introduced along the z-axis, followed, in decreasing order, by the errors introduced along the x- and y-axes. The variations of all the DVH parameters due to the errors along the y-axis ranged from −7.6% to 2.66%. When the rotational errors were introduced along θ, ϕ, and Ψ, all the DVH parameters varied in a nonlinear pattern, except for D1cc, D2cc, and D5cc under rotational errors along θ (Fig. 5(a)). Rotational errors along ϕ and Ψ resulted in larger percent changes of all the DVH parameters of all OARs and of the HRCTV, as compared with the changes due to the other systematic errors (Fig. 6). Since the applicator registrations of this study were done using the tandem contours of the CT and MR images, the maximum values of both the AP Shift and the Lat Shift at the tip of the tandem and at the ring level were within 1 mm. The corresponding rotational errors along ϕ and Ψ for the 6 cm long tandem were therefore within 1°, and hence the maximum impacts on the DVH parameters for the Ψ and ϕ rotational errors were within 5% and 2%, respectively. The maximum (average) absolute percent changes of dose per mm of D0.1cc of the bladder due to systematic errors along the lateral, cranio-caudal, and antero-posterior directions were 6.62% (3.54%), 3.79% (0.86%), and 10.11% (6.73%), respectively, and that due to the rotation (θ) about the tandem was 5.59% (4.08%) per mm (Fig. 6(a)). In the case of D2cc, the maximum (average) absolute changes of dose per mm due to the errors along the lateral, cranio-caudal, and antero-posterior directions were 2.99% (1.22%), 1.41% (0.73%), and 5.75% (4.12%), respectively, whereas the error of rotation (θ) about the tandem resulted in a maximum (average) value of 2.21% (1.04%) (Fig. 6(b)). Figure 5(b) shows the linear variation of the rectum DVH parameters (D0.1cc, D1cc, D2cc, and D5cc) due to systematic translational errors (along the cranio-caudal and antero-posterior directions) and rotational errors (along θ, ϕ, and Ψ) with respect to those of the original reconstruction. Maximum ranges of variation from −18.59% to 26.6% and from −16.11% to 21.89% were found for D0.1cc of the rectum due to cranio-caudal and antero-posterior errors, respectively, whereas those due to the lateral error were from −0.61% to 2.90%. 
When the rotational errors were introduced along θ, ϕ, and Ψ, the errors along θ and ϕ both produced approximately linear changes of all the DVH parameters (within variation ranges of 0% to 4.04% and −1.45% to 6.8%, respectively, for D2cc), whereas all the DVH parameters varied nonlinearly under the error along Ψ (rotation about the x-axis) (Fig. 5(b)). The maximum (average) absolute changes of dose per mm for D0.1cc and D2cc were greatest, at 3.69% (2.53%) and 2.82% (1.75%), respectively, for the systematic error along the cranio-caudal direction, as compared with the other systematic errors, except the ϕ and Ψ rotational errors (Fig. 6(a)). In the case of the sigmoid (Fig. 5(c)), all the DVH parameters varied linearly with the systematic errors along the lateral direction (−4.56% to 10.47%), the cranio-caudal direction (−4.70% to 5.28%), the antero-posterior direction (−16.10% to 27.65%), and the θ rotation (−2.05% to 2.05%). D1cc and D5cc also varied linearly with the ϕ rotational error, within the ranges of −12.5% to 42.95% and −8.74% to 19.01%, respectively, whereas D0.1cc of the sigmoid under the lateral error and all the DVH parameters under the rotational error along Ψ varied in a nonlinear pattern. D0.1cc varied by up to 49.58% from that of the original reconstruction geometry of the applicator under the lateral errors. The maximum (average) absolute percent change of dose per mm due to the lateral error was 12.36% (4.09%) per mm for D0.1cc, whereas those of the other DVH parameters were within 3.28% (2.19%) per mm, except those due to the errors along ϕ and Ψ (Fig. 6). All the DVH parameters of the sigmoid were very sensitive to the AP Shift and the ϕ and Ψ rotational errors. Most of the DVH parameters of the HRCTV varied in a nonlinear pattern, as shown in Fig. 5(d). A maximum variation range from −27.49% to 0.49% was found for the AP shift error, as compared with the ranges of −10.6% to 0.51% and −7.65% to 6.49% for the Lat Shift and cranio-caudal shift errors, respectively. In the case of rotational errors, the ϕ and Ψ rotational errors produced variations in the DVH parameters from −27.93% to 1.35% and from −28.39% to 11.77%, respectively. The absolute percent changes of dose per mm for HRCTV D98 and D90 were within maximum values of 8.39% per mm and 8.37% per mm, and average values of 6.14% per mm and 5.51% per mm, respectively. HRCTV V90 was essentially unaffected by the systematic errors introduced in this study; its maximum change was within 2.09%. The changes in DVH parameters due to interobserver variation of applicator reconstruction using rigid registration methods were also reported by Tanderup et al. (15) They reported changes of the DVH parameters of the bladder and rectum of 5%-6% per mm displacement of the applicator in the antero-posterior direction. For the other directions and the other DVH parameters, the changes were less than 4% per mm. Our data agree well: the mean changes of dose per mm for the bladder and rectum were within 4% per mm for the lateral and cranio-caudal shifts. We found a very large change of 10.92% per mm and 12.16% per mm for the ϕ and Ψ rotational errors in all the DVH parameters of all OARs (D0.1cc, D1cc, D2cc, and D5cc) and of the HRCTV (D90 and D98), as compared with the other translational and rotational errors. It is reasonable that Tanderup et al. reported only the error along θ as a rotational error; since the alignment was done using the tandem contours of the CT and MR images, rotational errors along ϕ and Ψ did not happen frequently. 
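The sensitivity metric used throughout this section, the maximum (average) absolute percent change of dose per mm, follows directly from the DVH values recomputed at each introduced error. A minimal sketch of that calculation is given below; the dose values are invented for illustration and do not reproduce the study's data.

import numpy as np

# Hypothetical example: D0.1cc re-evaluated after introducing shifts of -5 ... +5 mm
# along one axis (values are illustrative, not the study's data).
shifts_mm = np.arange(-5, 6, 1)
d01cc_gy  = np.array([5.1, 5.3, 5.6, 5.9, 6.2, 6.5, 6.9, 7.3, 7.8, 8.3, 8.9])
reference = d01cc_gy[shifts_mm == 0][0]          # DVH value of the original reconstruction

pct_change = 100.0 * (d01cc_gy - reference) / reference
nonzero = shifts_mm != 0
pct_per_mm = np.abs(pct_change[nonzero] / shifts_mm[nonzero])  # absolute % change per mm

print(f"max  = {pct_per_mm.max():.2f} % per mm")
print(f"mean = {pct_per_mm.mean():.2f} % per mm")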
However, a small applicator registration error of 1 mm at the tip of the tandem results in a 1° error in the ϕ and Ψ rotations about an origin at the center of the ring, and hence a large change in all the DVH parameters can occur (Figs. 5 and 6). This can be taken care of by checking and reporting the accuracy of the applicator registration at the tip along the AP Shift and Lat Shift within a 1.0 mm action limit. The Cranial Shift mainly affects the DVH parameters of the rectum (Fig. 6), as compared with the bladder, sigmoid, and HRCTV. Action limits of 1.0 mm to 2.0 mm would increase the variation of rectal dose from that of the original reconstruction by 6.7% to 10.9%. The limitation of this study is that the impact of registration errors on the changes of DVH parameters was considered for only a single patient; these impacts would be better quantified with a large population of patient data. IV. CONCLUSIONS To our knowledge, this study presents a new procedure for reporting the registration errors of CT-MR fusion using a rigid registration method for applicator reconstruction, and for analyzing the impact of registration errors on DVH parameters in image-based brachytherapy planning. Image-based brachytherapy planning requires an accurate definition of the dwell positions of the radiation source with respect to the tips of the applicators on the MR images. Choosing the ridge between the lumens at the tip of the ring applicator as the reference point for dwell position definition decreased the uncertainty of catheter digitization. The applicator geometries of micro-Selectron HDR brachytherapy can be successfully reconstructed in the treatment planning system using the rigid registration of the applicators of CT and MR images together with the applicator geometry information from autoradiographs. Registering the applicators of the CT and MR images using the contours of the tandem and the neck of the ring decreased the rotational error about the tandem axis. The reconstruction accuracy of the applicators was achieved within the action limit of 1.5 mm in this CT-MR image fusion technique. We recommend a verification method for CT-MR image fusion based on applicator registration, consisting of six verification steps at three different sites of the ring applicator set, for a reliable fusion. Rotational errors along the ϕ and Ψ rotation angles, which produced large changes in the DVH parameters, can be controlled by checking the AP Shift and Lat Shift at the tip of the tandem. The maximum shift was found along the tandem axis in this technique.
9,186.8
2014-03-01T00:00:00.000
[ "Medicine", "Engineering" ]
Distribution of landforms and buried sedimentary deposits during the growth of the Aceh River delta (Sumatra, Indonesia) ABSTRACT Fluvial and coastal landforms are indicative of landscape river delta evolution over time and provide clues for understanding coastal adjustments to sea-level and fluvial dynamics fluctuations, tectonic displacements, and extreme waves. We have mapped the surface and sub-surface footprints of fluvial and coastal geomorphological features in the Aceh River delta, northern Sumatra, using imagery dataset, vertical facies logging and helicopter electromagnetic surveys. The result is a geomorphological map at the scale of 1:75.000 which outlines the main features of the deltaic plain, including rivers, tidal and buried channels, fluvial levees, beach ridges, swales, tidal flats and lagoons. We compare their spatial distribution to the geometry of buried sediment bodies, revealed by boreholes and resistivity maps. Buried channel belts and floodplain deposits document the former locations of the distributary channels of the Aceh River. Coastal-parallel beach ridges evidence 7–8 km of asymmetric delta progradation since the mid-Holocene sea-level high stand. The Aceh River delta, in Sumatra, was, by far, the delta most severely affected by the 2004 Indian Ocean tsunami, one of the largest tsunami recorded in human history (Doocy et al., 2007;Levy & Gopalakrishnan, 2005).In this specific context, scientific studies have been conducted to understand its recent dynamics (decennial timescales), in order to address societal (Scawthorn et al., 2006), sanitary (Prasetiyawan et al., 2006;Redwood-Campbell & Riddez, 2006) and environmental (Chapkanski, Brocard, Lavigne, Meilianda, et al., 2022;McLeod et al., 2010;Paris et al., 2009;Siemon et al., 2007) issues after the tsunami.Less attention has been paid to its longterm (centennial to millennial timescales) evolution, and how such evolution influences the current response of the delta to tsunami disturbances (Umitsu et al., 2007). This study aims at delineating of the spatial distribution of various fluvial and coastal landforms in the Aceh River delta in order to: (i) provide a geomorphological map of the Aceh River delta at the scale of 1:75.000(refer to the Main Map), (ii) provide a framework to reconstruct its evolution since the mid-Holocene sea-level high stand and (iii) identify the prevailing processes that control its geomorphological evolution and sedimentary architecture.We first provide a detailed mapping of the delta surface and then compare the spatial distribution of landforms to the geometry of buried sediment bodies revealed by resistivity maps and boreholes.Borehole data and resistivity maps are used to assess the correlation between sediment grain size and resistivity values.We finally briefly outline the Holocene evolution of the delta. 
General context The Aceh River delta is located five degrees north of the equator in the northernmost part of Sumatra Island (Indonesia, Figure 1(A)).It lies between 0 and ∼+20 m above sea level (Figure 1(B)).Based on the Ulee Lhue tide station (https://srgi.big.go.id), tides along the delta coasts are semi-diurnal with minimum neap, mean and maximum Spring tide range of 0.4, 1.2 and 2.1 m, respectively.Waves along the shoreface track from north-west from October to March, and from north-east from April to September, inducing a long-shore drift toward the north-east and southwest, respectively (Diposaptono & Mano, 1998).The delta is fed mostly by the Aceh River in its central part, by small intermittent karst-sourced rivers along its southwest border, and by the minor Angan River and its tributaries along its northeastern border (Figure 1(C)).The Aceh River, which is partly diverted into the Alue Naga Floodway Canal a few kilometers from its mouth, is the major freshwater source of the delta, with a catchment of 1568 km 2 (Syvitski et al., 2014).It originates in the Barisan Mountains (Figure 1(B)), and then runs through a terraced valley for ∼130 km (Montagne, 1963) before reaching the delta plain ∼20 km from the sea.The hydrological regime of the Aceh River is influenced by equatorial convective and monsoonal rainfall, with mean and maximum flood discharge at the Banda Aceh gauging station (period of 9 years) of 176 m 3 /s and 1700 m 3 /s, respectively (Syvitski et al., 2014).Climate is tropical with annual temperatures ranging within 25-32°C, and high humidity (80-90%).Mean annual rainfall in the delta is ∼1600 mm/y but reaches up to 5000 mm/y in the headwaters (Barisan Mountains, Muis et al., 2016;Ploethner & Siemon, 2006).The driest season stretches from June to July, with less than 100 mm/month.The rainy season stretches from September to March, and peaks in November and December at 300 mm/month. The Aceh River valley is cross-cut by two major left-lateral strike-slip faults, the Aceh fault in the southwest, and the Seulimeum fault in the northeast (Figure 1(B)).They represent the north-westernmost segments of the Great Sumatra fault (Fernández-Blanco et al., 2016;Genrich et al., 2000;Ito et al., 2012;Tabei et al., 2015).The Aceh River valley is bordered to the west by steep mountains made of massive Cretaceous limestones, and to the east by andesitic tuffs and flows, and to the south by Pliocene-Pleistocene fossiliferous tuffaceous sandstone and mixed lithic conglomerates (refer to the Main Map in Supplementary Material; Bennett et al., 1981;Culshaw et al., 1979).The Aceh River delta has filled the drowned lower reaches of this valley during the Holocene (Culshaw et al., 1979;Farr & Djaeni, 1975).The thickness of Quaternary sediment fill reaches 180 m in the center part of the Aceh delta (Culshaw et al., 1979). Material and methods Landforms and sub-surface sedimentary deposits of the Aceh River delta were mapped using a GIS-based approach (Figure 2).The dataset (Table 1) consists of (i) topographic and geological maps, (ii) aerial photographs, (iii) satellite images and (iv) digital elevation models (DEM), summarized, treated and georeferenced by Chapkanski, Brocard, Lavigne, Tricot, et al. 
(2022).In addition, this study integrates (v) bathymetric data, (vi) sub-surface vertical facies logging and (vii) helicopter electromagnetic data.To ensure the accurate overlay of each data layer, the entire dataset has been projected in WGS 84-UTM-Zone 46N.Uncertainties on georeferencing accuracy, digitizing and local hydrological context are provided in Chapkanski, Brocard, Lavigne, Tricot, et al. (2022). Imagery pretreatments and feature recognitions Sentinel-2 Multi-Spectral images at 10 m resolution (Table 1) were processed to enhance the spatial contrasts in visible and near-infrared bands generated by lateral variations in soil moisture and hydric stress on vegetation (Huete, 2004;Lillesand et al., 2015).These variations help identifying buried paleochannels (Giacomelli et al., 2018).Combinations of red, green and blue (RGB normal color composite) and near-infrared, red and green (as RGB color-infrared composite) were chosen as they best reveal the landforms.Bathymetry was interpolated from sounding points (Table 1; Meilianda et al., 2010) by inverse distance weighted interpolation to generate bathymetric isobaths.Hillshading of the 2 m resolution DEM was used to improve the delineation of beach ridges and dunes across the delta plain.ArcGIS Hydrology tools and 8 m resolution DEM were used to generate the drainage network and delineate the watersheds of the rivers that feed the delta.Geomorphological features were manually drawn in GIS environment in order to avoid the misclassification of automatic procedures (Brandolini et al., 2020).The resulting shapefiles were assigned to the following geomorphological groups: fluvial, coastal, and marine (Figures 3 and 4), following an architectural classification approach (Ainsworth et al., 2011;Nanson et al., 2013;Vakarelov & Ainsworth, 2013).Sinuosity ratios of the palaeochannels were calculated following Malavoi and Bravard (2010) and compared to the sinuosity ratios of the sub-contemporary (1884-2019) Aceh River channels, as reported in Chapkanski, Brocard, Lavigne, Tricot, et al. (2022). Airborne geophysical data A helicopter-borne electromagnetic (HEM) survey of the Aceh delta was flown during the summer (from the 23th of August to the 12th of September) of 2005 by the German Federal Institute for Geoscience and Natural Resources (BGR), as part of the German-Indonesian cooperation project HELP ACEH (HELicopter Project ACEH, Siemon, Röttger, et al., 2006).The project aimed finding suitable locations for drilling new freshwater wells after the Boxing Day tsunami, because electrical conductivity correlates positively with water mineralization, and therefore helps assessing the landward penetration of the coastal saline wedge (Ploethner & Siemon, 2006).Geophysical data processing is detailed in Siemon, Röttger, et al. (2006).Here, we used resistivity maps derived from 1D inversion models of these data at 3, 4, 5, 10, 15 and 20 m below ground level (BGL, Siemon, Ploethner, et al., 2006;Siemon et al., 2007;Siemon & Steuer, 2011).We use the resistivity maps to gain insight into the underground architecture of the delta, using the difference in resistivity between floodplains (relatively lower resistivity values of finer sediments) and sand bodies (relatively higher resistivity values of coarser sediments) such as river channels and beach ridges. 
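The sinuosity ratios referred to in the imagery analysis reduce to the ratio between the along-channel length and the straight-line distance between the channel endpoints. The short sketch below illustrates that calculation on digitized centreline vertices; the coordinates are invented, and using the straight endpoint-to-endpoint distance (rather than a curved valley axis) is a simplifying assumption of this example.

import numpy as np

def sinuosity(xy):
    # Sinuosity = along-channel length / straight-line distance between endpoints.
    xy = np.asarray(xy, dtype=float)
    seg = np.diff(xy, axis=0)
    channel_len = np.hypot(seg[:, 0], seg[:, 1]).sum()
    straight_len = np.hypot(*(xy[-1] - xy[0]))
    return channel_len / straight_len

# Hypothetical UTM vertices (m) digitized along a palaeochannel centreline.
palaeochannel = [(0, 0), (250, 180), (420, 90), (700, 260), (950, 210), (1200, 400)]
print(f"sinuosity index = {sinuosity(palaeochannel):.2f}")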
Sub-surface stratigraphy database We dug 50 hand-auger boreholes as deep as 4 m into the Aceh plain in 2018 and 2019, targeting geomorphological elements such as palaeochannels, beach ridges, flood plains and swales, in order to provide a sub-surface validation of surface landforms.To log the boreholes, sediments were characterized in the field, based on sediment color, hand texture, hardness, the presence of oxidation, shells, diffuse organic matter, and plant remains (Figure 5).Stratigraphic information published in former surveys and reports (Culshaw et al., 1979;Farr & Djaeni, 1975;Iwaco, 1993) was collected, and standardized to be compared and combined with the 2018-2019 boreholes.The entire dataset comprises 890 stratigraphic units from 206 core logs, up to 20 m long (see locations on Figure 1(C)). Correspondence between resistivity maps and borehole stratigraphic data The resistivity at 3, 4, 5, 8, 10 and 20 m was extracted in GIS from the resistivity map at each borehole location.Borehole logs were then compared to the extracted resistivity values.Stratigraphic data relative to grain size were grouped into three classes: clay-silt, sand and clay-to-sand.Resistivity values are expected to vary with grain size (Siemon et al., 2007).Therefore, the two data matrices (resistivity values and grain size) were combined and subjected to descriptive statistics using the software IBM SPSS statistics 20.0 (Armonk, NY, USA).Minimum, first 1st and 3rd quartiles, median and mean values were calculated to quantify the spread of resistivity values in clay-silt, sand and clay to sand units at different depths below the ground surface (Figure 6).The results were used to evaluate down-depth resistivity attenuation and to select values used for the discretization of resistivity values on corresponding resistivity maps.Despite the low resolution of the maps and saltwater intrusions in the lower delta, sands in logs were found to consistently exhibit higher resistivity than silty-clays (Figure 6).We therefore assumed that resistant bodies are composed of coarser sediment and we used this property to map buried sandy fluvial levees and beach ridges hosting freshwater (Figure 5).Similar observations were made by Siemon et al. (2007) demonstrating that the current Aceh River channel spatially correlates with relatively high values in resistivity maps.That also coincides with observations in other survey areas in Germany where low resistivity values correlate with clay-silt deposits in former lakes and higher resistivity rates with sandy to gravel deposits (Siemon et al., 2020). General architecture of delta landforms The lower delta is a ∼14-km wide coastal strip.It is made of tidal flats, lagoons, aquaculture ponds, and tidal channels (Figure 3; see the Geomorphological map in Supplementary Material).Upstream, the delta plain surfaces rise up to 20 m above sea level. West and south-west of the Aceh River, we found palaeochannel belts and associated natural levee deposits that run parallel to the west delta margin, and are separated from the margin by eroded terraces rising ∼ +30 m above the delta plain, and by small alluvial fans.Irregularly spaced beach and chenier ridges, sub-parallel to the current coastline, were identified, across the delta plain, as far as 8 km inland (Figure 3).East of the Aceh River in the upper delta, we identified pre-Holocene strandplain standing ∼+20 m above the Holocene delta plain and incised by a dense network of small valleys (Figure 3). Fluvial channels and levees. 
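The correspondence analysis described above is, in practice, a grouping of the co-located resistivity values by texture class and depth, followed by descriptive statistics. The sketch below reproduces that kind of summary with pandas rather than SPSS; the column names and values are illustrative assumptions.

import pandas as pd

# Hypothetical table: one row per (borehole, depth), with the texture logged in the core
# and the resistivity extracted from the HEM map at that depth (ohm.m).
df = pd.DataFrame({
    "texture":     ["clay-silt", "sand", "clay-silt", "sand", "clay-to-sand", "sand"],
    "depth_m":     [3, 3, 5, 5, 10, 10],
    "resistivity": [8.5, 42.0, 6.1, 35.5, 15.2, 28.7],
})

summary = (df.groupby(["texture", "depth_m"])["resistivity"]
             .describe(percentiles=[0.25, 0.5, 0.75]))
print(summary[["min", "25%", "50%", "mean", "75%", "max"]])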
The Aceh River currently flows along the median axis of the delta (Figure 3).Its course is highly sinuous and the meanders have migrated rapidly during the twentieth Century (refer to the Main Map in Supplementary Material).The sinuosity ratio increased during twentieth century from 1.75 to 1.91 before dropping to 1.36 in 1996 (Chapkanski, Brocard, Lavigne, Tricot, et al., 2022) when extensive containment dykes were constructed, cutting several meanders (Diposaptono & Mano, 1998).Wide palaeochannels, already abandoned by the twentieth century, are found to the north-east of the Aceh River, between the Aceh River and the Alue Naga Canal (Figure 3).They were partly excavated and reused during the construction of the derivation canal.East of the Alue Naga Canal, coast-parallel paleochannels are found between the sand ridges.Their layout is reminiscent of presentday tidal channels.They connect to the Agan River, which collects water along eastern delta plain, at the toe of the volcanoclastic hills, and represents the only fluvial system east of the Aceh River that manages to cross the strand plain and reach the sea.We identified four belts of buried paleochannels and associated levees to the west of the Aceh River (Figure 3).They form a series of ribbons, parallel to the modern river, that stretch over the entire delta plain.The westernmost belt lies close to the delta west margin and is highly sinuous (sinuosity index of 1.68).The other three belts show similar sinuosity but tend to take a more northerly course in the downstream direction. The boreholes dug in these paleochannels expose similar stratigraphic superpositions (Figure 5 Beach ridges and swales Prominent (ridgelines 10-15 m amsl) sandy beach ridges form a belt that extends from the present-day coastline to 3-6 km inland.They are separated by swales filled by silty sands (Figures 3 and 5-T2).predominance of the waves tracking from the northwest (Diposaptono & Mano, 1998).Sediment drift generates coastal ridges that are concave seaward in the western delta and convex seaward in the eastern delta.Beach ridges between the Aceh River and Alue Naga mouths are truncated as a result of the erosion of a formerly more prominent river mouth which was surrounded by more cuspate shorelines, a trait typical of quickly prograding, wave-dominated deltas (Anthony, 2015).The delta lobe has since been smoothed by the coastal retrogradation (Chapkanski, Brocard, Lavigne, Tricot, et al., 2022). Submarine delta The most significant morphological feature of the Aceh River prodelta is an offlap break quasi parallel to the coastline at the transition between a relatively flat topset (∼0.3°) and a steeper foreset (∼2°).North of the contemporary river mouth, the offlap break lies ∼3 km from the coastline, at a depth of ∼26 m.In the eastern delta the slope break is located farther offshore, ∼6 km from the coastline.Concave lobate forms are observed offshore and may represent the remnants of former mouths of the Aceh River. HEM results and interpretations The distribution of resistivity at depth is reported on Figures 6, 7 and 8. 
Low resistivity values (blue) correspond to seawater-saturated sediments of the lower delta.Sand and coarser-grained sediment filled by freshwater are represented by orange to brown pixels.Low to medium resistivity (green pixels) corresponds to silty-clay or/and coarser sediment filled by brackish water (Siemon et al., 2007;Siemon & Steuer, 2011).The general decrease of resistivity values with depths is probably induced by higher groundwater mineralization at depth, or/and by biases introduced by the comparison of point information in the boreholes with volume information in the electromagnetic data. From 20 to 15 m bgl, a high resistivity belt along the valley axis seems to be composed of sand and gravel deposited by the Aceh River in its upper delta.By contrast, in the middle delta, resistive patches are aligned crosswise to the river course and may correspond to the first beach ridges to have formed after the maximum flooding of the Aceh River valley. From 10 to 5 m bgl, some resistive patches are aligned parallel to the valley margins.They are elongate and display some sinuosity consistent with paleochannels and levees of the Aceh River.Four major SE-NW-striking belts can be clearly seen.In the middle delta, a ∼3 km wide belt of high resistivity lies parallel to the coastline and coincides with the beach ridges of the eastern delta.This belt is composed of sands that evidence the sub-surface continuation of the eastern delta ridges towards the southwest, below urbanized areas of the delta.Compared to eastern delta ridges, western delta ridges appear more discontinuous, and interrupted by the former courses of the Aceh River across the western delta.Sporadically, coast-parallel unidentified ridges are preserved, allowing us to track the beach or chenier ridges across the western delta.Intervening conductive layers are made of former inter-ridge lagoons or tidal flats and are nowadays covered by flat silty-clay floodplains. Starting at 5-4 m bgl and above, the paleo-channel belts become fragmented, indicating that they were no longer fed by the Aceh River.In the lower delta, lobate, resistive bodies may correspond to former delta lobes beyond the former river mouths. 
Processes and evolution of the Aceh River delta At the beginning of the Holocene, the sea level of the Andaman Sea (Scheffers et al., 2012;Tjia, 1996) and Malacca strait (Geyh et al., 1979) was about -70 m below the current sea level.A marine high stand as high as + ∼ 3 m has been documented during the mid-Holocene ∼5500 years ago; it was followed by a gradual lowering to the current sea level.Based on the landward extent of the delta plain, it appears that this sea-level high stand led to a Holocene marine transgression that flooded the Aceh valley as far as 6 km inland (Culshaw et al., 1979).The resistive sandy bodies buried 20-15 m below ground may belong to a transgressive tract deposited before sea level stabilized, sheltering the flooded valley farther inland from swale, thus allowing the upper valley to be filled by lagoons, swamps, or tidal flats.The Aceh River then covered this area with channel belts, fluvial levees and silty-clay to peaty floodplain deposits, as evidenced by resistivity maps above 10 m bgl.As delta progradation proceeded, the Aceh River expanded over the central part of the upper delta, feeding a series of now-buried channels, which may represent successive courses of the Aceh River or The presence of abandoned meander belts in the western part of the delta (Figure 3) implies that the Aceh River has flown for some time across the western delta where ridges are interrupted.The downstream terminations of the meander belts most likely correspond to former mouths of the Aceh River, which occupied the estuary, east from Ulee Lheue where beach ridges show curved shape (Figures 1 and 3).Layouts of the Aceh and Angan River at different time-spans (refer to the Main Map in Supplementary Material), show that, since the end of nineteenth century, both rivers have maintained the same courses across the delta (Chapkanski, Brocard, Lavigne, Tricot, et al., 2022).Until the end of twentieth century, the mouth of the Aceh River progressively retreated before migrating westward at the beginning of 21th century, after the destruction of the up-drift ridge.Coast-parallel beach ridges across a strandplain as large as 7-8 km require a substantial amount of delta progradation since the mid-Holocene.As beach ridges are formed from available sediments along the coast, dispersed by the swash of waves, and that inter-ridge swales lie within the intertidal zone, the height and morphology of these features may reflect sea-level change, as well as the succession of depositional and erosive phases (Otvos, 2000).Low, widely spaced beach ridges form under relatively high sediment influx and rapid vertical aggradation, by contrast to tall, closely spaced ridges, which tend to form under reduced sedimentation rates (Taylor & Stone, 1996).The observed decrease in ridge and swale height seaward may indicate a lowering of sea level (Taylor & Stone, 1996).Therefore, the first cluster (∼5-6 km from the coastline; Figure 5-T2) of high, wide and imbricate beach ridges may reflect a relatively slow rate of delta progradation.The progressive seaward decrease in the elevation of the intervening swales may track the post-high stands lowering of sea-level since the mid-Holocene.The lower-standing and more widely spaced character of the more recent beach ridge may result from high rate of sedimentation and relatively faster delta progradation.Accelerating delta progradation may also be responsible for the arrangement of the beach ridges located around the mouth of the Aceh River (Figure 3).Based on the 
truncated aspect of ridges and their curvature increasing seaward, the cuspate shape and asymmetric character of delta would have been more pronounced in the last visible stage of the delta progradation (Figure 3).Coastal retrogradation has prevailed at least since the late of nineteenth century (∼100 m of coastal retreat during the century prior to 2004 Indian Ocean Tsunami, and ∼120 m of land loss following the tsunami events; Chapkanski, Brocard, Lavigne, Tricot, et al., 2022). Conclusion Mapping of landforms and buried sediment deposits in the Holocene tract of the Aceh River delta was performed using historical maps, aerial photographs, satellite images, digital elevation models and airborne electromagnetic data.Particular attention was paid to the depiction of fluvial levees, palaeochannels, tidal channels, lagoons, tidal flat, swales and beach ridges such as to provide keys for deciphering the evolution of the fluvial-deltaic system.The morphology of fluvial levees and beach ridges were documented through topographic transects.The airborne electromagnetic data was correlated with extensive in-situ stratigraphic data in order to evaluate the down-depth attenuation of silty-clay and sand layers.The results were used to provide relevant discretization of resistivity maps at specific depth layers.The landform layouts revealed some degree of consistency with the underlying buried sediments structures.The results were compiled to produce a geomorphological map of the Holocene delta of the Aceh River.The results were interpreted, and we proposed a brief discussion concerning the lines of the delta evolution in relation to the sealevel trends.Research involving radiocarbon dates, sedimentological analyses and provenance tracing is currently in progress to establish robust chronological and morpho-sedimentary frame of the Holocene evolution of the Aceh River delta. Software Pretreatments, feature recognition and digitizing were conducted using ArcGis Software 10.3 (ESRI, California, USA).Figures and final maps were improved using Illustrator 23.0.3 (Adobe Systems Incorporated, California, USA).Descriptive statistics were conducted using the software IBM SPSS statistics 20.0 (Armonk, NY, USA).cooperation with the Universitas Syiah Kuala.This study was funded (i) by the French Ministry of Foreign Affairs, through a Partenariat Hubert Curien -Nusantara grants (attributed to F. Lavigne in 2017 and to J-Ph.Goiran in 2019), (ii) by the University of Paris 1 Pantheon-Sorbonne, through an International Mobility Grant awarded to S. Chapkanski and STRATI project, (iii) by the French Ministry of Higher Education, Research and Innovation, through the Institut Universitaire de France (IUF) attributed to F. Lavigne and by the Laboratory of Physical Geography (LGP).Additional financial support was provided by LabEx DynamiTe (ANR-11-LABX-0046), as a part of the 'Investissements d'Avenir' program. Figure 1 . Figure 1.Location map of the Aceh fluvial-deltaic environment at different spatial scales.(A) The Sumatra Island, (B), the Aceh catchment and (C) the Aceh delta.Bathymetry, altitude, hydrological network and major toponyms are reported. Figure 2 . Figure 2. Schematic demonstration of the methodological proceeding. Figure 3 . Figure 3. Simplified version of the geomorphological map of the Aceh River delta.Field observations and snapshot orientation are displayed (Figure 4).Borehole locations and associated topographic cross-sections (T1: SW -NE and T2: SE -NW) are shown (Figure 5). 
Imbrication of beach ridges occur ∼3-4 km inland.The height of the beach ridges and swales decreases seaward (Figure5-T2).A more localized, pronounced drop in height occurs ∼ 3 km from the shoreline, were the ridge height decreases to 3-5 m amsl.Seaward, swales then tend to be filled by shelly and clayey sandy tidal flats 0-5 m amsl (Figure5-T2).Long-shore drift along the Aceh delta coast is predominantly to the north-east, owing to the Figure 4 . Figure 4. Fieldwork observations.Locations of snapshots are shown in Figure 3. 1: White beach consisting of biogenic carbonate sands; 2: Paddy fields in the foreground and steep limestone hills in the background; 3: Dry paddy fields in former palaeochannel; 4: Tidal flats of the western delta; 5: Beach ridge and sand dunes in the eastern delta; 6: Lagoons and tidal flats downstream of the Angan River; 7: Pre-Holocene layered terraces; 8: Angan River. Figure 5 . Figure 5. Topographic cross-sections and borehole stratigraphy.For better readability, boreholes are presented with independent scale frames from the topographic cross-sections.Resistivity values at 2, 4, 6 and 8 m below the ground level were extracted at specific core locations and reported along the boreholes. Figure 6 . Figure 6.Boxplots of resistivity value distributions for specific textures and at different depths below the ground level. Figure 7 . Figure 7. Resistivity maps at 20, 15 and 10 m below ground level and associated interpretations of resistivity bodies.Texture in boreholes at each specific depth are shown. Figure 8 . Figure 8. Resistivity maps at 5, 4 and 3 m below ground level and associated interpretations of resistivity bodies.Texture in boreholes at each specific depth is shown. Table 1 . Overview of the database used in the study. EOS, Earth Observatory of Singapore; TDMRC, Tsunami Disaster Mitigation Research Center; INAGI, Indonesian Agency of Geospatial Information; USGS, U.S.Geological Survey; and BGR, Federal Institute for Geosciences and Natural Resources.
5,759.6
2022-11-20T00:00:00.000
[ "Geography", "Environmental Science", "Geology" ]
Approach of design influence on air flow rate through the heat exchangers The heat exchangers are responsible for the thermal equilibrium of an internal combustion engine, but it is important to emphasize that this property is directly influenced by the air mass flow rate which passes through the heat exchanger, given a specific coolant flux and heat exchanger geometry. The present work aims to show the influence of vehicle components, such as air deflectors, fan, radiator shroud, grilles and gaskets, on the air mass flow rate passing through the heat exchangers, and its consequences on the ITD (Inlet Temperature Difference) estimation. This influence was quantified using CFD (Computational Fluid Dynamics) and 1-D simulation analysis of the vehicle under-hood. The simulations were performed in the top-speed condition of the vehicle, which corresponds to full throttle of the engine and 50 kW of rejected heat. The analysis shows that an increase of the air flux was possible by reducing leakage and redirecting the air flux. Introduction A vehicle model can be offered in different versions, simply by changing the radius of the wheels, the types of tires, the set of grilles and bumpers, hoods and headlights, for example. These changes can define luxury, sportive, family, off-road or economy versions. Furthermore, it is important to highlight the possibility of different powertrain setups, so that each one of these configurations has a significant influence on the thermal management of the vehicle and its use in a specific climate zone (hot and temperate lands). Such operating premises require some care in the sizing of the cooling package. For better sizing practice, it is recommended to use CAE resources like CFD and 1-D analysis whenever components like radiators, condensers, fans, shrouds, air flow deflectors, thermostatic valves, expansion tanks and gaskets are being chosen. The applicability of the simulation (CFD) allows forecasting the air mass flow rate which passes through the radiator, one of the variables in the formula of the heat gained by the air, which can be expressed as:

Q̇_a = ṁ_a · c_pa · (T_ao − T_ai)

The performance of a heat exchanger can be determined by examining the heat loss and gain that take place between its working fluids. The heat lost by the coolant can be expressed as:

Q̇_c = ṁ_c · c_pc · (T_ci − T_co)

The heat loss and gain are calculated by software which uses the concept of a block diagram (1-D simulation). This software uses the following input data: a) coolant pump map; b) fan map; c) the lift curve of the thermostatic valve; d) the map of rejected heat by the engine in different conditions of operation (engine speed, torque and power); e) the geometry of the hoses; f) the curves of backpressure of the engine jacket; g) the curves of backpressure of the radiator (air side); h) the curves of backpressure of the radiator (coolant side); i) condenser parameters, similar to those of the radiator. The 1-D simulation compiles these data, searching for or estimating the operating point of the vehicle. The parameter ITD is defined through the ε-NTU method of sizing of the heat exchangers, which can be expressed as Q̇ = ε · C_min · ITD, with ITD = T_ci − T_ai. The effectiveness is given by:

ε = Q̇ / Q̇_max

where Q̇_max is the maximum possible heat transfer rate. The ITD parameter is usually employed to specify whether a vehicle and its cooling system are able to be used in a given condition. For the top-speed condition, for example, with 45 °C of ambient temperature and 30% of relative humidity, the recommended value of ITD is 75 K. More details of the ε-NTU method can be seen in Kanefsky et al. (1999). 
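As an illustration of how these relations are combined to estimate ITD, the short sketch below computes the air-side temperature rise and the ITD from a rejected heat load, an air mass flow rate of the kind obtained from CFD, and an assumed radiator effectiveness; all numerical values and the constant-property assumption are illustrative, not results of the present work.

# Minimal ITD estimate from the epsilon-NTU relations above (illustrative values).
q_rejected = 50e3          # W, heat rejected by the engine (top-speed condition)
m_dot_air  = 1.8           # kg/s, air mass flow through the radiator (e.g., from CFD)
m_dot_cool = 2.5           # kg/s, coolant mass flow
cp_air, cp_cool = 1007.0, 3600.0   # J/(kg.K), assumed specific heats
effectiveness = 0.55               # assumed radiator effectiveness

c_air  = m_dot_air * cp_air        # air-side capacity rate (W/K)
c_cool = m_dot_cool * cp_cool      # coolant-side capacity rate (W/K)
c_min  = min(c_air, c_cool)

air_temp_rise = q_rejected / c_air          # T_ao - T_ai
itd = q_rejected / (effectiveness * c_min)  # T_ci - T_ai required to reject q

print(f"air temperature rise = {air_temp_rise:.1f} K")
print(f"ITD                  = {itd:.1f} K")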
The reduction of engineering costs using simulation is widely proved and represents more one attractive point for its application, mainly during the phase of development avoiding costs of experimental tests and prototypes. CFD simulation In principle it is necessary to know what vehicle (model and version) will be simulated and the condition which the vehicle will be studied.In our example it is been studied a vehicle in top-speed (185km/h, with 45°C and 40% of relative humidity). After, it is prepared the geometry of the vehicle, in order to have a representative virtual model. In this phase of the process of simulation, the geometry imported from CAD, (independently of the software of CAD) are inappropriate for CFD simulation. In some case these geometric problems are increased when the model are exported using ".stl" file extension. Normally these geometries show the following type of surface problems which can be repaired manually by the own software of CFD:  Free edges. Zip edges or fill holes with new triangles. Intersecting surfaces. Intersect then delete surfaces. Imprint surfaces and edges onto target surfaces/bodies. Surfaces can be split and combined to create required boundaries. The process to clean a surface could expend a lot of time, if this procedure is usually performed manually, therefore it is recommended to use some resources to repair surface which is available in the package of the software.The software used for the proposed simulation is the Star CCM+ which has a resource called wrapping. The wrapping tool tries to solve every problem found among the cells of the virtual model. In the fig. 1, extracted from Mouffouk (2014), it is possible to notice the difference between surfaces after and before the process of wrapping. Each one of the components of the vehicle receives a specific setup of surface mesh condition, with a specific surface size.This procedure is adopted just for the surficial mesh of the vehicle which solves mainly problems such as:  Pierced faces;  Free edges;  Non-manifold edges;  Non-manifold vertices.The next step is to prepare virtual wind tunnel where the wrapped surface of the vehicle will be placed.In other to avoid the influence of the tunnel on the airflow, it is adopted the dimensions of the wind tunnel indicated in the fig.2. It is recommended to take care with the distance between the floor of the vehicle and the ground of the wind tunnel. Part of the tires should be removed, giving the effect to mold the tire on the ground.This effect is obtained making a subtraction between the vehicle and the wind tunnel.In the fig.3, it is possible to observe this effect. Fig. 3 -Subtraction between the tire and the wind tunnel ground The same command subtract is used to insert the radiators and the condenser (porous medium) in the wrapped surface of the vehicle. This procedure is adopted in order to avoid deformation of the heat exchangers geometry during the process of wrapping which could generate error during the phase of volumetric mesh generation. In the fig.4, it is possible to see the wind tunnel with the heat exchangers The fan interface is other important point to take care.Two surfaces are created in order to represent the difference of pressure between the upstream and downstream. The distance between these surfaces is 0.1mm and during the process of wrapping an initial interface is created (using command repeating).During the process to generate a volumetric mesh the same setup is kept. The fig. 
5 shows the model of the fan; at the end of the volumetric mesh process, the upstream and downstream surfaces are joined. The prismatic layer can follow what is recommended by Mouffouk (2014): "For wall functions, each wall-adjacent cell's centroid should be located within the log-law layer (30 < y+ < 300). A y+ value close to the lower bound y+ = 30 is most desirable". The volumetric mesh is generated using a tool called remeshing, with a polyhedral configuration; the wrapped surface is reworked before this process. The present flow field is mathematically described by continuity and incompressibility via the Reynolds-Averaged Navier-Stokes (RANS) equations. To predict the complex turbulent flow, the Realizable k-ε turbulence model and a high-Re wall treatment were adopted. For this simulation, the space discretization can be seen in fig. 6. A polyhedral mesh with approximately 17 million cells was used. In order to increase accuracy and reduce error during processing, some volumes of the domain are refined by reducing the size of the cells. It is desirable to represent the intake airflow of the air filter and the exhaust gases in the tail pipe; in both cases it is possible to represent a datum plane as an inflow for the intake system and an outflow for the exhaust system. The values of these mass flow rates depend on the engine condition; at top speed, these values represent full throttle of the engine. CFD setup The setup of the heat exchangers obeys Darcy's equation (Eq. 04), with a porous inertial resistance (P_i) and a porous viscous resistance (P_v):

dp/dx = P_v · v + P_i · v² (Eq. 04)

where dp is the pressure difference between upstream and downstream, dx is the thickness of the porous medium, and v is the velocity of the fluid. The coefficients of Darcy's equation are input into the software and have produced satisfactory results. The characteristic curve of the fan is given by the relation dp/dv, where dp is the pressure difference and dv is the velocity difference; it is fitted by a polynomial function of 2nd degree, shown in fig. 7 as pressure rise versus air volumetric flow rate (m³/s). The surfaces of the fan (upstream and downstream) should be put in rotation. For both cases (heat exchangers and fan), the values are obtained from the suppliers. A summary of the boundary conditions used for the flow field is given in table 1. The air dynamic viscosity was defined as constant, with a value of 1.85×10⁻⁵ Pa·s. For the reference density, 1.18 kg/m³ was adopted. 1-D Analysis Basically, the 1-D analysis follows the scheme shown in fig. 8 below. The characteristic maps of the radiator are obtained from the supplier, similar to the information used in the CFD analysis, but in this case there is also a heat exchange map of the radiator. The main information obtained via CFD is the air mass flow rate, which is input into the software so that it can estimate the values of ITD. This analysis is able to obtain ITD data following the procedure shown in the introduction of the present work. Although a simplified cooling circuit was used, it is possible to extend the model using all the possible variables that could influence the ITD estimation, as can be seen in fig. 9. Case #3: in the third case, a gasket was added between the radiator and the front-end (fig. 12). 
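Both the porous resistances of Eq. 04 and the fan characteristic curve are obtained in practice by fitting supplier or test-bench pressure data. The sketch below illustrates the two fits; the data points are invented for illustration and are not the supplier data used in this work.

import numpy as np

# Hypothetical radiator pressure-drop test data: velocity (m/s) vs dp/dx (Pa/m).
v = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
dp_dx = np.array([95.0, 230.0, 640.0, 1230.0, 2000.0])

# Fit dp/dx = Pv*v + Pi*v**2 (the form of Eq. 04) by least squares.
A = np.column_stack([v, v**2])
(pv, pi), *_ = np.linalg.lstsq(A, dp_dx, rcond=None)
print(f"porous viscous resistance  Pv = {pv:.1f} Pa.s/m^2")
print(f"porous inertial resistance Pi = {pi:.1f} Pa.s^2/m^3")

# Hypothetical fan curve: volumetric flow (m^3/s) vs pressure rise (Pa),
# fitted with the 2nd-degree polynomial used as input to the fan interface.
q = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
dp_fan = np.array([420.0, 380.0, 300.0, 190.0, 50.0])
fan_poly = np.polyfit(q, dp_fan, 2)
print("fan curve coefficients (a2, a1, a0):", np.round(fan_poly, 1))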
In table 2, it is possible to observe the increase of the air mass flow rate which passes through the radiator after the action shown for each case. Additionally, the air flux through the fan and the condenser is shown. Our model has an auxiliary radiator used in an intercooler system, but the focus of the ITD calculation is the main radiator. ITD estimation Based on the results obtained in table 2, the value of ITD is estimated. In Table 3, it is possible to see the reduction of ITD after the action shown in each case. Conclusion The values shown in Table 4 are compiled in fig. 14. The values of ITD were reduced by the actions; this can be of interest when the analysis also considers the costs of components such as the radiator and a deflector. If the values of ITD are on the borderline of acceptable values, it is preferable to reduce the leakage of the air flow rate rather than to invest in a heat exchanger with a larger heat-exchange capacity or effectiveness. [2] Abiodun Matthew Amao. Mathematical model for Darcy Forchheimer flow with applications to well performance analysis. Master of Science Thesis, Texas Tech University, August 2007. Q̇_a = overall heat transfer of the air (kW); ṁ_a = air mass flow rate (kg/s); c_pa = specific heat capacity of the air (kJ/kg·K); T_ai = temperature of the air inlet (K); T_ao = temperature of the air outlet (K); Q̇_c = overall heat transfer of the coolant (kW); ṁ_c = coolant mass flow rate (kg/s); c_pc = specific heat capacity of the coolant (kJ/kg·K); T_ci = temperature of the coolant inlet (K); T_co = temperature of the coolant outlet (K); C_h = capacity rate of the coolant side; C_c = capacity rate of the air side. Figure 1 - The difference between the imported CAD surface after the wrapper and before the wrapper [6]. Figure 2 - Dimensions of the virtual wind tunnel. Figure 4 - Heat exchangers in the tunnel. Figure 5 - Fan model. The setup of the wind tunnel should consider indicating the walls, inlet, outlet, the floor velocity and the prismatic layer (boundary layer); the main configuration of the wind tunnel is indicated in fig. 6. Figure 7 - Characteristic curve of the fan. Figure 12 - Upper air flow deflector in the front-end and radiator set: view (a) perspective; (b) front.
2,994.6
2016-09-01T00:00:00.000
[ "Engineering" ]
A Simplified Positive-Sense-RNA Virus Construction Approach That Enhances Analysis Throughput Here we present an approach that advances the throughput of a genetic analysis of a positive-sense RNA virus by simplifying virus construction. It enabled comprehensive dissection of a complex, multigene phenotype through rapid derivation of a large number of chimeric viruses and construction of a mutant library directly from a virus pool. The versatility of the approach described here expands the applicability of diverse genetic approaches to study these viruses. A n important genetic tool to study animal, positive-sense RNA viruses is the infectious clone, a form of viral genome cloned on a plasmid that can be propagated in Escherichia coli and manipulated for reverse-genetics analysis. While the tool has yielded important insights into viral infection and is critical for generating vaccine candidates, the instability of certain viral genome sequences in E. coli (especially in the cases of most flaviviruses [1] and coronaviruses [2] and of some picornaviruses [3], togaviruses [4], and pestiviruses [5]) has limited the scale of genetic analyses for these viruses. The instability is still not well understood. Consequently, each mutant construct must be extensively sequenced to verify the absence of adventitious mutations, potentially resulting in months of efforts to establish one such clone. Though there are many cloning approaches to mitigate the instability (6)(7)(8)(9)(10)(11), a large-scale genetic analysis relying on cloning methods still requires significant efforts to establish the mutant or chimeric viruses. In addition, the cloning methods are not adept at capturing the diversity of a virus pool for manipulation and screening, hindering the application of available genetic approaches to study viruses. Here, we describe a simple virus construction approach which could bypass cloning and which eliminated its limitations in a large-scale genetic analysis of dengue viruses (DENV). The DENV genome could be reconstructed from multiple PCR products (amplified from cDNA) in a single DNA assembly reaction. Unlike the conventional DNA ligation method, DNA assembly by Gibson assembly (12), which was used in this technique, stitches DNA through overlapping, homologous sequences at the ends of DNA fragments, thus negating any need to introduce foreign sequences to accommodate ligation and providing the flexibility of being able to join DNA at any locations on the viral genome. To produce virus from DNA, viral PCR products, amplified from viral cDNA using high-fidelity polymerase with proofreading activity, were assembled onto an expression plasmid with a cytomegalovirus (CMV) promoter that precisely initiated transcription on the virus genome sequence and a terminator, such as hepatitis D virus (HDV) ribozyme, that accurately generated 3= end of the transcribed viral RNA (13,14) (Fig. 1A). Cells and virus. 293T cells were maintained in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% heat-inactivated fetal calf serum, 1 mM glutamine, 1 mM sodium pyruvate, 20 mM HEPES, and high glucose (4.5 g/liter). C6/36 cells were maintained at 28°C in L15 supplemented with 10% tryptose phosphate broth, 1 mM glutamine, and 10% heat-inactivated fetal calf serum. Vero cells were maintained at 37°C with 5% CO 2 and 80% humidity in MEM supplemented with 1 mM glutamine and 10% fetal-calf serum. 
DENV4-H241 and DENV2-16681 were cultured in C6/36 cells that were maintained in L15 supplemented with 10% tryptose phosphate broth, 1 mM glutamine, and 1.5% heat-inactivated fetal calf serum after infection. All media used in this study were also supplemented with penicillin/streptomycin. DV4 strain v17 was generated by serial passaging of DENV4-H241 in Vero cells for 17 passages. DV4 strain 4.1 was isolated in the form of an infectious clone constructed from DENV4-H241 virus stock. The construct of DENV4 strain 4.1 was cloned and propagated in E. coli XL10 Gold strain (Agilent) cultured at 22°C in LB medium. Construction of the expression plasmid. A CMV promoter was obtained by PCR amplification from pcDNA 3.1(+) Hygro (Invitrogen). The HDV ribozyme and SV40 PA (sequence based on the work by Varnavski et al. [14]) were synthesized and cloned on a high-copy-number vector with flanking NheI and BamHI sites (Invitrogen). The CMV promoter, HDV ribozyme, and SV40 PA were assembled onto a pUC19 backbone first with In-fusion HD (Clontech) and later by conventional restriction ligation (NheI and BamHI sites). The plasmid construct was verified by sequencing. cDNA synthesis and PCR amplification. Viral RNA was extracted from culture media with the QIAamp viral RNA extraction kit according to the manufacturer's protocol (Qiagen). cDNA synthesis of the virus was carried out with the Superscript III first-strand synthesis kit (Invitrogen) according to Chin-inmanu et al. (15). The primers for cDNA synthesis were 10601-10621-rv-dv4 and 10639-10661-rv-dv2 for DENV4 and DENV2, respectively. Primer sequences are shown in Table 1. PCR products for assembly of viral constructs were generated with high-fidelity DNA polymerases (Phusion [NEB] and KAPA HiFi [KAPA Bioscience]) according to the manufacturers' protocols. All the PCR products were cleaned up using a PCR cleanup kit from Invitrogen before use in DNA assembly and sequencing. PCR products of the expression vector (for assembly with viral PCR products) were amplified with hCMV-rv and HDV-fw-dv4 (for DENV4) or HDV-fw-dv2 (for DENV2). Virus construction and production. Cleaned-up PCR products of both the expression vector and viral cDNA were assembled together in a Gibson enzyme mix according to the one-step isothermal assembly protocol detailed by Gibson et al. (12). For this study, the overlaps between fragments to be assembled were in the range of 20 to 40 bp. The formula of the enzyme mix for overlaps between 20 and 150 bp was used for the assembly (12). PCR products of the expression vector (0.02 to 0.04 pmol) were assembled with 0.04 to 0.08 pmol of each PCR product of the viral genome in the enzyme mix to make up 20 µl of the assembly reaction mixture. The Gibson ligation reaction mixture (7 to 20 µl; approximately 40 to 120 ng of mixed PCR products) was diluted in Opti-MEM I medium and mixed with Lipofectamine 2000 according to the protocol provided by the manufacturer (Invitrogen). 293T cells in either 35-mm dishes or 24-well plates were washed twice with Opti-MEM before addition of the DNA-Lipofectamine complex solution to the cells. Transfection was performed for 4 h at 37°C before the medium was changed to DMEM supplemented with 10% heat-inactivated fetal calf serum, 1 mM glutamine, 1 mM sodium pyruvate, 20 mM HEPES, and high glucose (4.5 g/liter). The cultured medium was harvested and replenished on the second and third days after transfection. For mapping and library construction experiments (see Fig. 3 and 4), ISF-1 (Biochrom) was used instead of DMEM.
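As a back-of-the-envelope illustration of the molar ratios quoted above (0.02 to 0.04 pmol of vector versus 0.04 to 0.08 pmol of each genome fragment), the sketch below converts a measured DNA mass into picomoles using the standard approximation of about 650 Da per base pair of double-stranded DNA. The fragment lengths and masses are hypothetical placeholders, not values from the study.

```python
# Rough ng-to-pmol conversion for double-stranded DNA fragments
# (assumes ~650 Da per base pair; all inputs are illustrative).

def pmol_from_ng(mass_ng: float, length_bp: int) -> float:
    return mass_ng * 1000.0 / (length_bp * 650.0)

# Hypothetical example: a 3.2 kb vector PCR product and a 1.5 kb genome fragment
vector_pmol = pmol_from_ng(60.0, 3200)     # ~0.029 pmol
fragment_pmol = pmol_from_ng(55.0, 1500)   # ~0.056 pmol

print(f"vector: {vector_pmol:.3f} pmol, fragment: {fragment_pmol:.3f} pmol")
```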
Harvested media were clarified of debris and dead cells by centrifugation at 3,000 × g and 4°C for 5 min. The clarified medium was stored at −70°C for subsequent infection, virus titration, and viral RNA extraction for sequencing. Virus titration by focus-forming assay. Vero cells (~90% confluence) in 96-well plates were used for titration. Infection was carried out at 37°C for 3 h before the cells were overlaid with MEM supplemented with 2% fetal bovine serum (FBS) and 1.5% carboxymethyl cellulose. Infected cells were incubated at 37°C for 3 days before fixation with 3.7% formaldehyde in 1× phosphate-buffered saline (PBS) and permeabilization with 2% Triton X-100 in 1× PBS. Staining was performed with 4G2 as the primary antibody, horseradish peroxidase (HRP)-conjugated anti-mouse IgG as the secondary antibody, and DAB (3,3′-diaminobenzidine) as the chromogenic substrate. Focus quantitation. Dried, stained virus titer plates were scanned and digitized with a KS ELISPOT reader (Carl Zeiss). Well images were extracted in Photoshop (Adobe). Foci in each well image were characterized in ImageJ (16). Well images were converted to binary form to separate foci from background. Then, the areas of foci in the binary image were quantitated (as the number of pixels) using the "Analyze Particles" function. Sequencing of virus. DNA sequencing of virus was performed with PCR products derived from viral cDNA. PCR amplification for sequencing was performed with either Phusion polymerase (NEB) or Accuprime high-fidelity Taq polymerase (Invitrogen). To control for contamination by assembled DNA from the transfection reaction, another cDNA synthesis reaction with RNase A (Fermentas) was also set up. PCR amplification of the cDNA plus RNase A did not yield specific PCR products, showing that the PCR products were derived from RNA (Fig. 1C). The sequence chromatograms were analyzed and displayed using the 4Peaks program (Mekentosj). Growth curve comparison. At least three independent experiments were performed for each virus tested for growth curves. The recovered viruses from Gibson assembly using DENV4-H241, DENV2-16681, and DENV4-v17 were compared against the original control viruses, and in the case of DENV2-16681, we also compared growth with that of virus produced from an infectious clone (a gift from N. Sittisombut). DENV2-16681 from the infectious clone, which had been derived from the transfection of capped in vitro transcribed viral RNA into C6/36 (17), was expanded in C6/36 before use. To perform growth curve comparisons, confluent Vero cells in 24-well plates were infected at a multiplicity of infection (MOI) of 0.25 (DENV2-16681 and DENV4-v17) or 0.01 (DENV4-H241) in 500 µl MEM supplemented with 2% heat-inactivated fetal calf serum. Infection was carried out for 2 h at 37°C with 5% CO2. After infection, the cells were washed three times with 1 ml plain MEM. The infected cells were then supplemented with 1 ml MEM supplemented with 2% heat-inactivated fetal calf serum. The medium from infected cells was collected for virus titration every 24 to 26 h. Genetic mapping. Eleven PCR products derived from strain 4.1 were generated with the set of primers used for strain v17. Another set of PCR products that cover two genes from both strains was also constructed to facilitate assembly using the same set of primers.
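Referring back to the focus quantitation procedure above (binarizing each well image and measuring focus areas with ImageJ's "Analyze Particles"), the following is a minimal scikit-image sketch of an equivalent workflow. It is an illustrative re-implementation under assumed file names and thresholding choices, not the exact ImageJ processing used in the study.

```python
# Illustrative focus quantitation: threshold a well image to binary,
# label connected components (foci), and report their areas in pixels.
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

img = rgb2gray(imread("well_A1.png"))       # hypothetical scanned well image (RGB)
binary = img < threshold_otsu(img)          # dark DAB-stained foci on a light background
labels = label(binary)
areas = [r.area for r in regionprops(labels) if r.area >= 5]  # discard tiny specks

median_area = sorted(areas)[len(areas) // 2] if areas else 0
print(f"{len(areas)} foci, median area = {median_area} px")
```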
For E, the pairs E-dv4-start-fw plus midE-dv4-1587-1610-rv (E1527) and midE-dv4-1587-1610-fw plus E-dv4-end-rv (E1690) were used. For prM, prM-dv4-start-fw plus M-dv4-start-rv (prM550) and M-dv4-start-fw plus E-dv4-start-rv (M847) were used. Virus library construction. To diversify the codon that encodes the amino acid specified at position E1690, the E gene was reconstituted from two PCR products generated by two pairs of primers: E-dv4-start-fw plus E1690-random-dv4-rv and post-E1690-fw plus E-dv4-end-rv. These two PCR products were incorporated in the assembly reaction to construct a diversified E gene for the viruses. Recovery of dengue viruses. The recovered viruses resembled the original virus stocks (Fig. 1B). The recovery depended on the intactness of the assembled DNA, as either excluding a genome segment or treating the DNA with DNase before transfection inhibited virus production (data not shown). The viruses could be produced from as many as 11 viral PCR products (Fig. 1B, strain v17). DNA sequencing of the recovered DENV4-v17 showed sequence heterogeneities at two positions, as observed in the original DENV4-v17 stock used to derive its template cDNA, while further culture of DENV4-v17 in C6/36 cells, a mosquito cell line usually used for DENV isolation and recovery, lost the NS4B mutations (Fig. 1C). The heterogeneities in focus size and sequence of the recovered viruses demonstrated the ability of this technique to retain a pool of virus mutants. In addition to similar focus characteristics, the recovered viruses also possessed growth curves similar to those of either the original viruses used as PCR templates or, in the case of DENV2-16681, virus derived from an infectious clone (Fig. 2A to C). In addition to recovery of a heterogeneous viral pool, Gibson assembly could be used for efficient construction of mutant viruses. To date, over 100 mutant DENVs (in addition to the ones presented here) have been constructed by this approach. Sequence verification by the Sanger method (10 viruses fully sequenced and the rest sequenced on the mutant genes and the assembled sites) confirmed that the desired mutations were obtained in all cases. A comparison between the foci of a set of mutant viruses generated from Gibson assembly and from a sequence-verified infectious clone also showed similar phenotypes (Fig. 2D). Convenient shuffling of viral genome segments for characterization of mutations. The ability to conveniently and accurately construct a virus from a set of multiple PCR products or DNA fragments, with each representing a viral gene or genetic element (as shown for strain v17 in Fig. 1B), would greatly facilitate genetic mapping. It enables convenient derivation of numerous chimeric viruses by shuffling a set of PCR products or DNA fragments of genes/loci of two viruses. Instead of having to verify the DNA sequence of each full-length chimeric clone, as required by the infectious-clone approach, the shuffling requires sequencing of only two sets of viral DNA templates, greatly cutting down sequence verification in a large-scale mapping effort. In addition, the ease of recombining different virus strains offered by this scheme would provide a powerful capability that is not natural for single-genome RNA viruses but is indispensable for forward genetics of many organisms and viruses with segmented genomes. To demonstrate the utility of this technique in mapping, we "crossed" two dengue strains with large (strain v17) and small (strain 4.1) foci by shuffling PCR products of their genes or loci to produce chimeric viruses.
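To illustrate the scale advantage of segment shuffling described above, the sketch below enumerates all single-segment swaps between two parental strains whose genomes are represented as 11 ordered fragments. The segment names are placeholders and the enumeration illustrates only the combinatorics, not the exact panel built in the study.

```python
# Enumerate single-segment chimeras between two parental strains,
# each represented as 11 ordered genome fragments (names are illustrative).
SEGMENTS = ["5UTR-C", "prM", "E", "NS1", "NS2A", "NS2B",
            "NS3", "NS4A", "NS4B", "NS5", "3UTR"]

def single_swaps(background: str, donor: str):
    """Yield (label, composition) for every chimera that takes one segment
    from `donor` and the rest from `background`."""
    for i, seg in enumerate(SEGMENTS):
        composition = [background] * len(SEGMENTS)
        composition[i] = donor
        yield f"{background}+{donor}:{seg}", composition

chimeras = list(single_swaps("4.1", "v17")) + list(single_swaps("v17", "4.1"))
print(f"{len(chimeras)} single-segment chimeras")   # 22 in total
```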
The viruses produced directly from transfected 293T cells were directly used for phenotype characterization to minimize the effects from random PCR errors that could be enriched or amplified during subsequent virus culture to expand the virus stocks. We characterized the foci of progeny viruses to determine which genetic differences between the two ( Table 2) conferred their phenotypes. Focus size is an indicator of how well a virus replicates. A genetic variant that changes focus size can affect the replication of the virus. The mapping was done in three stages. First, we characterized the chimeric viruses derived from single-gene swaps to identify target genes (Fig. 3A and B). Replacement of either E or NS4B in v17 with the sequences from 4.1 caused a dramatic reduction in focus size (Fig. 3B). The reverse experiment, where single segments from v17 were moved into the 4.1 background, suggested a role for E which increased focus size but not to the level seen in v17 (Fig. 3A). To characterize further elements contributing to the focus sizes of v17 and 4.1, we went on to make and characterize a panel of v17 gene combinations inserted into 4.1 that could retrieve the phenotype. We found that the combination of prM, E, NS4B, and NS5 of v17 could reconstitute the large v17 foci (Fig. 3C). Since there are two nonsynonymous differences between 4.1 and v17 in the E and prM sequences each (E at positions E1527 and E1690; prM at positions prM550 and prM847) ( Table 2), we tested which of them could account for the effect of those two genes. We constructed additional 18 chimeric 4.1 viruses with single or combined changes at these positions. The focus sizes of a subset of these viruses (Fig. 3D, asterisks) suggested that the v17 mutations at position prM550 and E1690 could replace the prM and E of v17, respectively, in reconstituting v17 foci (Fig. 3D). Construction of a dengue virus library. With PCR products as the building blocks for viral infectious DNA, the approach can exploit various PCR techniques (error-prone PCR, DNA shuffling, and PCR with degenerate primers) to diversify virus sequences to produce mutant libraries. Screening and sorting of the libraries provide powerful methods to study the genetic basis of a phenotype. We created a virus library using a degenerate primer to randomize an amino acid at position E1690 (Fig. 4A). The mapping in Fig. 3 showed that the amino acid at E1690 has a strong influence on the focus size. Randomizing the amino acid at this position will generate a DENV library that can be used to study how the property of this critical amino acid affects focus size. A DENV library was created by combining PCR products of prM, E (randomized at E1690), and NS4B from strain v17 and the rest from strain 4.1. Sequencing of the recovered viruses showed the scrambling of the sequence at the target position (Fig. 4B, E1690). The sequence chromatograms of prM847 (Fig. 4B, M) and NS4B7016 (Fig. 4B, NS4B) showed heterogeneity, as observed with the original DENV4-v17 stock (Fig. 1D). The focus assay of the DENV library indicated the presence of small-focus viruses that were not present when the amino acid at E1690 was not randomized in the same genetic background (Fig. 4C). Thus, by assembling the scrambled PCR products with those derived from heterogeneous viral cDNAs, this technique could directly diversify an existing pool of virus mutants. 
This capability is essential in directed-evolution experiments, where the mutant pool selected from each round is directly diversified for the next round of selection. DISCUSSION The ease of assembling PCR products by Gibson assembly greatly simplifies genetic engineering of viral genomes. Traditional DNA ligation relies on complementary sticky ends generated by cutting DNA with restriction enzymes. The use of restriction enzymes has limited DNA assembly from multiple PCR products, as adding restriction sites must not interfere with the functions of the sequences at the joints. The restriction sites must also be unique to specifically ligate DNA. Seamless DNA assembly techniques, such as Gibson assembly (12), In-fusion (Agilent) (18,19), SLIC (20), and SLiCE (21), could solve this problem by generating sticky ends on any linear DNA. Subsequent annealing of complementary single-stranded DNA overhangs and ligation of annealed fragments joins the DNA in a specific manner. Similar seamless assembly, such as CPEC (22) and SHA (23), relies on the annealing of cDNA strands at the ends to join DNA fragments during PCR amplification. In principle, any of these seamless DNA assembly techniques may be substituted for Gibson assembly in the virus construction approach presented here. Very recently, CPEC was applied to construct West Nile viruses in a similar bacterium-free approach for virus construction (24). The virus construction by CPEC could achieve similar recovery of the heterogeneity of a West Nile virus stock (24). Together, these results show the applicability of a seamless technique in virus recovery. The ease of constructing viruses from multiple DNA fragments provided by this method can facilitate genetic mapping and screening of mutations (Fig. 3). The capability of Gibson assembly, as demonstrated in its application to the construction of mitochondrial genomes from oligonucleotides (25), would accommodate even finer fragmentation of viral genomes than ours (Fig. 1A) and make the approach applicable to RNA viruses with larger genomes, such as coronaviruses. While the data presented here show high accuracy of reconstructing viruses by this method, the possibility of unintended mutations caused by random PCR errors cannot be ruled out when viruses are constructed from PCR products. Confirming the identified mutations with additional methods and assays, such as full-genome sequencing of the derived viruses, will prevent such errors. The virus construction approach described here addresses the throughput limitations of infectious clones. It shortens the construction time from months to days. It can recover the genetic diversity in the virus stock ( Fig. 1B and D). It significantly reduces the amount of sequence verification required in a large-scale genetic analysis through the ease of recombining viruses (Fig. 3). It enables direct genetic manipulation of a virus mutant pool (Fig. 4). By eliminating these limitations, this approach supports largescale forward genetics (Fig. 3) and expands our current ability to mutate viruses for reverse genetics and directed-evolution experiments (Fig. 4). Its capability is based on direct exploitation of PCR products amplified from viral cDNA as building blocks to construct infectious viral DNA in a single, highly efficient reaction of Gibson assembly. The throughput capability gained will support functional characterization of virus variants, which are being discovered at a breakneck pace by next-generation sequencing. 
Since it relies on a DNA-based expression format applicable to many positive-sense RNA viruses, its utility will be relevant to these viruses (13,26,27). The technique should accelerate vaccine and antiviral-drug development to fight against many pathogenic positive-sense RNA viruses.
4,618.6
2013-09-18T00:00:00.000
[ "Biology" ]
Adaptive neighbor-based topology control protocol for wireless multi-hop networks Topology control protocols have been proposed to construct efficient network topologies with several design goals, e.g., network-wide connectivity, minimal energy cost, symmetry, lower nodal degree, and therefore higher spatial reuse or lower interferences. Neighbor-based topology control protocols are simple and assume that each node in the network is connected to its k least-distant neighbors. There have been several empirical and theoretical research efforts that recommend a network-wide optimal value of the local parameter k. However, since most of the design goals often run against each other the suggested lower and upper bounds on the values of k are not sufficient to provide a controllable trade-off among various design goals. In this article, an adaptive neighbor-based topology control protocol is presented where the neighboring nodes collaborate and provide feedback on the network connectivity to decide on their respective transmission ranges. Since every node adaptively adjusts its number of neighbors, the parameter k acts as a performance knob to choose a set of backbone nodes and to form a hierarchical topology structure consisting of symmetric links. Through extensive simulation-based study, it is shown that the value of k can be tuned to generate fully connected network topologies while offering an efficient trade-off among various design goals. Introduction Topology control [1,2] leads to a simpler network topology with several design goals such as network-wide connectivity, minimal energy cost, symmetry, lower nodal degree, and therefore higher spatial reuse or lower interferences. In neighbor-based [3,4] topology control protocols, each node connects to its k least-distant neighbors. The neighbor-based protocols are often characterized by their simplicity and the use of minimum amount of information needed by nodes to construct the network topology. However, finding an optimal value of k such that some or most of the design goals are achieved has been a challenging task. The network topology induced by setting a lower value of k is either not fully connected and/or consists of asymmetric links. On the other hand, the recommended upper bound on the value of k causes significant redundancy in nodal degree which is a measure of the spatial reuse and interference. There have been several research efforts both empirically and theoretically to find an optimal, network-wide value of the local parameter k. However, since most of the design goals such as connectivity, energy cost, symmetry, and nodal degree often run against each other the suggested lower and upper bound on the values of k are not sufficient to provide a controllable trade-off among various design goals. While the focus of most of the previous investigations rested on finding the number of neighbors that are necessary for connectivity [5,6] and on how different network models and design goals influence the value of k [3,4], they miss the potential of collaboration among neighboring nodes where nodes provide feedback on the network connectivity to decide on their respective transmission ranges. Nodes start with connecting to the least-distant neighbors, check for local network connectivity information and adjust the number of neighbors to achieve an efficient trade-off among various design goals. 
This study describes an adaptive neighbor-based topology control (ANTC) protocol that constructs fully connected network topologies while it addresses number of design goals efficiently. The basic idea is to select a subset of nodes that serve as the network backbone and to form a hierarchical topology structure consisting of symmetric links. The process of backbone node selection is carried out in a distributed manner without requiring global network connectivity knowledge. The proposed ANTC protocol runs in three phases. First each node discovers its one-hop neighbors. Next the sink or base station node initiates the topology construction (TC) phase by broadcasting a control message. Among other attributes, the control message contains information on local neighborhood connectivity. On receiving the control message, each node checks for network connectivity requirements and adjusts its number of neighbor accordingly. The control message is then rebroadcasted exactly once by means of controlled flooding. This process is realized by mean of node coloring algorithm where initially all nodes are WHITE in color. During the TC phase nodes change their color according to the feedback information on network connectivity. The backbone nodes are colored BLACK whereas the other nodes are either RED or BLUE. Both RED and BLUE nodes are linked symmetrically with the BLACK node, while BLUE nodes are also candidate BLACK nodes. In the last phase, topology maintenance is performed in order to avoid disconnected networks due to any control message loss or collision. The simulation results validate our claim that the proposed ANTC protocol shows a versatile performance for the given set of design goals. The following factors make our work extremely useful for wireless ad hoc and sensor networks. • A practical and distributed protocol that construct fully connected network topologies consisting of symmetric links. The local topology parameter k act as a performance knob that offers an efficient trade-off among several conflicting design goals. • A hierarchy of backbone nodes are selected using the communication overhead of at most 3n messages, where n is the total number of nodes in the networks. Lower message complexity makes the ANTC protocol scalable and suitable for energy and resource constraint sensor networks. • The hierarchy of backbone nodes implicitly setups the forwarding path towards the common sink or base station node. • Through extensive simulation-based study the quality of the generated topology is evaluated against several criteria such as network-wide connectivity, minimal energy cost, higher spatial reuse, or lower interference by means of lower nodal degree. The rest of this article is organized as follows. Section 2 gives a brief overview of the related work and expands on the problem statement. Section 3 presents an ANTC protocol in detail which is followed by the performance evaluation which is given in Section 4. Finally Section 5 concludes the article. Related work on topology control In this section, we will discuss a number of neighborbased topology control protocols found in the literature and comment on how they construct network topology with the desired set of deigns goals. In k-Neighbor [3,4] and XTC [7] protocols, each node creates an ordered neighbor list, which ranks all onehop neighboring nodes with respect to their distance, energy, or link quality. Both protocols are simple to implement, localized, and communication efficient. 
The k-Neighbor protocol simply chooses the value of k to be 6 and 9 to achieve weak and strong requirements on connectivity, respectively. However, the final topology is constructed by keeping only symmetric links with fewer than k least-distant neighbors. In XTC, each node locally traverses the neighbor list in non-decreasing order of rank. At some arbitrary node u, if a candidate node v can be reached through an intermediate neighbor w with better rank (i.e., link quality), then the link between nodes u and v is marked redundant and is therefore not included in the final topology. There is a class of neighbor-based topology control protocols that tries to keep the number of neighbors within certain minimum and maximum threshold values k min and k max . Example protocols include Local Information No Topology (LINT)/Local Information Link-state Topology (LILT) [8], MobileGRID [9], Novel Topology Control Protocol (NTC) [10], and Cooperative Nearest Neighbor (CNN) [11]. Each protocol consists of two phases which adaptively adjust the transmission ranges by including or excluding links from the final topology. Both phases are responsible for maintaining the nodal degree within certain configurable bounds. The algorithms differ mainly in what actually triggers the range adjustment at each node. For example, in LILT, the absence/presence of routing updates determines whether the node is disconnected or not. LINT, on the other hand, periodically checks the number of active neighbors and tries to keep the neighbor count around a desired value. Like LINT, the MobileGRID protocol utilizes the nodal degree to maintain a specific contention index. The nodal degree value is preconfigured and depends on the node density, network area, and transmission range. NTC and CNN are especially designed to mitigate the network partitioning problem which usually arises when the wireless nodes operate at lower values of k. For a uniform node distribution, the topology constructed by the local connections can perform well. However, with a clustered node distribution the NTC and CNN algorithms may result in a partitioned network for the given lower bound of k. The quality of a generated topology can be measured with respect to several design goals such as connectivity, energy cost, symmetry, spatial reuse, and interference. More recently, Banner and Orda [12] proposed a topology control protocol that caters for a wide array of design goals. The authors argue that previous work concentrated more on finding nodes' transmission ranges with the rarely occurring worst-case scenarios in mind. Most of the previous approaches to topology control would yield optimal performance for only a small subset of the design goals, which are often conflicting in nature. Through both analytical and simulation studies, they demonstrate that their protocol is capable of achieving average performance for a variety of topology-related parameters simultaneously. Motivation In neighbor-based topology control protocols, the main focus is on finding a network-wide optimal value of the local parameter k that could result in an efficient topology, i.e., fully connected and energy minimal. To cater to different network models and design goals, the value of k is often too large to achieve several conflicting design goals simultaneously. Consider the 10-node network given in Figure 1, where dashed lines represent asymmetric links and solid lines represent symmetric links.
Here, node 1 is the sink node or base station to collect the data generated by all the other nodes. As illustrated in Figure 1d, it is not until k set to 4, that a fully connected network topology consisting of symmetric links is obtained. However, to account for worstcase scenarios the network operates at even higher values of k, i.e., 9 or (n -1). Clearly, the larger values of k result in significantly large number of redundant links and thus higher interferences among the neighboring nodes. Moreover, larger transmission ranges result in higher energy cost while having longer forward progress [10] and thus lower path length towards the final destination. Conversely, the Minimum Spanning Tree (MST) [13] generates sparser network topology (given in Figure 1f) which results in fewer links and lower energy consumption. However, the path length among source-destination pairs increases significantly. For the recommended value of k, all the nodes are forced to have larger neighborhood because some of the nodes might have too few neighbors or asymmetric links. However, only a small subset of nodes is required to extend their number of neighbors provided that each node is aware of the local network connectivity for the selected value of k. For example in Figure 1c, each node establishes links to their three least-distant neighbors (i.e., k = 3). However, there are only two pairs of nodes (8,9) and (4,6) that are linked asymmetrically. Considering distance as the metric to decide on which pair, nodes 4 and 6 may collaborate to increase their transmission range enough to have an efficient topology. Likewise in Figure 1b with k = 2 and in Figure 1a with k = 1 there are at least three and five pairs of nodes, respectively. Interestingly, as the k increases, so does the number of symmetric links and as a result the number of node pairs decreases. It is also noteworthy that how the selection of node pairs could possibly effect the forward progress and nodal degree at each node. For example in Figure 1b for node 9, selecting node 8 instead of node 2, save half of the hop distance towards the sink. Therefore, the value of k certainly acts as a performance knob which can be tuned to achieve an efficient trade-off among various design goals concurrently. Moreover, instead of following a network-wide value of the local parameter k, all the nodes locally adjust their number of neighbors accordingly. Adaptive neighbor-based topology control (ANTC) Consider a wireless network comprising of n -1 ordinary nodes and a single sink or base station node (thus total n nodes). All the nodes are stationary and deployed randomly throughout the two-dimensional space. It is assumed that all the nodes are equipped with homogeneous, single RF radio with an omni-directional antenna and are capable of adjusting their RF output power at several discrete levels [14,15]. Furthermore, the underlying topology is fully connected at maximum transmission range Tx max . The main objective of the ANTC protocol is to provide a controllable and efficient tradeoff among various design goals such as network-wide connectivity, minimal energy cost, symmetry, and lower nodal degree for any given value of the local parameter k, with 0 <k ≤ n -1. The proposed ANTC protocol consists of three phases. Neighbor discovery phase Initially, each node i announces its presence using MAC layer beaconing at Tx max . 
A simple yet practically reliable one-hop broadcast mechanism [16] can be used to reduce the chances of a node remain undiscovered by its neighbors. The idea is to make sure that all the nodes receive the discovery message and construct a complete neighbor list. Upon successful receiving the ANNOUNCE message from node i, the neighboring node j estimates its distance to node i. An all neighbor list N * j is maintained which consists of two attributes, a unique neighbor identity and their distance. The neighbor list is then stored in non-decreasing order of the distance. Algorithm 1 illustrates the above-mentioned procedure. The higher cost of operating GPS-enabled nodes makes this option of acquiring the distance information nearly infeasible. Instead, techniques that utilize different physical measurements, e.g., Received Signal Strength Intensity (RSSI) [17] and Time of Arrival (ToA) [18] can be employed to estimate the distance between two nodes. The use of RSSI and ToA-based mechanisms often raised criticism by the researchers due to the distance measurement error which is usually caused by attenuation present in the atmosphere or non-line of sight (i.e., the obstacles). However, in the performance evaluation section it is shown that the ANTC protocol achieves resilience to error in distance estimation at much lower cost than the previous work. Although this article utilizes the estimated distance between two nodes, other metrics such as link quality, residual energy can also be combined. TC phase ANTC is essentially a node coloring algorithm, which creates a single topology structure consisting of backbone nodes. The number of backbone nodes and other topology-related properties are decided based on the input parameter k. During the course of the TC phase, each node can have one of the four colors. Nodes modify their colors in response to the reception (or overhearing) of network connectivity information within the TC messages. As the TC phase proceeds, the nodes change their colors according to the following definitions. For ease of reference, Table 1 provides a list of parameters and algorithmic notations that are used throughout the article. • WHITE: A node in its initial state which has not received and thus acted upon the information within the TC message. • BLACK: The backbone nodes are colored BLACK, which guarantees symmetric link towards the common sink or base station node. • RED: A RED colored node is associated k-symmetrically with a BLACK node. If a WHITE node j receives the TC message from node i, i.e. ( j ∈ N K i and i ∈ N k j ), it turns into RED color. In other words, nodes i j have symmetric links for the given value of k. • BLUE: A BLUE node is associated k-asymmetrically with a BLUE or BLACK node. If a WHITE node j receives the TC message from node i, i.e. ( j / ∈ N k i and i ∈ N k j ), it turns into BLUE color. In other words, node j has an asymmetric link with node i' for the given value of k. The sink or base station node initiates the TC phase by broadcasting the TC message at Tx max . The ANTC protocol exploits the broadcast nature of the wireless communication channel. Each node selects an appropriate set of neighbors by overhearing the ongoing communication among its neighboring nodes. The process of selecting next node to disseminate its TC message is bit a like peeling the onion layers inside-out. Nodes that are closer to the initiator are given higher priority as compare to the farther ones. 
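As a compact illustration of the colour rules just defined (RED for a k-symmetric link with the sender, BLUE for a k-asymmetric one), the sketch below classifies a receiver on reception of a TC message, given each node's set of k least-distant neighbours. It is a simplified abstraction under assumed data structures, not the authors' Algorithm 2.

```python
# Simplified colour-update rule on receiving a TC message from node i at node j.
# neighbors_k maps each node to the set of its k least-distant neighbours
# (an assumed data structure used only for illustration).

def on_tc_message(j: int, i: int, color: dict, neighbors_k: dict) -> None:
    if color[j] != "WHITE":
        return                                  # only WHITE nodes change colour here
    if i in neighbors_k[j] and j in neighbors_k[i]:
        color[j] = "RED"                        # k-symmetric link with the sender
    elif i in neighbors_k[j] and j not in neighbors_k[i]:
        color[j] = "BLUE"                       # k-asymmetric link with the sender
    # otherwise: j keeps its WHITE colour

# Tiny usage example with hypothetical neighbour sets
color = {1: "BLUE", 2: "WHITE", 3: "WHITE"}
neighbors_k = {1: {2}, 2: {1, 3}, 3: {2}}
on_tc_message(2, 1, color, neighbors_k)   # 1 and 2 are k-symmetric -> node 2 turns RED
on_tc_message(3, 1, color, neighbors_k)   # node 3 does not list node 1 -> stays WHITE
print(color)  # {1: 'BLUE', 2: 'RED', 3: 'WHITE'}
```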
The TC message is also required to maintain proper value of hop distance value towards the initiator. As the TC phase proceeds, the TC message disseminates in a controlled manner such that each node relays the TC message only once. Each node i maintains and updates the following attributes of the TC message. • i.Identity_: Unique identity of node i. • i.Backbone_: Identity of a neighboring node which guarantees symmetric link towards the common sink or base station node. • i.HopCount_: Total distance in terms of number of hops from sink or base station to the current node. Following are the main highlights of the ANTC algorithm. • Algorithm 2 illustrates the TC message attribute initialization at sink node i. The sink node turns BLUE, before it broadcast the TC message into its immediate one-hop neighborhood. • Lines 10-33 describe the steps taken by node j upon receiving the TC message from node i. There are three courses of actions j may take depending on its current status. ○ Form lines 10-18, node j verifies whether it has a k-symmetric or k-asymmetric link with node i. If i and j are linked k-symmetrically, the receiver WHITE node becomes RED. In case of k-asymmetric link, the receiver WHITE node turns BLUE otherwise the color remains unchanged. This part of the pseudo-code also performs two important steps, i.e., first a soft state is maintained for each sender node and secondly asymmetric links are eliminated during this process. ○ Lines 19-26 are followed if j has not yet sent its TC message. Before a node broadcasts its TC message further into the network, it has to decide on two aspects, i.e., (1) the backbone node and (2) the TC message forwarding sequence. Later in this section, more details will be provided. ○ Last, lines 27-32 are only executed, if j overhears the TC message from one of its neighboring node which has selected j as the backbone node. If the sender's information is not already in the N k j , it is then included and the transmission range is extended to reach the most distant neighbor in the k-Neighbor list. • The topology maintenance phase, ensures that TC phase remain resilient to TC message lose. To make a single connected topology structure consisting of symmetric links towards a common sink, each node must select a backbone node. The intuition behind selecting a BLUE node as the backbone is very simple. First, for the given value of k the BLUE nodes are farthest from the sender node, which are also linked k-asymmetrically with another potential backbone node. Second, the fact that BLUE node would operate at larger transmission ranges therefore they give better performance trade-off between energy costs and hop distances. The use of distance information further helps in choosing better candidate among several BLUE nodes. Finally, in scenarios (especially for higher values of k) where there exist fewer BLUE nodes, any least-distant neighbor which is minimal hops away from sink node is selected. In Algorithm 3, lines 1 through 11 describe the backbone node selection procedure. Lines 12-15 set the required TC message attributes before they are further broadcasted by the receiver node j. For example, the j. Backbone_ attribute holds the selected backbone' identity, the hop count attribute gets an increment, and finally if nodes current color is WHITE, it changes into BLUE. Lines 16-19 simply update the k-Neighbor list and set the transmission range that is required to reach the farthest neighbor in N k j . 
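The backbone selection step described above (prefer the least-distant BLUE neighbour; otherwise fall back to a least-distant neighbour that is minimal hops away from the sink) can be sketched as follows. This is a hedged abstraction under an assumed per-neighbour record layout, not the exact Algorithm 3 pseudo-code.

```python
# Choose a backbone node from the overheard neighbour information.
# Each entry: (neighbor_id, color, distance, hop_count) -- assumed record layout.

def select_backbone(neighbors: list[tuple[int, str, float, int]]) -> int:
    blue = [n for n in neighbors if n[1] == "BLUE"]
    if blue:
        # least-distant BLUE neighbour
        return min(blue, key=lambda n: n[2])[0]
    # fallback: least-distant neighbour among those with minimal hop count to the sink
    min_hops = min(n[3] for n in neighbors)
    candidates = [n for n in neighbors if n[3] == min_hops]
    return min(candidates, key=lambda n: n[2])[0]

# Hypothetical neighbourhood of one node
nbrs = [(4, "RED", 12.0, 2), (7, "BLUE", 20.0, 2), (9, "BLUE", 15.5, 3)]
print(select_backbone(nbrs))   # -> 9, the closer of the two BLUE neighbours
```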
In ANTC, each node decides backbone based on the neighbor's color, estimated distance, and hop distance towards the sink node. While the TC phase is in progress the exact path length from the sink to the current node cannot be known in advance. Therefore, the node color and the hop distance information are included within the TC message. The value in the hop count attribute is incremented each time a node forwards the TC message further into the network. To calculate correct accumulated value of the hop count, the forwarding sequence of TC message requires special consideration. A distance-based message dissemination approach is utilized which introduces forwarding delay for each node based on their distance from the previous node. The delay heuristic is implemented with the help of a simple timer (see Lines 19-26, Algorithm 2) and it works in two ways. First, the TC message is flooded in a controlled manner, i.e., it ensures that each node disseminates the TC message only once. Up on receiving the first TC message each node set the timer value based on the parameter T d,fwd , given at line 20. The timer value is set proportional to the distance between the receiving node and the previous node. Once set, the timer value cannot be re-initiated by overhearing of any subsequent TC message. Second, the coloring of the nodes and the increment in hop count value require a proper forwarding sequence, i.e., a node that is at nth hops distance from the previous node forwards the TC message before a node n + 1th hops away. Thus, the cumulative effect is to give higher priority to the nodes that are closer to the pervious node. In practice, the actual transmission time of a TC message is determined by the combination of the timer value and the time required by functions performed by a particular medium access control protocols. For instance, in contention-based MAC protocols, random back-off is performed for each message to make sure its successful reception at the receiver. Consider an example where the forwarding interval (T interval ) is 1 time unit and the nodes' Tx max is set to 1.5 units. Let, nodes i and j are at the same distance from the previous node S (1 distance unit), they decide to forward the TC message at time 0.67 (= 1 × 1/1.5). However, in order to break the tie between nodes i and j, the actual sending can be differentiated by the CSMA/CA mechanism at the MAC layer. More importantly, nodes are not required any form of strict time synchronization for distance-based forwarding to work properly. Distancebased forwarding can lead to slightly longer time. However, since the TC phase last for very small fraction of overall network operational time, therefore its effect can be consider negligible on the network performance. To exemplify the proposed heuristics, a simple topology given in Figure 2 is considered. In current example, k is set to 2 and the neighbor list is shown in parenthesis besides each node i. The sink node S initiates the TC phase by first turning its color to BLUE and then broadcasting a TC message at Tx max (represented by the dotted lines). The contents of the TC messages are given within each figure. On reception, all the neighboring nodes calculate the timer value to send their respective TC messages. Based on the estimated distance from the sender node, each node will calculate its forwarding sequence. In this example, the forwarding sequence is represented by the alphabetical order given as node IDs. 
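The distance-based forwarding delay described above reduces to a one-line calculation. The sketch below recomputes the worked example from the text (T_interval = 1 time unit, Tx_max = 1.5 units, distance 1 unit giving a delay of about 0.67), with the scaling rule delay = T_interval × d / Tx_max inferred from that example.

```python
# Distance-proportional forwarding delay for the TC message
# (rule inferred from the worked example in the text: closer nodes send earlier).

def forwarding_delay(distance: float, t_interval: float = 1.0, tx_max: float = 1.5) -> float:
    return t_interval * distance / tx_max

print(round(forwarding_delay(1.0), 2))   # 0.67, as in the example
print(round(forwarding_delay(0.5), 2))   # 0.33: a closer node forwards sooner
```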
Figure 2a shows the coloring of the nodes after the TC message is received by the neighbors from the sink node S. Since nodes A and B have 2-symmetric links with S, they turn RED. Similarly, nodes C, D, and E will turn BLUE because of their 2-asymmetric links. Whereas node F's color remains WHITE, because node F is neither linked k-symmetrically nor k-asymmetrically with node S. In order to construct a fully connected backbone, each node has to be linked symmetrically with a special backbone node so that it guarantees a symmetric link towards the sink. For RED nodes, the selection of backbone node is simple, since they already have symmetric link with a BLUE node. Figure 2b illustrates the process where node A selects S and send the TC message. The decision regarding backbone selection is conveyed to the other nodes while they overhear the ongoing communication. Once node S finds out that node A has selected it as the backbone node it changes its color to BLACK. In Figure 2c, node B repeats the same procedure. For the given value of k, a node might have no k-symmetric neighbors thus resulting in either asymmetric links or network partitioning. Nodes C, D, and E represent the first case where they are associated k-asymmetrically with node S. These nodes select node S as the backbone node (the least-distant BLUE node) and convey this decision in their respective TC messages. On overhearing the TC messages, node S extends its neighbor list and sets its transmission range to the level that is required to reach the farthest neighbor in the neighbor list. The selected backbone node turns BLACK (if not already BLACK color). In Figure 2d, nodes C and D perform the above procedure with node S, followed by node E in Figure 2e. The nodes retain links with their ksymmetric neighbors. Node F represents the second case where the network is partitioned between nodes F and Figure 2f, the WHITE node F selects node E as the backbone node. Node F is associated k-asymmetrically with two other neighbors X and Y (not shown in the figure). Before the TC message is sent further into the network, node F turns BLUE and increments the hop count value by 1. In scenarios where there exists no BLUE node, any least-distant node which is minimum hops away from the sink is selected. The hop count value is incremented as the TC message is disseminated across the network. The forwarding sequence of the TC message ensures that path length value is incremented properly. E. In The proposed ANTC protocol results in a single treelike structure, where RED and BLUE nodes are associated symmetrically with the set of BLACK nodes. The BLACK nodes are linked directly with relatively larger transmission ranges, thus forming a connected backbone. In order to control the number of backbone nodes and the associated links, the value of k is utilized as a performance knob. The performance evaluation section shows that the value of k provides an efficient trade-off among various design goals. The other advantage of topology constructing phase is that it also constructs an implicit data forwarding hierarchy consisting of backbone nodes towards the sink or base station node. Topology maintenance phase This section describes a scenario where the TC message is lost either due to collision or unsuccessfully reception at the receiver. The lost of TC message is significant, because it contains information regarding the backbone node that a potential BLACK node could not receive. 
The given scenario often leads to asymmetric links, where the selected backbone node remains unable to extend its transmission range up to the requesting node. Each node i checks whether it has received/overheard the TC message from all its one-hop neighbors. If not, it broadcasts the Topology Repair Request (TRREQ) message at Tx max. On reception, only those nodes j that have a k-asymmetric link with node i respond with the Topology Repair Reply (TRREP) message. Finally, on receiving the TRREP message, node i extends its transmission range to include node j. Algorithm 4 describes the above procedure. Simulation environment To evaluate ANTC and other neighbor-based topology control protocols, extensive simulations are performed using the NS-2 [19] simulator. In the simulation study, the network size n is varied from 100 to 250 nodes in increments of 50, plus a 500-node case. These network sizes represent sparse, moderate, and densely populated networks. All the nodes are placed randomly over a 1000 × 1000 m2 area with the sink or base station node positioned at the center. The maximum transmission range Tx max is set to 250 m. In our simulations, T interval is set to 1 s and Tx max is set to an arbitrary constant large enough to avoid interruption of the ongoing TC phase. For a comparative study, the following neighbor-based topology control protocols are considered; all the resultant topology instances are fully connected and consist of symmetric links. • MST: Most topology control protocols idealize the topologies generated by MST algorithms. However, the requirement of global information at some arbitrary common node makes the MST algorithm less feasible to implement, especially for large-scale sensor networks. • CNN with k set to 5: In CNN, all the nodes are connected to their k = k min = k max least-distant neighbors. The CNN protocol is evaluated with k = 5 (given as k-CNN). • k-Neighbor with k set to 9: The results obtained for the k-Neighbor protocol suggest that, to achieve the strong requirement on connectivity, the value of k must be set to 9. The final topology is referred to as k-Neighbor. The following three metrics are used for performance evaluation. The final values are obtained by averaging 100 different simulation runs over 100 different randomly generated network topologies. All the measurements are averaged over the network sizes. • Total energy cost is defined as Σ_i (Tx_i^k)^α, where Tx_i^k represents the transmission range assigned to node i to reach the kth neighbor in its neighbor list at the end of the individual topology control protocol, and α is the path loss constant, typically with a value between 2 and 6 [20]. • Nodal degree is defined as the number of direct neighbors. We consider both logical and physical nodal degrees, where the latter is considered a better measure of expected contention. • Path length is defined as the number of hops towards the common sink or base station node. A shortest-path algorithm is executed over the topology-controlled network to measure the path length between a node and the sink. Figure 3 illustrates the topology snapshots obtained by several topology control protocols. In k-Neighbor, the sample topologies are generated for k = 6 and 9; in CNN the value of k is set to 5; and finally the ANTC protocol is executed with k = 1, 2, 3, and 4. The network consists of 200 nodes. The topology generated by the MST algorithm is the sparsest. For the neighbor-based protocols, the parameter value k determines the topology resolution and other topological properties.
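As a small worked illustration of the evaluation metrics defined above, the sketch below computes the total energy cost Σ_i (Tx_i^k)^α and the average logical nodal degree from per-node transmission ranges. The node positions, ranges, and adjacency rule are hypothetical inputs chosen only to show the arithmetic, not values from the simulations.

```python
# Total energy cost and average logical nodal degree for a topology-controlled network.
# tx_range[i] is the range assigned to node i at the end of topology control (illustrative values).
import math

positions = {0: (0.0, 0.0), 1: (120.0, 0.0), 2: (0.0, 150.0), 3: (200.0, 200.0)}
tx_range = {0: 150.0, 1: 130.0, 2: 160.0, 3: 110.0}
alpha = 2  # path loss constant, typically between 2 and 6

energy_cost = sum(r ** alpha for r in tx_range.values())

def dist(a, b):
    return math.dist(positions[a], positions[b])

# Logical degree: number of neighbours a node reaches with its assigned range
degree = {i: sum(1 for j in positions if j != i and dist(i, j) <= tx_range[i])
          for i in positions}
avg_degree = sum(degree.values()) / len(degree)

print(f"energy cost = {energy_cost:.0f}, average logical degree = {avg_degree:.2f}")
```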
In ANTC as the value of k increases, a denser communication graph starts to appear. Intuitively, k acts as a performance knob which can be used to tune various design goals such as energy cost, nodal degree, and path length. For any given value of local parameter k, the proposed ANTC protocol constructs fully connected network topologies consisting of symmetric links. Figure 4a reports on the energy cost for all the protocols. Since the energy cost measurements are averaged over network sizes, the per node energy cost decreases as the network size increases for any given value of k. Performance comparison with various network size n MST generated topologies are optimal in terms of energy cost. For the lowest value of k = 1, ANTC performed comparable to that of MST. The cost of maintaining a fully connected topology is highest for the 9-Neighbor protocol. Figure 4b shows the energy cost normalized with respect to MST. Overall the energy cost increases with the increase in the value of k. For the intermediate value of k the ANTC performance lies right between the two extremes of MST and 9-Neighbor. Compared to the 9-Neighbor protocol when n = 100, ANTC with k = 1, 2, 3, and 4 provides an improvement of 102, 73, 43, and 30%, respectively. For all the network sizes, ANTC with k = 4 performed reasonably better than the 5-CNN. Figure 4c plots the trends for all the other protocols with path loss constant value set to 2, i.e., α = 2. Figure 5a,b shows the average logical and physical nodal degrees for all the protocols. For any given network size, the MST and ANTC with k = 1 performed identically. The logical and physical nodal degrees increase consistently for all the value of k. Generally, as the k and n pair grows the network topology becomes increasingly denser. Since logical nodal degree is considered as the lower bound to the physical degree, the physical degree is always higher than the logical degree. The 9-Neighbor protocol yields almost three times higher nodal degree as compared to ANTC protocol with k = 1. Whereas the logical and physical nodal degrees of 5-CNN protocol are nearly twice of the ANTC protocol with k = 1. In ANTC, only the nodes with asymmetric links tend to extend their transmission ranges to make a symmetric links with BLUE node, whereas the RED nodes are already associated with the least-distant backbone nodes resulting in lower average logical and physical degree. Figure 6 shows average path length for the MST, k-CNN, k-Neighbor, and the ANTC protocols. MST generated topology results in highest number of hops. Despite that the ANTC protocol with k = 1 is comparable with MST in terms of energy cost and nodal degree, almost half of the number of hops are required to reach the common destination. In ANTC, generally, the path length decreases with the increase in k because of the corresponding increase in transmission ranges results in more forward progress towards the sink or base station. For the lower values of k, the difference in path length is considerable. For example, for the first four values of k the percent decrement in hop counts is within the range of 6 to 20%. For most of the network sizes, the 9-Neighbor protocol performed better, however at the expense of higher energy cost and nodal degree. The performance of 5-CNN and the ANTC protocol with k = 4 is comparable. The main reason being that in ANTC, a node only selects a backbone node that is closer in terms of hop count and distance towards the common sink. 
These results demonstrate that the proposed ANTC protocol achieves an efficient trade-off among various design goals. The value of k acts as a performance knob which can be tuned to construct network topologies with a variety of different topological properties. For example, ANTC performed comparably to the optimal MST topologies in terms of energy efficiency and nodal degree while maintaining shorter path lengths. Impact of the k and n pair with respect to node color To further study the energy cost distribution, a closer look at the nodes with respect to their color is presented. Figure 7a,c,e shows that the BLACK nodes generally operate at a higher transmission range compared to the RED and BLUE nodes for the given k and n pair. This is due to the fact that the backbone nodes have to extend their transmission ranges beyond the current value of k to construct a fully connected network topology. Despite this, the BLACK nodes operate at as high as 70% of Tx max for the sparse node deployment (Figure 7a). The energy cost of the backbone nodes decreases to 45 and 35% of Tx max as the network size grows from moderate (Figure 7c) to highly dense (Figure 7e), respectively. The energy cost of RED nodes varies between 10 and 50% of Tx max depending on the k and n pair. An increase in k results in a corresponding increase in transmission range, so more distant nodes become k-symmetric to the sender node. Most BLUE nodes, in contrast, have an average transmission range of 30% of Tx max. Since BLUE nodes are k-asymmetric nodes, they have to extend their range to create symmetric links. Since the BLACK nodes dominate the energy cost, it is desirable to minimize the number of BLACK nodes in the network. Interestingly, as given in Figure 7b,d,f, the number of BLACK nodes decreases as the value of k increases for all network sizes, thus allowing fewer nodes to extend their transmission ranges. The number of RED nodes, on the other hand, increases. This is mainly because at higher values of k the nodes tend to operate at higher transmission ranges, leading more neighboring nodes to turn RED, i.e., to become k-symmetric nodes. Finally, the number of BLUE nodes is nearly 10% of the network size n and gradually decreases as the value of k increases. Impact of distance measurement error (DME) It is typically assumed that distance estimates are error free, which is far from realistic. In particular, the assumption that the distance can be estimated by measuring certain physical phenomena (i.e., RSSI and ToA) requires extra consideration. In the k-Neighbor TC protocol, the recommended value of k in the presence of distance measurement error is even higher than the value needed to achieve the strong connectivity requirement for a variety of network sizes. For network sizes n in [100, 500], the preferred value of k is found to be 9 and 10 for ToA and RSSI errors, respectively [3]. Resilience to the error in distance measurement is achieved by simply connecting to even farther neighbors in the neighbor list. To show the impact of DME on the proposed ANTC protocol's performance, the error model given in [3,21] is used with two different settings of the parameter values. The distance measurement between two nodes i and j is given by d̂(i,j) = d(i,j) + RSSI_e, where d̂(i,j) and d(i,j) are defined as the measured and correct distances, respectively.
The factor RSSI_e is the ranging error (also called DME or RSSI error); it is given by the RSSI error equation of [21] (Equation 5), in which X_σ is a zero-mean Gaussian random variable with standard deviation σ and α is the path loss constant. For CASE-I, the parameter values are σ = 1 and α = 4, and for CASE-II we set σ = 0.84 and α = 2 in the RSSI error equation. CASE-I and CASE-II yield almost 70% of the distance estimation errors within 6 and 10% of the correct distances, respectively. Figure 8 shows one instance of the empirical distribution of the distance measurement (RSSI) error for (a) CASE-I and (b) CASE-II. A transmission range shorter than the actual one would certainly result in asymmetric links or a partitioned network. Like the k-Neighbor protocol, ANTC also lets all the nodes extend their transmission ranges to a farther neighbor in the neighbor list. However, unlike the k-Neighbor protocol, where all nodes follow a network-wide value of the local parameter k, in ANTC nodes operate with different numbers of least-distant neighbors according to their color for the given value of k. Each node i maintains an ordered list N*_i of all neighbors with respect to distance. The farthest node in the k-Neighbor list N^k_i is located at some kth entry of the faulty list, which may not be the actual kth neighbor in the N*_i list; the faulty list holds erroneous distance measurements obtained by applying the above-mentioned error model during the post-processing phase. To accommodate error in distance estimation, each node adjusts its transmission range to reach the k̂th = (k + η)th neighbor in that list, where η is a small positive integer constant set according to the desired application requirements. The value of η is increased until the specified requirement on connectivity is achieved, i.e., more than 95% of the nodes are connected in a single structure consisting of symmetric links. As suggested in [3], since the neighbor-based protocols work on the notion of "nearest" neighbors and errors caused by ToA are positive, we have not included results for ToA errors. The simulation results given in Table 2 are obtained for a network size of 200 nodes with Tx max set to 250 m. For both CASE-I and CASE-II, the preferred value of k̂ depends on the parameter k. For lower values of k, η is higher because the nodes tend to operate at lower transmission ranges and thus have fewer links; consequently a discrepancy in distance estimates more often results in a partitioned network topology. In ANTC, on average an increment of 2 (i.e., k̂ = 5) and 3 (i.e., k̂ = 6) is recommended for CASE-I and CASE-II, respectively. Despite this, the final value of k̂ is considerably lower than the preferred value given for the k-Neighbor protocol (i.e., k̂ = 10). It is noteworthy that as the value of k increases, the network finds sufficient redundant links to maintain the prescribed requirements on connectivity. In line with the results obtained for the k-Neighbor protocol, our protocol also requires no further stretch in transmission range for higher values of k. Since the solution for distance measurement error takes into account the farthest neighbor in the list, the relationship between backbone nodes and their associated nodes remains intact. Once again, the proposed ANTC protocol accommodates distance measurement error at a comparatively much smaller cost. Conclusion Topology control protocols have been utilized in wireless multi-hop networks to achieve a variety of different design goals.
Conclusion Topology control protocols have been utilized in wireless multi-hop networks to achieve a variety of design goals. In this article, the ANTC protocol is presented, aimed at constructing efficient network topologies. The proposed ANTC protocol exploits collaboration among neighboring nodes, where nodes provide feedback on network connectivity information to decide on their respective transmission ranges. Based on the local connectivity information, each node selects a backbone node, which guarantees a hierarchical topology structure consisting of symmetric links. The process of backbone node selection is carried out in a distributed manner without requiring global network connectivity knowledge. To evaluate the performance of the ANTC protocol against other protocols, we performed an extensive simulation-based study. The results demonstrate that ANTC achieves an efficient trade-off among various design goals. The value of k acts as a performance knob that can be tuned to construct network topologies with up to 100% and threefold improvements in terms of energy cost and average nodal degree, respectively, while maintaining shorter path lengths.
9,865.8
2012-03-09T00:00:00.000
[ "Computer Science", "Engineering" ]
Calibration Test of PET Scanners in a Multi-Centre Clinical Trial on Breast Cancer Therapy Monitoring Using 18F-FLT A multi-centre trial using PET requires the analysis of images acquired on different systems. We designed a multi-centre trial to estimate the value of 18F-FLT-PET to predict response to neoadjuvant chemotherapy in patients with newly diagnosed breast cancer. A calibration check of each PET-CT and of its peripheral devices was performed to evaluate the reliability of the results. Material and Methods 11 centres were investigated. Dose calibrators were assessed by repeated measurements of a 68Ge certified source. The differences between the clocks associated with the dose calibrators and those inherent to the PET systems were registered. The calibration of PET-CT was assessed with a homogeneous cylindrical phantom by comparing the activities per unit of volume calculated from the dose calibrator measurements with those measured on 15 Regions of Interest (ROIs) drawn on 15 consecutive slices of reconstructed filtered back-projection (FBP) images. Both the repeatability of the activity concentration based upon the 15 ROIs (ANOVA test) and its accuracy were evaluated. Results There was no significant difference for dose calibrator measurements (median of difference −0.04%; min = −4.65%; max = +5.63%). Mismatches between the clocks were less than 2 min in all sites and thus did not require any correction, given the half-life of 18F. For all the PET systems, ANOVA revealed no significant difference between the activity concentrations estimated from the 15 ROIs (median of difference −0.69%; min = −9.97%; max = +9.60%). Conclusion No major difference between the 11 centres with respect to calibration and cross-calibration was observed. The reliability of our 18F-FLT multi-centre clinical trial was therefore confirmed from the physical point of view. This type of procedure may be useful for any clinical trial involving different PET systems. Introduction Evaluation of a reliable quantitative or semi-quantitative index having predictive value is an important issue in clinical PET studies, notably for monitoring cancer therapy. In this research area, a national clinical trial was recently promoted by UNI-CANCER to estimate the value of 18F-FLT-PET for predicting response to neoadjuvant chemotherapy in patients with newly diagnosed breast cancer (ClinicalTrials Identifier: NCT00534274). In such a multi-centre trial, nuclear medicine devices (from dose calibrators to PET systems) come from different manufacturers, have different technical specifications and are used differently according to local practices. These differences may affect PET results, leading to a heterogeneous panel of PET images of different quality and, moreover, impairing the computation of parametric values, especially the Standardized Uptake Value (SUV). Although SUV is the most readily available and thus the most commonly used semi-quantitative index in clinical practice and in clinical trials, its accuracy has been widely discussed [1]. Sources of SUV variability are related to biological factors (body size measurement, blood glucose level) and to technological factors (uptake time, reconstruction parameters, …) [2][3][4]. In this last category, the calibration error between scanner and dose calibrator is of major importance, as well as the SUV definition [5,6]. As a consequence, SUV estimation could deviate by up to 50% in some cases [7].
In the procedure of PET scanner quality control, calibration is the step that establishes the relationship between the event rate detected in each pixel and the true activity concentration of the corresponding volume element in the phantom. Usually, calibration is achieved using a phantom provided by the PET system manufacturer according to its own protocol, and has to be repeated regularly to assess performance constancy. A phantom of well-known volume is filled homogeneously with a known activity. This activity is determined by measurement in the local dose calibrator. This type of calibration procedure is highly dependent on local dose calibrator accuracy as well as on manufacturer recommendations (phantom volume, activity to be used, acquisition time, and the accuracy of the corrections to be applied, e.g. attenuation, scatter, randoms, count loss, normalization). Thus, the equivalence of such calibrations from different systems and sites has to be verified. The first objective of our study was to test a procedure assessing the calibration of PET systems, including all devices of the acquisition chain, which is easily applicable to scanners independently of manufacturer, system and site, thus allowing a direct comparison of different systems. The second objective was to apply this procedure in all 11 sites enrolled in our multi-centre trial. The final objective was to ensure that all PET systems were calibrated to a common standard within acceptable limits. Materials and Methods Two physicists were in charge of all examinations: the local physicist of each of the eleven nuclear medicine departments involved in this study, and the physicist of this national multicentre clinical trial, who participated in all tests. Data acquisition Dose calibrator. One dose calibrator from each site was assessed, giving 11 datasets in total. This step was performed first using a solid certified standard source (QSA Global France, Courtaboeuf) of 68Ge (50 MBq at the beginning of the study, 01/10/2007). The evaluation consisted of a reproducibility test (placing the standard source 10 times consecutively in the dose calibrator) and an accuracy test. Clock accuracy. The difference between the clock associated with the dose calibrator and the clock inherent to the scanner was evaluated. PET-CT systems. 11 PET-CT scanners (i.e. one per centre) were investigated: two Discovery LS and three Discovery ST (General Electric HealthCare), one Biograph (Siemens) and five Gemini (Philips), one of which used Time-of-Flight technology. Both Discovery LS systems allowed acquisitions in 2D mode only, while of the three Discovery ST systems two were used in both 2D and 3D mode and one in 3D mode only. The Biograph and the five Gemini systems allowed acquisitions in 3D mode only. This resulted in 13 acquisitions: four in 2D mode and nine in 3D mode. All the acquisitions were performed with the same phantom, i.e. a cylinder 20 cm in diameter and 20 cm in length, homogeneously filled with activity. The active volume was 5550 mL. The phantom was filled with an 18F-FDG activity depending on the acquisition mode used: 300 MBq for 2D mode or 100 MBq for 3D mode. When the PET scanner was used in both modes, the phantom was prepared for a 2D acquisition, and the 3D acquisition was then performed 2 hours after the end of the 2D acquisition in order to reach the activity level required for 3D. Data were acquired for one hour to keep the statistical noise as low as possible.
In order to assess the calibration of the scanners on a comparable basis, it was recommended, as far as possible, to reconstruct images with the filtered back-projection algorithm (FBP) using a ramp filter (no apodisation) with Nyquist frequency cutoff, zoom 2, a 128×128 matrix, and all corrections applied in clinical routine (detector normalization, count loss, CT-based attenuation, randoms and scatter). When the filtered back-projection algorithm was not available in clinical mode (Gemini systems), the routine clinical algorithm was used. Analysis criteria Data from all sites were analysed by the same person (i.e. the physicist of the national trial) according to a standardised procedure, which consisted of checking each step of the whole acquisition process. Dose calibrator. The first step of the analysis consisted of assessing the reproducibility of the measurements performed on each dose calibrator. Thus, a repeated-measures ANOVA (PRISM 4.0b, 2004, GraphPad Software, USA) was performed on each set of the 10 measurements performed on the calibrator of each site. Assuming good reproducibility of the measurements made on each dose calibrator, no significant difference was expected. If this hypothesis was confirmed, the mean value of each set of data from each calibrator (called A calibrator) was calculated. Then, the accuracy of each dose calibrator was evaluated by calculating the relative difference RD calibrator (%) between A calibrator and the calibrated 68Ge-source activity A Ge68, decay-corrected using the time difference t between the 68Ge source calibration date and the date of the dose calibrator test and the half-life T of 68Ge (270.95 days). As reported by Geworski et al. [7], a modulus of RD calibrator less than or equal to 10% was considered the accuracy normally acceptable for this class of instrument. If the repeated-measures ANOVA resulted in a significant difference between the 10 measurements of the same calibrator, or if the accuracy (RD calibrator) was more than 10%, the dose calibrator had to be checked and recalibrated. In the second step of the analysis, the difference between the mean values A calibrator of the 11 dose calibrators and the 68Ge activities was assessed by a Wilcoxon matched-pairs test (PRISM 4.0b, 2004, GraphPad Software, USA). As the dose calibrators were measured on different dates and, hence, with different 68Ge source activities, the Wilcoxon matched-pairs test was performed on the mean values A calibrator normalized by the corresponding 68Ge source activity A Ge68. Clock accuracy. A difference of less than 2 minutes between calibrator and scanner clocks was considered acceptable, as it induces an error of less than 1% in activity determination for the half-life of 18F. In case of larger differences, the syringe activity measurement had to be corrected for decay.
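The snippet below sketches this accuracy check, assuming that RD calibrator is the percentage relative difference between the mean reading and the certified 68Ge activity decay-corrected to the test date; all numbers used are placeholders, not values from Table 1.

```python
import math

T_HALF_GE68_DAYS = 270.95  # half-life of 68Ge quoted above

def rd_calibrator(a_calibrator_mbq, a_ge68_cert_mbq, t_days):
    # Assumed form of the relative difference: the certified activity is
    # decay-corrected over t days and compared with the mean reading.
    a_ref = a_ge68_cert_mbq * math.exp(-math.log(2.0) * t_days / T_HALF_GE68_DAYS)
    return (a_calibrator_mbq - a_ref) / a_ref * 100.0

# Placeholder example: a 50 MBq certified source measured ~400 days later.
mean_reading_mbq = 17.9  # hypothetical mean of the 10 repeated measurements
rd = rd_calibrator(mean_reading_mbq, 50.0, 400.0)
print(f"RD_calibrator = {rd:+.2f}% (acceptable: {abs(rd) <= 10.0})")
```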
PET system. The measurements were performed on the reconstructed images. In order to be independent of the manufacturers, the same software (MIPAV v4.1.2, CIT/NIH, Maryland, USA) was used: 15 cm diameter ROIs were drawn on 15 consecutive slices in the middle of the homogeneous phantom, which represented a total thickness of at least 30 mm on the systems having the smallest slice thickness. The difference between these 15 average activity concentrations was analysed using repeated-measures ANOVA. Assuming good homogeneity of the activity concentration in the phantom, no significant difference between the 15 consecutive activity values was expected. If this hypothesis was confirmed, the mean activity concentration for each PET system (called A PET) was calculated. Then, the accuracy of each PET system was evaluated by calculating the relative difference RD PET (%) between the mean activity concentration of each PET system (A PET) and the phantom activity concentration (A phantom). Furthermore, a relative difference RD PET,i (%) was calculated for each slice i of the 15 consecutive slices, and the maximum deviation among the 15 slices was reported for each PET system. As previously reported [7][8], a modulus of RD PET less than or equal to 10% was generally accepted. By extension, a modulus of RD PET,i less than or equal to 10% was also accepted. If the repeated-measures ANOVA resulted in a significant difference between the 15 consecutive activity concentration values of a PET system, and/or an RD PET of more than 10%, a new calibration of the PET system was required. Statistical analysis consisted of assessing the difference between the mean values A PET of the different systems and the corresponding phantom activities with a Wilcoxon matched-pairs test (PRISM 4.0b, 2004, GraphPad Software, USA). The test was performed for the 2D mode data, for the 3D mode data and for the whole set of data. A visual analysis of all images was performed, looking for potential artefacts. For all statistical tests, the significance level was set at 0.05. Dose calibrator 11 dose calibrators were tested (Table 1). For each of them, the repeated-measures ANOVA resulted in no significant difference between the 10 repeated measurements of the standard 68Ge source. This result allowed the mean activity A calibrator to be used to evaluate the accuracy by calculating the relative difference RD calibrator. As shown in Table 1, all the moduli of RD calibrator were less than 10% (10 out of 11 were lower than 5% and the remaining one was close to this limit at 5.63%). The Wilcoxon matched-pairs test resulted in no significant difference between the A calibrator values of the 11 dose calibrators and the 68Ge activities. Clock accuracy The maximum difference observed between the dose calibrator and PET system clocks in the 11 centres was less than 2 minutes, thus no decay correction was applied for 18F. Reconstructed images Calibration factors. 13 datasets were acquired (four in 2D mode and nine in 3D mode). Because of a missing DICOM tag value in one 3D dataset (one PET system), the quantitative analysis could not be performed. A new control of this system was not done since this centre never included a patient. For each of the 12 other datasets (10 PET systems), the repeated-measures ANOVA resulted in no significant difference between the 15 activity concentration values estimated from 15 consecutive slices (Table 2 and Table 3). This result allowed the mean activity concentration value A PET to be used to evaluate the accuracy by calculating the relative difference RD PET. As shown in Table 2, the values of RD PET ranged from −9.97% to +2.30% with a median value of −2.94% in 2D mode, and from −1.67% to +9.60% with a median value of −0.36% in 3D mode: all the moduli of RD PET were less than 10%, and the maximum deviation of RD PET,i among the 15 consecutive slices of each PET system was also less than 10% for all the PET systems. The Wilcoxon matched-pairs test resulted in no significant difference [Figure 1A]. Four datasets showed concentric artefacts in transaxial slices. Horizontal and vertical profiles showed good symmetry for all the 12 datasets [Figure 2].
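A minimal sketch of the corresponding PET-side check described above is given below: it computes RD PET and the per-slice RD PET,i from the 15 ROI means against the phantom activity concentration and flags the 10% tolerance. The slice values are synthetic placeholders (roughly 100 MBq in 5550 mL, i.e. ~18 kBq/mL), not data from Table 2.

```python
import numpy as np

def pet_calibration_check(slice_means, a_phantom, tol_pct=10.0):
    # RD_PET: relative difference between the mean ROI activity concentration
    # and the phantom concentration; RD_PET,i: the same quantity per slice.
    slice_means = np.asarray(slice_means, dtype=float)
    rd_i = (slice_means - a_phantom) / a_phantom * 100.0
    rd_pet = (slice_means.mean() - a_phantom) / a_phantom * 100.0
    worst = rd_i[np.abs(rd_i).argmax()]
    ok = abs(rd_pet) <= tol_pct and np.all(np.abs(rd_i) <= tol_pct)
    return rd_pet, worst, ok

# Synthetic example: 15 slice means scattered around 18.0 kBq/mL.
slices = 18.0 + np.random.default_rng(1).normal(0.0, 0.3, size=15)
rd_pet, worst, ok = pet_calibration_check(slices, 18.0)
print(f"RD_PET = {rd_pet:+.2f}%, worst slice = {worst:+.2f}%, within tolerance: {ok}")
```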
Discussion PET using 2-[18F]fluoro-2-deoxy-D-glucose (18F-FDG) has become a major player in the field of imaging in oncology and its applications are rapidly expanding, despite 18F-FDG having its limitations. Although not likely to replace 18F-FDG, emerging data suggest that other fluorinated PET tracers have many potential uses in all phases of the anticancer drug development process, in basic cancer research and in clinical oncology [9]. Over the past two decades, constant efforts have been made to identify a reliable fluorinated PET tracer able to accurately predict the response to therapy at an early stage, i.e. early during the course of therapy, avoiding the side-effects of an ineffective treatment and thus allowing a switch to another. Promising results are expected with 3′-deoxy-3′-fluoro-L-thymidine (18F-FLT) [10,11], namely for monitoring therapy. Indeed, changes in tumour proliferation induced by effective treatment are observed before volume changes, since a responding tumour cell will not synthesise new DNA, whereas it may continue to metabolise 18F-FDG to maintain different cellular functions. Clinical studies are required to confirm the potential of a new biomarker, and a large number of patients is most often necessary to obtain reliable qualitative and/or quantitative results. This leads to the design of multicentre clinical trials involving several nuclear medicine departments, permitting rapid patient inclusion, which is consistent with the rapid development of research on PET tracers and on PET systems. However, multi-centre clinical trials with a new PET tracer are faced with different problems, the most important being the supply of this new tracer to different and distant nuclear medicine departments. Moreover, in the case of protocols designed for the evaluation of tumour response to treatment, the PET tracer needs to be provided at several timepoints during the course of therapy, while its production is not always completely reliable, in particular the automated radiosynthesis. Missing, for instance, the last 18F-FLT-PET scan due to radiosynthesis failure may invalidate the whole PET dataset acquired in one patient, due to the impossibility of evaluating the response at the end of the neoadjuvant chemotherapy. Another important issue concerns the consistency of data collected in multi-centre clinical trials, i.e. ensuring the reliability of data acquired in different nuclear medicine departments, on different systems and with different data reconstruction methods [10]. For our national study, UNICANCER, the sponsor of the clinical trial, approved the development of a quality control procedure to verify that the whole acquisition chain was properly calibrated and to qualify each participating nuclear medicine department. This step is a prerequisite to allow the physicians to quantify the PET activity concentration in patients included in our multi-centre clinical trial. Ensuring that all PET images were of similar quality, as if they had been produced on a single PET system, would require much higher standards.
This would involve equalizing image contrast recovery (which covers spatial resolution) and matching signal-to-noise ratios. However, the tests performed in this study covered only the accuracy of the calibration process performed in each participating centre, in order to facilitate the analysis of errors, and this is a first prerequisite for the pooling of data. Our approach included all the equipment involved in the final analysis of PET images [12], not only in order to verify the devices themselves, but also to facilitate the identification of errors in the subsequent chain [13]. Thus, our verification process started with a careful check of the dose calibrators using a certified 68Ge source. Indeed, each PET scanner must be calibrated in terms of activity concentration, allowing the computation of SUV, in which a scaling to the injected activity is performed, this activity itself being measured on the dose calibrator. In our study, all the dose calibrators revealed good reproducibility and accuracy [14]. This result is due to the stringent daily controls that have been strengthened by the French regulations and that are performed by the radiopharmacists of each nuclear medicine department in charge of the preparation of radiopharmaceuticals. The dose calibrator and PET system clocks were well synchronised in all centres. In some of them the synchronisation was performed by the local radiophysicist the morning before the checking procedure started. Indeed, the checking procedure guide was sent a few days in advance and therefore the local radiophysicist sometimes pre-empted this assessment. Accordingly, these excellent results of clock synchronisation may not reflect the reality of routine practice, but hopefully operators will now pay attention to the importance of these controls. Data concerning the PET system calibration were acquired to a high statistical quality to facilitate the detection of systematic errors during the subsequent analysis of reconstructed images. The PET system calibration of the 11 centres showed no major deviation, and the uncertainty of the whole acquisition chain was found to be within the 10% tolerance. Nevertheless, in some centres images were generated containing concentric ring artefacts, which had not been identified previously on patient images, probably due to the lower number of events in routine clinical images when compared to the phantom acquisitions. These artefacts appeared on the three datasets reconstructed with the LOR-RAMLA algorithm but also on one dataset reconstructed with the FORE-FBP algorithm. The corresponding CT images were inspected but no circular artefact could be seen on them. A circular artefact in fan geometry should appear the same in each re-binned parallel projection for each angle. In our study, as the phantom was centred on the centre of the field of view, the artefacts should be centred on the centre of the re-binned projections. Thus these concentric ring artefacts may be explained by a missing or faulty geometric arc correction. Because these concentric ring artefacts were not present on patient images and because no abnormality was seen on profile analysis, they were not thought to represent a clinical problem. Our results showed that all ten centres whose images could be analysed could participate in this multi-centre clinical trial without compromising its robustness. The centre whose images could not be analysed because of technical problems did not include any patients in this trial.
These results are consistent with the fact that all systems are well monitored, with preventive maintenance and regular quality control programs implemented by radiophysicists [15,16]. PET quality controls are based on manufacturers' and professional associations' recommendations. Regulation should make them mandatory in the near future, but in the meantime it appears necessary to establish specific quality control procedures for all multi-centre clinical trials. These procedures could distinguish two levels: the first level concerning the usual quality controls required by the system manufacturers and made mandatory by national regulations, and the second level concerning specific tests for clinical trials, as defined by the investigator. On the one hand, as discussed by Geworski et al. [7], the daily quality control procedure and visual inspection of images give a rough impression of the PET system's performance, but are not sufficient to validate a system used for quantitative studies. On the other hand, our experience has shown that all the investigated PET systems undergo a more complete monitoring procedure, and that there was an improvement in the subsequent qualification processes of further multi-centre trials, due to training and increasing experience [13]. Our study proposed a simple checking procedure that was quite easily applicable in a few centres, but it should be adapted if extended to a large number of centres because of the constraints due to the use of the same certified standard source and the same phantom, and the necessity of the same operator going to each centre. However, more and more clinical trials are now carried out at a multi-centre level and require the qualification of each centre. A more complete harmonization is now proposed by the EANM through the EARL program [17], covering contrast recovery equalization and signal-to-noise ratio matching. Nevertheless, the tests performed in the framework of this study appear to be an intermediate step between the simple quality control procedure, which does not guarantee the accuracy of the scanner calibration, and the EARL procedure. In practice, our procedure is a prerequisite for the latter. Conclusion The criteria used to validate the calibration of the PET systems included in our multi-centre clinical trial exhibited an accuracy better than 10% for all sites except one. These first results confirm that multi-centre protocols can lead to results as robust as if they had been acquired on a single system. The whole acquisition chain should be checked regularly, either for multi-centre clinical trials or even to assess the constancy of performance in a single centre for patient follow-up. Our work showed that a straightforward common procedure for several centres can be established easily. All the sites investigated had a regular quality control program, which could explain the good results observed and could lead to assigning the validation procedure to the local physicist before the first patient inclusion.
5,227.6
2013-03-13T00:00:00.000
[ "Engineering", "Medicine" ]
Phase-matching-induced near-chirp-free solitons in normal-dispersion fiber lasers Direct generation of chirp-free solitons without external compression in normal-dispersion fiber lasers is a long-term challenge in ultrafast optics. We demonstrate near-chirp-free solitons with distinct spectral sidebands in normal-dispersion hybrid-structure fiber lasers containing a few meters of polarization-maintaining fiber. The bandwidth and duration of the typical mode-locked pulse are 0.74 nm and 1.95 ps, respectively, giving a time-bandwidth product of 0.41 and confirming the near-chirp-free property. Numerical results and theoretical analyses fully reproduce and interpret the experimental observations, and show that the fiber birefringence, normal dispersion, and nonlinear effects follow a phase-matching principle, enabling the formation of the near-chirp-free soliton. Specifically, the phase-matching effect confines the spectrum broadened by self-phase modulation, and the saturable absorption effect slims the pulse stretched by normal dispersion. Such a pulse is termed a birefringence-managed soliton because its two orthogonal-polarized components propagate in an unsymmetrical "X" manner inside the polarization-maintaining fiber, partially compensating the group delay difference induced by the chromatic dispersion and resulting in the self-consistent evolution. The property and formation mechanism of the birefringence-managed soliton fundamentally differ from those of other types of pulses in mode-locked fiber lasers, which will open new research branches in laser physics, soliton mathematics, and their related applications. Introduction A soliton initially refers to a special type of wavepacket capable of propagating undistorted over long distances, which has been broadly discovered in the fields of plasma physics 1,2 , fluid dynamics 3-5 , Bose-Einstein condensates 6-8 , and optical networks 9 . In the context of fiber optics, temporal solitons were proposed for optical communications as early as 1973 10 , and were experimentally observed in single-mode fiber (SMF) by several groups 11-13 . After that, various optical solitons were achieved in mode-locked fiber lasers that took into account not only the dispersion and Kerr nonlinearity but also the laser gain and loss 14-18 . The formation of solitons in fibers and fiber lasers arises from the interplay between anomalous dispersion and the self-phase modulation effect, enabling the chirp-free feature and the Sech2 intensity profile. Owing to periodic perturbations such as amplification and loss, the soliton maintains its nature by dispersing part of its energy, displayed as Kelly sidebands 19,20 . By setting the cavity dispersion into the normal regime, fiber lasers support self-similar pulses and dissipative solitons with the assistance of a saturable absorber (SA). The realization of self-similar pulses depends crucially on the self-similar amplification in the gain fiber, enabling parabolic spectral and temporal profiles 21,22 . In contrast, dissipative solitons originate from the mutual interaction of the nonlinear gain, loss, normal dispersion, and fiber nonlinearity, taking on a variety of spectral/temporal shapes 23-26 . Owing to the coaction of normal dispersion and the SPM effect, both types of pulses are highly chirped, and the direct generation of chirp-free solitons in normal-dispersion fiber lasers has remained a long-term challenge for decades.
Besides the SPM and dispersion effects, cross-phase modulation (XPM) should be included to describe pulse propagation in birefringent fibers or fiber lasers. Based on the aforementioned effects, polarization-locked vector solitons 27 and group-velocity locked vector solitons 28 have been achieved in low-birefringence fibers or fiber lasers. For hybrid-structure lasers comprising low-birefringence SMFs and high-birefringence polarization-maintaining fibers (PMFs), the polarization orientation and cavity effects should be considered in particular. In previous schemes, the PMF is usually combined with a polarizer to form a Lyot filter that controls the wavelength and bandwidth of scalar solitons 29,30 . The case becomes much more complex without polarizers or other polarization-sensitive components, which, however, has received less attention. Here, we demonstrate near-chirp-free solitons with spectral sidebands in an all-normal-dispersion ytterbium-doped fiber (YDF) laser comprising a section of PMF. The soliton has a bandwidth and duration of 0.74 nm and 1.95 ps, respectively, giving a time-bandwidth product of 0.41. To meet the periodic boundary condition of the fiber laser, the pulse follows a phase-matching principle that incorporates the birefringence, normal-dispersion, and nonlinear effects. Simulation results and analytic solutions fully reveal the soliton formation mechanism: the phase-matching effect confines the spectrum broadened by self-phase modulation, and the saturable absorption effect slims the pulse stretched by normal dispersion. Such a unique type of pulse is termed a birefringence-managed soliton (BMS) to highlight the role of the PMF in implementing the self-consistent evolution. Our work fills the gap of directly generating near-chirp-free solitons in normal-dispersion fiber lasers without external compression, which is quite attractive for wavelength bands below 1.3 μm, where silica fiber dispersion is typically normal. Principle and simulation/experiment results of BMSs The configuration of the laser cavity is shown in Fig. 1a, which includes a section of gain fiber, an SA, and a few meters of PMF. The other fibers and pigtails of the fiber components are standard low-birefringence SMFs. A polarization-insensitive isolator ensures the unidirectional propagation of the pulse. We construct an all-normal-dispersion YDF laser containing 1.5 m of PMF as a typical research platform. The total length and net dispersion of the fiber laser are ~9 m and 0.214 ps2, respectively (see details in Supplementary Section I). When the pulse circulates in the cavity, the coupling behavior from SMF to PMF must be taken into account, as outlined in Fig. 1b. For a certain polarization orientation θ (i.e., the angle between the y-polarized component u y and the fast axis of the PMF), the two orthogonal-polarized components along the slow (u s) and fast (u f) axes of the PMF can be expressed in terms of the u x and u y components along the SMF, as given in Eq. (1). At the output terminal of the PMF, without loss of generality, the u s and u f components are assumed to translate into the u x and u y components along the SMF separately. Considering the mode coupling described by Eq. (1), we simulate the pulse evolution in the YDF laser according to the experimental parameters. The propagation of pulses in the SMF and PMF follows the coupled Ginzburg-Landau equations (Eq. (2)), which take into account the dispersion, birefringence, SPM, XPM, gain, and loss.
Equation (2) is numerically solved with the split-step Fourier approach 31-33 (see details in "Methods"). In our simulation, near-chirp-free BMSs are formed when θ ranges from 0.1 π to 0.4 π, and giant-chirp dissipative solitons are achieved for θ of 0 or 0.5 π, which coincides with the experimental results: BMSs are observed in most cases, while dissipative solitons are obtained at several special states (see details in Supplementary Section II). As a typical example, we first compare the simulated BMS (Fig. 1c, d) with the measured counterpart (Fig. 1e, f) for a PMF length of 1.5 m and θ of π/4. The saturation energy is set to 33 pJ, comparable with the pulse energy of 38 pJ obtained in the experiment. As shown in Fig. 1c, e, the spectra of the simulated and measured BMSs show quasi-Sech2 profiles with sharp sidebands. It is worth noting that the two orthogonal-polarized components exhibit mirrored spectral profiles with asymmetric sidebands. Taking the u x component as an example (red solid curve), the stronger sideband is farther from its central wavelength than the weaker one. Such sidebands are formed in a normal-dispersion regime, which fundamentally differs from the symmetrically distributed Kelly sidebands that exist only in the anomalous-dispersion regime. The formation mechanism of the sidebands will be elaborated later with the phase-matching theory. The temporal profiles and phases of the BMS and of the u x and u y components are measured by frequency-resolved optical gating (FROG), as illustrated in Fig. 1f. One can observe that the two orthogonal-polarized components exhibit asymmetric quasi-Lorentzian profiles, with their phases approaching constant values at the central parts, comparable with the chirp-free pulses reported previously 34-36 . The corresponding autocorrelation trace of the BMS is broader than those of the two orthogonal-polarized components (Fig. S3, Supplementary Section III), also validating the difference of the retrieved pulse profiles in Fig. 1f. The measured (simulated) bandwidth and duration are 0.74 nm (0.89 nm) and 1.95 ps (1.95 ps), respectively, giving a time-bandwidth product (TBP) of 0.41 (0.487) with the Sech2 fit, which illustrates the near-chirp-free feature of the BMS (Fig. 1d). The slight deviation from the Fourier-transform-limited value is attributed to the spectral and temporal separations between the two components. For instance, the bandwidth, duration, and TBP of u x are 0.62 nm, 1.23 ps, and 0.22, respectively, indicating that each component is near-chirp-free and deviates from the Sech2 intensity profile. Such near-chirp-free BMSs include two unique components with asymmetric spectral sidebands, which fundamentally differ from the giant-chirp self-similar pulses 37 and dissipative solitons 23 reported in the normal-dispersion regime and from chirp-free solitons in the anomalous-dispersion regime. This indicates that the BMS follows a pulse-shaping mechanism distinct from that of other types of pulses.
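The quoted time-bandwidth products can be checked with a one-line conversion of the spectral width into frequency; the sketch below reproduces the 0.41 figure, assuming a centre wavelength near 1030 nm for the ytterbium-doped fiber laser (the exact operating wavelength is not stated in this excerpt).

```python
# Consistency check of the quoted time-bandwidth products (TBP).
c = 3.0e8          # speed of light, m/s
lam0 = 1030e-9     # assumed centre wavelength (ytterbium band), m

def tbp(dlam_nm, dt_ps):
    dnu = c * dlam_nm * 1e-9 / lam0**2   # spectral width converted to Hz
    return dnu * dt_ps * 1e-12

print(f"BMS:  TBP = {tbp(0.74, 1.95):.2f}")   # ~0.41, as quoted
print(f"u_x:  TBP = {tbp(0.62, 1.23):.2f}")   # ~0.22, as quoted
```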
Informed by solitons formed in the anomalous-dispersion regime 20,38 , we propose a phase-matching theory to unveil the underlying physics through mathematical analysis. During the periodic propagation, the pulse experiences SPM, XPM, and perturbations such as gain/loss and mode coupling; the optical spectrum broadens and disperses new frequencies ω, each travelling with its own velocity. Then, the central frequency ω 0 of each orthogonal-polarized component and the newly emerged frequency ω accumulate unequal phases per roundtrip, depending on the birefringence, dispersion, and nonlinearity of the cavity. Once new frequencies deviate from the central frequency, the phase difference between them changes continuously from 0, through π, to 2 π, as described in Eq. (3) (see the detailed derivation in Supplementary Section IV). Here a and b represent the contributions of fiber dispersion and birefringence, respectively. β 2 and Δn i represent the dispersion and the refractive index difference between the two components for a fiber of length L i . Δω x/y is the frequency separation between the central frequency ω 0 and the newly emerged frequency ω of the u x/y component. ϕ nl is the nonlinear phase shift accumulated in one roundtrip. For each orthogonal-polarized component, the newly emerged frequencies of adjacent roundtrips co-propagate inside the cavity and have a phase difference Δφ x/y that depends on Δω x/y . To give a clear demonstration, we consider the interference of two light fields with intensity I 0 and phase difference Δφ x/y , and obtain the phase-matching relation for each orthogonal-polarized component, as expressed in Eq. (4) (the details are given in Supplementary Section V). Consequently, as Δω x/y increases, the newly emerged frequencies of adjacent roundtrips interfere constructively, destructively, and then constructively again, as denoted by the dotted curves in Fig. 1c. The spectral profile, sideband intensity, and sideband position of each component are asymmetric, which differs from the conventional soliton with a symmetric spectrum and sidebands 20 . Such unique spectra result from the asymmetric, birefringence-related phase-matching effect, in which the stronger sidebands appear for a phase difference of 2 π, as described by Eq. (4) and Fig. 1c. In the analytical calculation, the sideband separations are unequal and one of them is abandoned as it exceeds the gain bandwidth of the YDF. Notably, the weaker sideband stems from the mode coupling between the two orthogonal-polarized components at the input terminal of the PMF each roundtrip. As the two components of the BMS have different central wavelengths, the stronger sideband is farther from the central wavelength than the weaker one for each component. This indicates that the phase-matching effect not only confines the BMS spectrum but also produces the asymmetric spectral sidebands.
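The selection rule can be illustrated with the elementary two-beam interference picture invoked above. The sketch below is purely illustrative: the coefficients of the roundtrip phase difference are hypothetical placeholders, since the explicit forms of Eqs. (3) and (4) are not reproduced in this excerpt.

```python
import numpy as np

def interference(dphi, i0=1.0):
    # Two equal-intensity fields with phase difference dphi interfere as
    # I = 2*I0*(1 + cos(dphi)): constructive at 0 and 2*pi (strong sideband),
    # destructive at pi (spectral dip), as traced by the dotted curves in Fig. 1c.
    return 2.0 * i0 * (1.0 + np.cos(dphi))

# Hypothetical roundtrip phase difference vs. frequency offset: a quadratic
# (dispersion) term, a linear (birefringence) term, and the nonlinear shift.
a, b, phi_nl = 2.0e-3, 0.15, 0.5            # placeholder coefficients
dw = np.linspace(0.0, 60.0, 601)            # frequency offset, arbitrary units
dphi = a * dw**2 + b * dw + phi_nl
strong = dw[np.argmin(np.abs(dphi - 2.0 * np.pi))]   # first 2*pi crossing
print(f"strong sideband predicted at dw ≈ {strong:.1f} (arb. units)")
```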
By diminishing the PMF length from 1.5 m to 1.1 m in steps of 0.05 m, we record the spectral separations between the stronger sideband and the central wavelength for the u x and u y components. The theoretical values Δλ x/y (calculated from Δω x/y ) based on Eq. (S8) are derived from Eq. (3) by setting Δφ x/y to 2 π. As shown in Fig. 1g, h, both the theoretical and simulation results show that the normalized spectral separation increases as the PMF is shortened, which agrees with the experimental observations and confirms the validity of the phase-matching theory. With increasing pump strength, the simulation and experimental results display similar evolution behavior (e.g., gradually increased sideband intensity and decreased pulse duration), as illustrated in Fig. 2a, b. To compare the evolution trends clearly, the bandwidth, spectral separation, and pulse duration are normalized using (F(x)−F min )/(F max −F min ), as shown in Fig. 2c, d. During this evolution, the sideband intensity increases faster than that of the central spectrum, and two sidelobes appear on the wings of the pulse. This phenomenon is understood by noting that the spectrum of the BMS is strongly limited by the phase-matching effect. For a higher intensity, the spectrum is broadened by nonlinear effects while being confined by the phase-matching effect. The redundant energy then transfers to the frequencies at which the phase-matching condition is satisfied. As the SPM effect induces a larger nonlinear phase shift ϕ nl 39 , according to the phase-matching condition in Eq. (3) the sideband separation of the BMS enlarges correspondingly with the pump power. We further investigate the evolution of BMSs as a function of the polarization orientation θ, as shown in Fig. 3a, b. In the experiment, the polarization controller placed before the splice point between the SMF and PMF is used to change the polarization orientation of the pulse. When θ increases, stronger mode coupling occurs between the two orthogonal-polarized components, resulting in larger spectral sidebands. This evolution trend is somewhat similar to that of the well-known Kelly sidebands, where a stronger perturbation leads to a larger resonant peak, and will be fully elaborated later. Figure 3c, d shows that the bandwidth increases linearly while the duration decreases slightly due to the near-chirp-free feature of the BMS. As part of the energy of the BMS is transferred to the sidebands, the nonlinear phase shift between the central frequency and the sideband decreases slightly. Thus, the spectral separation reduces according to Eq. (S8), as experimentally confirmed by the red curve in Fig. 3c. As a matter of fact, the BMS can be regarded as a unique group-velocity locked vector soliton arising from the co-action of chromatic dispersion, fiber birefringence, and nonlinearity. By tuning the pump strength or θ, the nonlinear effect changes and the central frequencies of the two components shift slightly to realize a new balance in the same fiber laser. For comparison, we replace the polarization-insensitive isolator with a polarization-sensitive isolator to form a Lyot filter inside the cavity. In this case, the modulation period of the Lyot filter is ~1.7 nm for the PMF length of 1.5 m, which is too narrow to achieve mode locking in our fiber laser. By diminishing the PMF length to ~0.4 m, it is confirmed that the formation of the BMS is dominated by the phase-matching effect rather than the Lyot filtering effect 30 . Based on the aforementioned results, the formation of the BMS is summarized as follows. In the frequency domain, the spectral broadening caused by the nonlinear effects and periodic perturbations is counterbalanced by the phase-matching effect, resulting in a limited bandwidth with downshifted and upshifted spectral sidebands. In the time domain, the normal-dispersion-induced stretching of the pulse is compensated by the saturable absorption and phase-matching effects, which helps to establish the steady-state pulse evolution in a slowly varying manner. The PMF length dominates the phase-matching bandwidth, which offers a flexible approach to controlling the bandwidth and duration of the BMS. Buildup process and intracavity evolution of BMSs in the YDF laser Based on the Ginzburg-Landau equations, we investigate the establishment process of the BMS versus roundtrips, as displayed in Fig. 4a, b. The initial signal is a low-intensity noise pulse and θ is 0.25 π.
During propagation through each fiber component, the light field is multiplied by the corresponding transmission matrix. For the u x and u y components shown in Fig. 4c-f, both first grow exponentially and then gradually reach a sub-stable state after 10 roundtrips. Subsequently, the bandwidth increases while the pulse duration decreases with the roundtrips. Simultaneously, new spectral components emerge due to the coaction of the SPM and XPM effects. After that, the noise components are suppressed to support the ultrashort high-intensity pulse under the well-known saturable absorption effect 31,40-42 . When the quasi-stable state is established at 35 roundtrips, two sharp sidebands appear incrementally on the spectra. The BMS evolves to the self-consistent steady state at 50 roundtrips, in which the two components have asymmetric spectral sidebands, as also validated in Fig. 1c, d. Apart from the sideband symmetry, the formation process of the BMS is similar to that of its two components. With the assistance of the dispersive Fourier transform (DFT) technique, capable of mapping the spectrum into the time domain 43-46 , we record the buildup process of the BMS and its two orthogonal-polarized components for comparison with the simulation counterpart. The left panel of Fig. 5 shows the real-time spectral evolutions of the BMS and its two components, and the corresponding field autocorrelation trajectories are displayed in the right panel, obtained by applying a Fourier transform to each single-shot spectrum. As illustrated in Fig. 5a, b, after the relaxation oscillation and beating process, the spectrum of the BMS broadens drastically and part of the energy transfers to the spectral edges and forms the sidebands. After that, the laser reaches the steady state at the ~2400th roundtrip. The two orthogonal-polarized components evolve similarly to the BMS while displaying asymmetric spectral sidebands (Fig. 5c-f), which coincides with the simulation counterpart and further validates the reliability of the proposed model.
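The post-processing behind the right panel of Fig. 5 can be sketched in a few lines: by the Wiener-Khinchin relation, Fourier transforming each DFT single-shot spectrum yields the field autocorrelation trace. The spectrum below is synthetic and used only to show the processing step.

```python
import numpy as np

def field_autocorrelation(single_shot_spectrum):
    # Inverse Fourier transform of the spectral intensity gives the first-order
    # field autocorrelation (Wiener-Khinchin); returned as a normalized magnitude.
    ac = np.fft.fftshift(np.fft.ifft(np.asarray(single_shot_spectrum, dtype=float)))
    return np.abs(ac) / np.abs(ac).max()

# Synthetic single-shot spectrum: a main peak plus a weak sideband, which
# produces the characteristic fringes on the autocorrelation trajectory.
f = np.linspace(-1.0, 1.0, 2048)
spectrum = np.exp(-(f / 0.05) ** 2) + 0.3 * np.exp(-((f - 0.4) / 0.02) ** 2)
trace = field_autocorrelation(spectrum)
print(trace.argmax(), trace.max())   # peak at the centre, normalized to 1
```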
The dynamic evolution of the BMS along the cavity is depicted in Fig. 6. In a single roundtrip, the pulse sequentially propagates through 1 m of SMF, 0.25 m of YDF, 1.05 m of SMF, the optical coupler, 1.5 m of SMF, the SA, and 3.05 m of SMF. After that, the BMS enters the 1.5 m long PMF, where the u f and u s components propagate in an "X" manner. This phenomenon results from the birefringence-induced collision and walk-off effect between the two orthogonal-polarized components 47 . At the end of the PMF, the BMS reaches the SMF to start the next circulation. A unique feature is that the collision point deviates from the PMF center, which partially counteracts the group delay difference induced by the chromatic dispersion of the cavity. For example, the birefringence-induced group delay difference is given as 0.38 ps, while that induced by chromatic dispersion is calculated as −0.12 ps. Combined with the saturable absorption effect, the two components centered at different wavelengths finally evolve to the self-consistent state, which is somewhat similar to that of group-velocity locked vector solitons 48 . The asymmetric spectrum of each component is a steady-state phenomenon, which differs from the transient asymmetric spectrum arising from the periodic coupling between two orthogonal-polarized components 49 . We extract the key pulse parameters of the BMS and its two orthogonal-polarized components during propagation through the cavity for θ of 0.15 π, 0.25 π, and 0.35 π. Unlike the BMS in Fig. 7a, the durations of the two components increase monotonically along the cavity due to group-velocity dispersion and decrease abruptly due to the mode coupling in the PMF (Fig. 7b). These differing evolutions arise from the varying pulse separation between the two components, as confirmed by the black curve in Fig. 7c. Interestingly, the bandwidth of each component remains almost unchanged throughout the cavity, indicating that nonlinear effects play a less important role in the formation of the BMS. The TBP of the BMS spans from 0.61 at the WDM to 0.31 at the middle of the PMF. For comparison, the TBP of each orthogonal-polarized component grows from 0.26 to 0.45 in the SMF and YDF and decreases abruptly to 0.23 at the connection point between the SMF and PMF. An interesting phenomenon is that the sideband intensity grows with θ (corresponding to Fig. 3a), which can be explained by noting that a larger θ corresponds to a stronger variation of the pulse properties, as confirmed by Fig. 7a-d. As a matter of fact, the BMS is a ubiquitous phenomenon in normal-dispersion fiber lasers, independent of the cavity length or mode-locking elements. For example, similar near-chirp-free solitons have also been achieved in normal-dispersion erbium-doped fiber lasers, as shown in Supplementary Section VII. Discussions and conclusions In conclusion, we proposed a phase-matching theory to explain the formation of the unique near-chirp-free BMS in normal-dispersion SMF-PMF lasers. Based on coupled Ginzburg-Landau equations, we simulated the pulse evolution as a function of the PMF length, pump power, and polarization orientation. The theoretical analysis and simulation results fully reproduce and explain the experimental observations. They indicate that the phase-matching effect confines the spectrum broadened by self-phase modulation and that the saturable absorption effect slims the pulse stretched by normal dispersion, enabling the generation of the near-chirp-free soliton. During the evolution, the two orthogonal-polarized components propagate in an "X" manner in the PMF, which partially compensates the group delay difference induced by chromatic dispersion and results in the unique soliton trapping. The BMSs, which exist universally in the normal-dispersion regime, are independent of the cavity length or mode-locking elements in SMF-PMF lasers. Similar BMSs have been achieved in four different normal-dispersion fiber lasers with cavity lengths ranging from 7.8 m to 25.6 m. Our results confirm that birefringence can be adopted to shape solitons in the normal-dispersion regime, in contrast to the balance of nonlinear and anomalous-dispersion effects that results in standard solitons. From an application perspective, the proposed scheme is capable of producing near-chirp-free solitons in normal-dispersion cavities without external compression, which is especially attractive for wavelengths below 1.3 μm, where silica fiber dispersion is typically normal 50-52 . This principle can be extended to propagate near-chirp-free solitons over long distances in normal-dispersion regimes with periodically spaced SMF and PMF. The fiber laser can deliver switchable near-chirp-free BMSs and giant-chirp dissipative solitons (see Supplementary Videos S1 and S2 for details) and can thus also work as a flexible multi-functional pulse source. As the pulse energy of the BMS is confined to a limited level, high-order harmonic mode-locking may be achieved in long-cavity normal-dispersion fiber lasers based on the proposed scheme.
Experimental setup The experimental setup of the fiber laser is presented in Supplementary Fig. S1, and the net cavity dispersion is calculated as 0.214 ps2. Numerical simulations For the coupled Ginzburg-Landau equations (Eq. (2)), u x and u y represent the envelopes of the two polarization components of the pulse, t is the relative time in the moving frame, and z is the propagation distance. Δn, 2β = 2πΔn/λ, and 2δ = 2βλ/(2πc) are the differences in refractive index, wave number, and inverse group velocity, respectively, which relate to the fiber birefringence. k 2 is the group-velocity dispersion, γ is the nonlinear coefficient, and l represents the loss. g = g 0 exp(−E p /E s ) is the saturable gain and Ω g is the gain bandwidth, where E p , g 0 , and E s are the pulse energy, small-signal gain, and gain saturation energy, respectively. The semiconductor saturable absorber is modeled by T = 0.45 − T 0 /[1 + P(τ)/P sat ], where T 0 is the modulation depth, P(τ) is the instantaneous power and P sat is the saturation power. The propagation of the pulse in fiber is modeled by the standard split-step Fourier transform technique.
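A minimal, scalar illustration of the numerical scheme described above is sketched below: one split-step Fourier segment with the saturable gain g = g0·exp(−Ep/Es) (filtered by a simple parabolic gain profile) followed by the quoted SA transmission. It is a simplification only; the actual model solves the coupled two-polarization Eq. (2) with birefringence and XPM, which are omitted here, and the sign conventions and parameter values are assumptions.

```python
import numpy as np

def split_step_segment(u, dt, dz, nz, beta2, gamma, g0, Es, omega_g):
    # Scalar split-step Fourier propagation over one fiber segment:
    # linear half step (dispersion + filtered saturable gain) in the frequency
    # domain, full nonlinear (SPM) step in time, then another linear half step.
    w = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dt)
    for _ in range(nz):
        ep = np.sum(np.abs(u) ** 2) * dt                    # pulse energy
        g = g0 * np.exp(-ep / Es)                           # saturable gain
        lin = np.exp((1j * beta2 / 2.0 * w**2
                      + g / 2.0 * (1.0 - (w / omega_g) ** 2)) * dz / 2.0)
        u = np.fft.ifft(np.fft.fft(u) * lin)
        u = u * np.exp(1j * gamma * np.abs(u) ** 2 * dz)    # SPM
        u = np.fft.ifft(np.fft.fft(u) * lin)
    return u

def saturable_absorber(u, T0=0.30, P_sat=50.0):
    # SA transmission T = 0.45 - T0 / (1 + P/Psat), applied to the field.
    T = 0.45 - T0 / (1.0 + np.abs(u) ** 2 / P_sat)
    return u * np.sqrt(np.clip(T, 0.0, 1.0))

# Placeholder roundtrip fragment: a weak noise seed through gain fiber then SA.
t = np.linspace(-20e-12, 20e-12, 1024)
u0 = 1e-3 * np.random.default_rng(2).normal(size=t.size) * (1.0 + 0j)
u1 = saturable_absorber(split_step_segment(u0, t[1] - t[0], dz=0.05, nz=5,
                                           beta2=23e-27, gamma=4.7e-3, g0=5.0,
                                           Es=50e-12, omega_g=2e13))
print(np.sum(np.abs(u1) ** 2) * (t[1] - t[0]))   # energy after the fragment
```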
5,353.6
2022-01-25T00:00:00.000
[ "Physics" ]
Italian regional health service costs for diagnosis and 1-year treatment of ADHD in children and adolescents The main aim of this study was to estimate the costs associated with diagnostic assessment and 1-year therapy in children and adolescents enrolled in 18 ADHD reference centres. Data concerning 1887 children and adolescents from the mandatory ADHD registry database during the 2012–2014 period were analysed. The overall diagnostic and treatment costs per patient amount to €574 and €830, respectively. The ADHD centre, the school as sender, and the time to diagnosis constitute cost drivers. Non-pharmacological therapy was more expensive for patients concomitantly treated with drugs (€929) than for those treated with psychological interventions alone (€590; p = 0.006). This study gives the first reliable estimate of the costs associated with both diagnosis and treatment of ADHD in Italy. Although costs associated with mental disorders are difficult to estimate, continuing efforts are needed to define the costs and resources required to guarantee appropriate care, including for ADHD. Electronic supplementary material The online version of this article (doi:10.1186/s13033-017-0140-8) contains supplementary material, which is available to authorized users. Background The costs of psychiatric disorders have been scarcely investigated, particularly with regard to those that affect children and adolescents [1,2]. Methodological complexities explain the lack of comprehensive estimates of the economic impact associated with the burden of psychiatric disorders [3,4]. Worldwide, up to 20% of children and adolescents suffer from a neuro-psychiatric condition; in developed countries, mental and neurological disorders together account for 40% of the burden of all brain diseases, and for 35% in Europe alone [5][6][7][8]. Attention deficit hyperactivity disorder (ADHD) is a neurobiological disorder [9] characterized mainly by clinical manifestations such as difficulty in paying attention, impulsive behaviour, and a heightened level of physical activity, occurring more frequently and intensely than in other children of the same age or developmental level [10]. ADHD symptoms usually become more evident in school-aged children, are more frequent in boys than in girls (ratio 3:1), and tend to persist into adulthood [11]. ADHD ranks as the third most common mental disorder in children and adolescents [12]. Despite a pooled worldwide ADHD prevalence in children and adolescents of 5.3%, there is wide variability between and within countries [13]. Such variability in prevalence rates may be explained by the different methodologies, diagnostic procedures, and criteria used in the studies [14,15], as well as by the different settings and cultural approaches considered [16,17]. However, when standardized diagnostic and impairment assessment procedures are followed, prevalence does not seem to have changed over time nor to differ among the geographic locations considered [18].
According to national and international guidelines [19][20][21][22], ADHD treatment should be based on a multimodal approach combining psychosocial interventions with pharmacological therapies, and should take into consideration the subject's characteristics, including age, symptom severity, co-morbid disorders, cognitive level, and social and family context. The impairments of ADHD are multi-faceted and occur in multiple settings, and likewise the costs associated with ADHD affect multiple societal cost domains within and outside the healthcare sector [23,24]. Studies on the economic burden of ADHD within Italy, where healthcare is provided to all citizens, are currently lacking [25,26]. An understanding of the costs associated with ADHD in children and adolescents is important for public policy makers as a rationale for improving and planning public services for both diagnosis and treatment [27]. Overall, in addition to the human suffering they cause, psychiatric disorders are among the most expensive of all health problems in adults [24,28], even if evidence regarding the costs of psychiatric disorders has been slow to accumulate, particularly with regard to those that affect children and adolescents [29]. A literature review of the healthcare- and treatment-related economic impact of ADHD estimates the annual cost, specifically for children and adolescents, at $14,576 per individual, with an overall range from $36 billion to $52.4 billion considering an ADHD prevalence rate of 5% [30]. However, these estimates are incomplete and inaccurate because the majority of the existing studies did not fully assess all the potential costs related to an ADHD diagnosis [31]. A few reviews covering the attempts made to define the economic impact of ADHD through the limited number of available country-based studies highlighted a wide range in the magnitude of the societal cost estimates [30,[32][33][34][35][36][37][38]. In June 2011, an official ADHD regional registry was activated in the Lombardy Region, designed as a disease-oriented registry collecting information on all subjects who access ADHD centres for a suspected ADHD diagnosis [39][40][41], as part of a regional project aimed at defining a common approach to, and improving, ADHD diagnosis and therapy. In the Lombardy Region, during the study period, a network of 34 Child and Adolescent Neuropsychiatric Services (CANPS) provided care at the hospital (tier three) and community (tier two) levels for children and adolescents with neurologic, neuropsychologic and/or psychiatric disorders, and for their families. About 15% of the Italian pediatric population lives in this region. Regional health authorities are responsible for the accreditation of the ADHD reference centers in regional hospitals ("ADHD centers"), as specialized ADHD hubs (tier three) of the CANPS network. In this context, the objective of this study was to estimate the costs, from the National Health Service (NHS) perspective, associated with diagnostic assessment and 1-year therapy in children and adolescents aged 5–17 years enrolled in any of the 18 ADHD reference centres of the Lombardy Region between January 2012 and December 2014 for suspected ADHD.
Methods This study was designed as a review of patient medical records identified from the Regional ADHD Registry database, to estimate the costs of the diagnostic assessment and 1-year therapy for subjects with a suspected ADHD diagnosis. The research was approved by the Institutional Review Board of the IRCCS-Istituto di Ricerche Farmacologiche "Mario Negri" Milan, Italy. Written informed consent was obtained for all patients to put information in the registry database and analyze them anonymously. Setting This study is part of a specific project supported by the Regional Health Ministry and aimed to ensure appropriate ADHD management for every child and adolescent once the disorder is suspected and reported, and includes commonly acknowledged diagnostic and therapeutic procedures as well as educational initiatives for health care workers (child psychiatrists and psychologists) of the Lombardy Region's health care system. The project's participants are all the 18 ADHD centres of the Lombardy Region, the most economically important and populated Italian region with 1.690.127 citizens under 18 years old, and with an average income earned per person equal to € 24.005 in the study period. The ADHD centres, accredited by regional health authorities, are the hubs specialised in ADHD (Tier 3) of the Child and Adolescent Neuropsychiatric Services (CANPS) network and provide diagnosis and treatment care free, or at a nominal charge, working mainly on an outpatient basis and in close connection with educational and social services. ADHD centres are also responsible for the prescription of pharmacological therapies and their monitoring over time. Moreover, ADHD centres are responsible for inputting data into the official ADHD registry and for providing parent, teacher, and child training treatments [39][40][41]. Study population and pathways of care Anonymized, updated data of the official ADHD registry of the Lombardy Region (as of 31 August 2015) were available, with 3163 subjects enrolled in the June 2011-August 2015 period. The study population includes children and adolescents from January 2012 to December 2014 who had both a first outpatient visit at one of the 18 ADHD centres for a suspected ADHD diagnosis in the same period and had a complete diagnostic evaluation and treatment prescription at the time of data extraction. Our goal was to identify only children who had never been evaluated and treated before for ADHD. We further required that all of the study children with a confirmed ADHD diagnosis were received a care continuity at the ADHD centre within a 1-year period. The guideline for all clinicians at the ADHD centres is to use the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition [42] criteria for diagnosing ADHD. Moreover, to define an optimal, evidence-based, shared strategy for diagnostic evaluation, an ad hoc assessment working group was created, involving a child neuropsychiatrist and a psychologist from each participating ADHD center and a group of researchers of the registry coordinating center (IRCCS-Istituto di Ricerche Farmacologiche "Mario Negri"). 
More specifically, also according to the recommendations of the Italian guidelines [22,43], this strategy consisted of seven mandatory steps to be applied at the time of diagnostic evaluation: (1) a clinical anamnestic and psychiatric interview; (2) the neurological examination; (3) the evaluation of cognitive level by Wechsler Scales [44]; (4) the Schedule for Affective Disorders and Schizophrenia for School-Age Children (K-SADS) [45] for a complete psychopathology overview and comorbidity assessment; (5) the Child Behavior Checklist (CBCL) and/or the Conners' Parent Rating Scale-Revised (CPRS-R) rated by parents; (6) the Conners' Teacher Rating Scale-Revised (CTRS-R) rated by teachers; [46][47][48] and (7) the Clinical Global Impressions-Severity scale (CGI-S) [49] to quantify symptom severity. This diagnostic pathway was agreed on, approved, and shared by all participating ADHD centers. Once a patient receives a diagnosis of ADHD and a treatment prescription, the registry is designed to provide differently structured types of follow-up visits at periodic intervals: at 3 and 6 months after the diagnosis, and every 6 months afterward for all patients; while for those given methylphenidate at 1 week and 1 month after the diagnosis also (after only 1 month if they received atomoxetine or other psychotropic drugs). Moreover, patients that receive a methylphenidate prescription need to perform a visit called "dose-test" which is carried out in day hospital regimen before starting the drug treatment. Data analytic procedures Complete data of all eligible patients were extracted. Besides anamnestic and clinical information regarding age, gender, diagnosis and comorbidity, detailed information was available on a patient medical records basis for the following cost domains: • Diagnostic pathway: all services supplied to patients with a suspected ADHD diagnosis, whether or not confirmed, by an agreed and shared child and adolescent neuropsychiatrists' and psychologists' assess-ment. In addition, the working days elapsed from the request a diagnosis of ADHD were calculated. In line with the regional health service perspective, only direct healthcare costs were estimated. Unit costs related to diagnostic tests were derived from tariffs reimbursed by the Lombardy Region in 2014. Detailed information is presented in Additional file 1: Table S1. • Non-pharmacological therapies: Prescribed by physicians specialised in child and adolescent neuropsychiatry working in one of the 18 ADHD centres and provided by other therapists for example psychologists, also working within the ADHD centre. The direct health cost of non-pharmacological therapies was assessed by multiplying the estimated number of annual visits per patient by the unit costs derived from tariffs reimbursed by the Lombardy Region in 2014. Detailed information is presented in Additional file 1: Table S2. • Pharmacological treatments: Prescribed by medical doctors specialised in child and adolescent neuropsychiatry working in one of the 18 ADHD centres (only prescriptions filled by patients). Drug utilization data were also derived from the Regional ADHD registry [39,40]. Detailed information is presented in Additional file 1: Table S3. Data on the total annual consumption of ADHD medications that require a therapeutic plan prescription form were evaluated over 1 year after the date of the diagnosis. 
The trends in total consumption of ADHD drugs were also analysed based on changing patterns of the drugs' prescription practices, dosages, and formulations. The unit costs of prescribed drugs were assessed based on: (a) the price to the public of medicines fully reimbursed by the Italian NHS and supplied through retail pharmacies (class A); or (b) the sum of the ex-factory price and the 10.2% distribution margin for fully reimbursed drugs under direct distribution. Patient co-payment was not considered as it affected a minority (<10%) of patients under drug treatment. The cost per milligram was calculated on the basis of the prescribed formulation (immediate- or extended-release, tablet, capsule, drops), selecting the generic, when marketed. Annual drug consumption was estimated for each patient by multiplying the daily dosage prescribed at each visit by the number of days until the following visit. The day hospital (DH) admissions required for the methylphenidate dose-test were considered as pharmacological treatment-related expenditures; a DH unit cost of €232.00 was derived from DRG code 431 reimbursed by the Lombardy Region in 2014. Hospitalization (inpatient regimen) costs were not estimated, as no patient was hospitalized. All data were analysed with SAS Version 9.2 (SAS Institute, Inc., Cary, NC, USA). Descriptive statistics were computed for the entire study population and for subgroups. As the costs of diagnostic pathways (recommended, optional, and total), time to diagnosis, and treatment costs were not normally distributed, medians and interquartile ranges were analysed. The Student's t test was used to compare continuous variables (baseline clinical and anamnestic data), the χ2 test to compare categorical variables (baseline clinical and anamnestic data), and the Wilcoxon-Mann-Whitney (diagnosis-related costs, time to diagnosis) and Kruskal-Wallis (treatment- and diagnosis-related costs) tests to compare non-normally distributed variables between groups. A multivariate logistic regression analysis with stepwise selection was carried out to assess the socio-demographic (baseline personal and family data), clinical, and service-model characteristics (clinical and organizational determinants). Moreover, a multivariate linear regression analysis was performed to assess the drivers of diagnostic costs and time to diagnosis. Diagnosis related costs The total diagnostic cost per patient to complete the diagnostic evaluation amounted to €574.00, of which €510.00 was related to recommended procedures and tests and €105.60 to optional examinations (Fig. 1; Additional file 1: Table S4). The multivariate analysis highlighted the following cost drivers: ADHD centre, sender, and time to diagnosis (Table 2). Concomitant psychiatric disorders, as well as other clinical and anamnestic variables, did not affect diagnosis costs (Table 2). Statistically significant (Kruskal-Wallis, p < 0.001) inter-centre variability was related to the completion rate of the recommended set of assessments (Additional file 1: Table S5). The total cost of the diagnosis also varied in relation to the sender and was highest for patients referred from CANPS (€615.60). However, the relative increase was only 7.4% over the total median cost (Additional file 1: Table S6). 
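As a concrete illustration of the pharmacological cost calculation described in the Methods above, the Python sketch below multiplies the daily dose prescribed at each visit by the number of days until the following visit, sums the resulting milligrams over the year after diagnosis, and converts them into euros with a per-milligram price. The visit records and the price per milligram are invented for illustration only; the study derived the actual prices from the class A public price or from the ex-factory price plus the 10.2% distribution margin, and its analyses were run in SAS rather than Python.
from datetime import date
# Hypothetical visit records: (visit date, prescribed daily dose in mg); not real registry data.
visits = [(date(2014, 1, 10), 20.0), (date(2014, 2, 10), 30.0), (date(2014, 8, 10), 30.0)]
end_of_follow_up = date(2015, 1, 10)  # 1 year after the date of the diagnosis
# The daily dose prescribed at each visit applies until the following visit (or end of follow-up).
next_dates = [v[0] for v in visits[1:]] + [end_of_follow_up]
total_mg = sum(dose * (nxt - day).days for (day, dose), nxt in zip(visits, next_dates))
price_per_mg = 0.01  # illustrative EUR/mg, not an actual reimbursed tariff
annual_drug_cost = total_mg * price_per_mg
print(f"annual consumption: {total_mg:.0f} mg, cost: EUR {annual_drug_cost:.2f}")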
Overall, it took 119 days to complete the diagnostic pathway, with wide variability mainly due to the centres (range from 51 to 302 days; Kruskal-Wallis, p < 0.0001) and to the senders: the median time to diagnosis was markedly reduced when the patient was referred by CANPS (91 days) and increased when referred by GPs (162 days) or by other specialist neuropsychiatrists practicing in agreement with the NHS (169.5 days; Kruskal-Wallis, p < 0.0001; Additional file 1: Table S7). Whether or not the diagnosis of ADHD was confirmed had no statistically significant effect on the elapsed time: 123 days if positive compared to 108 days if negative (Wilcoxon-Mann-Whitney, p = 0.0871). However, the time to diagnosis was slightly greater for patients receiving all the recommended tests (122 working days) compared to those who did not complete the assessment (111.5 days; Wilcoxon-Mann-Whitney, p = 0.0086). Assuming that the time to diagnosis can be considered a measure of efficiency from the perspective of the regional health service (and of patients, too), Fig. 2 shows the wide variability among centres in the ratio between the costs reimbursed by the health service and the median time to diagnosis. This ratio also varies significantly in relation to the sending unit (Fig. 3). Treatment related costs The remaining 174 patients (14%) were not included because they were still being monitored (watchful waiting) at the time of data extraction. Detailed information is presented in Additional file 1. The median total treatment cost per patient, including the laboratory and instrumental assessments needed to begin drug therapy, was €830.00 (Additional file 1: Table S8), and was due mainly to the non-pharmacological therapy cost per patient. In addition to marked variability among centres (Kruskal-Wallis, p < 0.0001; Additional file 1: Table S9), non-pharmacological therapy was more expensive for patients concomitantly treated with drugs (€929) compared to those treated with psychological interventions alone (€590; Wilcoxon-Mann-Whitney, p = 0.0064). Pharmacological treatments were prescribed in 14 out of 18 centres. Methylphenidate was the most used drug, prescribed in 170 patients (85.4%), followed by atomoxetine (10.1%). The median drug cost for 1 year was €97.60, of which €65.36 was accounted for by stimulant treatment and €32.34 by non-stimulant treatment. Inter-centre variability (Fig. 4) was not statistically significant (Additional file 1: Table S10). A total of 331 adverse events associated with drug treatments were reported, of which 9 (3%), 99 (30%), and 222 (67%) were classified as severe, moderate, and mild, respectively. No action was required for 208 adverse events, while, for the remaining events, patients recovered upon drug discontinuation (n = 77, 23%) or dose adjustment (n = 46, 14%). No patient was treated or hospitalised because of adverse events, so no additional cost estimation was needed. Discussion This study presents the first, and most comprehensive, estimate to date of the costs, from the NHS perspective, associated with both the diagnostic assessment and a 12-month therapy in children and adolescents with ADHD in Italy. The overall diagnostic and treatment costs per patient amount to €574 and €830, respectively (median total: €1404). In our opinion, this is a reliable and representative estimate of ADHD health costs in Italy. 
Indeed, in our country, the management of ADHD patients was provided mainly by a network of specialized hubs on ADHD (Tier 3)-the Regional ADHD centreswho are responsible for following the most appropriate diagnostic procedures and treatment prescriptions according to the Italian guidelines [22,43]. This process, in the Lombardy Region, was strictly monitored by the official Regional ADHD registry [40,41] thus we can therefore expect that it accurately represents a cost estimate that is consistent with that of the management of ADHD in real clinical practice. The evaluation of ADHD costs presented in this article was calculated through a retrospective analysis of data inputted in this registry. Moreover, neuropsychiatrists working at the ADHD centres are also the only clinicians who, according to existing Italian legislation, can prescribe drug therapies for ADHD. This, in turn, gave us the possibility of calculating a reliable estimate also of the total treatment costs. These costs are consistent, in some cases, and not, in others, with those previously suggested in other countries [32,35,[50][51][52][53][54][55][56]. Such variability in ADHD costs reported may be explained by the different methodologies, especially diagnostic procedures, and criteria used in the studies, as well as by the different settings considered, and, for some authors, also the different cultural approaches [17]. In particular, psychopharmacological treatment in Italian children and adolescents is not the norm, and prescription rates for mental disorders are relatively low. A recent Italian study [40] shows that only 16% of ADHD patients are treated pharmacologically, compared with higher rates reported in other countries, suggesting that also for ADHD, the cultural education and disposition, and the professional attitude of the majority of the child psychiatrists of the Lombardy Region's mental health services, are more inclined toward behavioural treatments than the use of drugs. To date, a cost analysis on ADHD in Italy has not yet been performed and published. An abstract presented in a conference contribution shows similar costs compared to those estimated in the present study [25]. This study, however, was methodologically different and did not calculate the diagnostic versus the therapeutic costs separately, and this does not allow us to make a more critical comparison with our findings. Our cost analysis is relevant from the perspective of the Italian NHS not only for ADHD [27]. Indeed, the ADHD centre, the sender, and the time to diagnosis, but not the ADHD diagnosis itself, constitute cost drivers. We can thus expect these drivers to be common to other mental health disorders. Moreover, assuming that the time to diagnosis is a moderator measure of care efficiency, and considering the wide inter-centre variability in both the relationship between cost and time to diagnosis, and between cost and diagnosis completeness according to the National recommended guidelines, these data could serve as a measure for monitoring and reassessing the accreditation over time of the 18 regional centres as specialized hubs on ADHD. Interestingly, when community CANPS (Tier 2) was the sender, time to diagnosis markedly decreased (1 month), with a relative increase (7%) over the total cost. 
As such, the pathway of care recommended by the Italian National Institute of Health (Istituto Superiore di Sanità, ISS) [57] for a child with a suspected diagnosis of ADHD states that the paediatrician should refer the child to the CANPS, and that the CANPS, after a psychiatric screening assessment (if necessary), should refer the patient to the ADHD centre (Tier 3). Our findings confirm that this suggested model of transition of care is likely to be a positive, cost-effective pathway, given that it ensures a more prompt response to care needs in exchange for an acceptable increase in direct health costs. There is public concern that the more rapid efficacy, in symptomatic terms, of pharmacological therapy, combined with its lower cost compared to non-pharmacological interventions, could favour an increased use of drugs alone for ADHD management [58]. As previously reported [16,39,40], this study showed a significantly higher prescription of non-pharmacological treatments, indicating that this risk is not present in the Italian context. Indeed, the majority of the children with ADHD were not currently receiving medication. This is due, in part, to an Italian tradition that drug treatment should be reserved for those with more severe symptoms and impairments [59]. To some extent, the modest incremental cost found in the present study for the combination of drug and behavioural interventions compared with behaviour therapies alone, which is widely regarded as the more effective management strategy, should be weighed when choosing the best option for each patient. Moreover, it has also become apparent from our analysis that the cost of the non-pharmacological therapies is higher when these are combined with drug treatment. This finding is probably related to the fact that patients requiring a combined treatment more often present greater severity in terms of both symptomatology and functional impairment. We can thus expect that, for these patients, not only is a more intensive (in terms of frequency) psychoeducational approach needed, but there is also a need for this approach to be carried out in several different settings of the child's life [58]. Finally, the ADHD project of the Lombardy Region ensured that the diagnostic and therapeutic protocol followed by the ADHD centres, on which the cost analysis was based, was strictly monitored by the official registry, was established according to the main recommendations of the national guidelines, and is representative of the real clinical practice of an entire region. Indeed, compliance with the shared diagnostic and therapeutic evaluation defined by the project guidelines, estimated from the data recorded by all 18 ADHD centers, is very high and homogeneous across the region (total completeness: 93.6%; range: 81.7-99.1%). Similar conclusions cannot be assumed to hold when comparing ADHD health care across the Italian regions: differences in socioeconomic and service organization characteristics may explain part of the inter-regional variability, highlighting an important and broader issue that is not specific to ADHD management. The limited literature available from the Italian context is not sufficient to allow useful comparisons of regional differences in ADHD management. 
However, the Regional ADHD Registry, as the main tool for monitoring the ADHD project, was designed as a disease-oriented registry collecting information not only on ADHD patients treated with pharmacological therapy (as provided by the National Registry) but also on all patients who access ADHD centers for a diagnosis of suspected ADHD. These reasons strengthen the potential use of our findings as a proxy model for estimating the cost of implementing the guidelines in clinical practice, for the development of similar projects in other Italian regions or for other mental health disorders. Limitations A few study limitations should be mentioned. First, our estimate was based on tariffs reimbursed by the Regional Health Service, which reflect direct medical costs only and not all costs of the care provided. Furthermore, the estimates did not include other perspectives considered in cost studies, such as the societal and caregiver perspectives. Subjects with mental health problems, including ADHD patients, require support in several dimensions of life, not only from the healthcare system, e.g. social care, housing, and employment [27,60]. Service utilization, outpatient care, and medications, however, are described as the main components of the economic impact of a mental health disorder [3]. Second, although the Lombardy Region is the most populated region in Italy, representing about 17% of the national health care costs, all data originated from a single region of Italy, and this may affect the generalizability and comparability of the reported findings. However, studies evaluating the costs of mental health problems are not easily generalisable from one country to another because service systems, funding arrangements, and relative prices can vary considerably. Third, the clinical effect of the treatments was not explored. It was therefore not possible to perform a cost-effectiveness evaluation, although, to our knowledge, this is the first study that estimates the ADHD costs of both the diagnostic and therapeutic pathways, with previous studies typically focusing only on the economic evaluation of treatment. Implications for behavioral health This study gives a reliable indication of the economic effect of both diagnosis and treatment of ADHD in Italy from the NHS perspective. There is clearly a need, however, for a comprehensive picture of the total health and societal costs of ADHD. There is also an urgent need for studies on the cost-effectiveness of interventions, and for consequent support arrangements for specialised ADHD services that address the needs of patients and their families in a way that is geographically equitable, efficient, and consistent with the best available evidence on care management. The costs associated with mental disorders are difficult to estimate, but continuing efforts to do so increase the available evidence as well as the understanding of the struggles of the individuals and families who need appropriate and adequate care.
6,088.6
2017-04-28T00:00:00.000
[ "Economics", "Medicine" ]
MHD Inertial and Energy-containing Range Turbulence Anisotropy in the Young Solar Wind We study solar wind turbulence anisotropy in the inertial and energy-containing ranges in the inbound and outbound directions during encounters 1–9 by the Parker Solar Probe (PSP) for distances between ∼21 and 65 R ⊙. Using the Adhikari et al. approach, we derive theoretical equations to calculate the ratio between the 2D and slab fluctuating magnetic energy, fluctuating kinetic energy, and the outward/inward Elsässer energy in the inertial range. For this, in the energy-containing range, we assume a wavenumber k −1 power law. In the inertial range, for the magnetic field fluctuations and the outward/inward Elsässer energy, we consider that (i) both 2D and slab fluctuations follow a power law of k −5/3, and (ii) the 2D and slab fluctuations follow the power laws with k −5/3 and k −3/2, respectively. For the velocity fluctuations, we assume that both the 2D and slab components follow a k −3/2 power law. We compare the theoretical results of the variance anisotropy in the inertial range with the derived observational values measured by PSP, and find that the energy density of 2D fluctuations is larger than that of the slab fluctuations. The theoretical variance anisotropy in the inertial range relating to the k −5/3 and k −3/2 power laws between 2D and slab turbulence exhibits a smaller value in comparison to assuming the same power law k −5/3 between 2D and slab turbulence. Finally, the observed turbulence energy measured by PSP in the energy-containing range is found to be similar to the theoretical result of a nearly incompressible/slab turbulence description. Introduction Anisotropy is an important property of solar wind fluctuations, describing the changes in the properties of turbulence with respect to a direction relative to the magnetic field.Several characterizations, such as (i) spectral anisotropy (Horbury et al. 2008;Podesta 2009;Bruno & Telloni 2015); (ii) variance anisotropy (Bieber et al. 1996;Milano et al. 2004;Smith et al. 2006;Pine et al. 2020;Adhikari et al. 2022;Zhao et al. 2022), and (iii) correlation anisotropy (Dasso et al. 2005(Dasso et al. , 2008;;Weygand et al. 2009;Wang et al. 2019;Bandyopadhyay & McComas 2021) have been used to study the anisotropy in the solar wind fluctuations.Spectral anisotropy refers to anisotropy relative to the direction of the wavevector k.By contrast, variance anisotropy refers to the magnitude of fluctuations in directions parallel and perpendicular to the mean magnetic field.As a result, these two concepts are independent of each other (Matthaeus et al. 1996;Oughton et al. 2015). Using Parker Solar Probe (PSP) magnetometer data and following the approach used in Bieber et al. (1996), Bandyopadhyay & McComas (2021), and Zhao et al. (2022), we concluded that the energy density of 2D fluctuations relative to the slab fluctuations is smaller close to the Sun than at larger distances.Specifically, Zhao et al. (2022) found that over the distances of 27.95-64.5R e , the ratio between the amplitudes of 2D and slab magnetic energies is about 0.43 (or 30%:70%), and between 64.5 and 129 R e , it is about 1.63 (or 62%:38%).Their results differ from those observed at 1 au by Bieber et al. (1996), where the ratio between the amplitudes of 2D and slab turbulence is about 4:1.The result of Bieber et al. 
(1996) is similar to the theoretical prediction of a nearly incompressible magnetohydrodynamic (NI MHD) theory for a β p ∼ O(1) or =1 plasma beta regime (Zank & Matthaeus 1992a, 1992b, 1993;Zank et al. 2020). In a similar study, Adhikari et al. (2022) used measurements from both PSP and Solar Orbiter (SolO) together with the β p ∼ O(1) NI MHD turbulence model (Zank et al. 2017) to study the evolution of 2D and slab turbulence in the inner heliosphere.The geometry between the mean magnetic field and mean solar wind speed, characterized by the angle θ UB (θ UB is the angle between the mean solar wind speed and mean magnetic field, see Bieber et al. 1996;Zank et al. 2020), allows one to differentiate between slab and 2D fluctuations observed in the solar wind.By measuring turbulent fluctuations in parallel (0°< θ UB < 25°or 155°< θ UB < 180°) or orthogonal (65°< θ UB < 115°) geometry, Adhikari et al. (2022) identified slab or 2D turbulence.Their results suggested that PSP primarily measures slab-like turbulence near the perihelion of the first orbit.By contrast, SolO observes both 2D and slab turbulence more frequently, with 2D turbulence energy exceeding the slab turbulence energy.The results presented by Adhikari et al. (2022) correspond to the energy-containing range only.This manuscript investigates the evolution of anisotropic turbulence in the inertial and energy-containing ranges near the Sun in the super-Alfvénic solar wind flow in the range of ∼21-70 R e , from a region closer to the Sun than that studied previously by Bandyopadhyay & McComas (2021), Zhao et al. (2022), and Adhikari et al. (2022).Zank et al. (2017) developed the NI MHD turbulence transport model equations in the β p ∼ O(1) regime for calculating the radial evolution of 2D and slab turbulence components in the energy-containing range (see also Wang et al. 2022, for an NI MHD turbulence transport model formulation in the β p = 1 regime).Adhikari et al. (2017) derived a theoretical equation for the power anisotropy of magnetic field fluctuations in the inertial range as a function of energy-containing range fluctuating magnetic energy and the correlation length.They found that for heliocentric distances of 2-10 au, the ratio between 2D and slab magnetic fluctuations in the inertial range varies between 2.5 and 5, and then gradually approaches 1 with increasing heliocentric distance.In this study, we use the Adhikari et al. (2017) approach to calculate theoretically the ratio between the inertial range 2D and slab variances for magnetic field fluctuations, velocity fluctuations, and Elsässer energies.For this, we derive the equations for the magnetic field fluctuations, the outward and inward Elsässer energies using two approaches: one in which both 2D and slab turbulence components follow a power law of k −5/3 (where k is the magnitude of the wavenumber), and the other in which the 2D and slab components follow the power laws of k −5/3 and k −3/2 , respectively.Similarly, we also derive the corresponding equation for the velocity fluctuations, where 2D and slab components follow a power law of k −3/2 .As the theoretical 2D and slab turbulence energies and correlation lengths in the energy-containing range are required to calculate the inertial range turbulence anisotropy, we obtain them by numerically solving the solar wind (SW) + NI MHD turbulence transport model equations (Zank et al. 2017;Adhikari et al. 
2022). On the other hand, we derive the observed ratio between the 2D and slab turbulence energies in the inertial range directly from the PSP measurements, by exploiting the geometry between the background magnetic field and the solar wind speed (Bieber et al. 1996; Zank et al. 2020; Adhikari et al. 2022). This paper is organized as follows. In Section 2, we present the theory of MHD inertial range turbulence. Section 3 discusses the observed transverse turbulence energy versus the angle between the observed mean magnetic field and the observed mean solar wind speed. Section 4 discusses the comparison between the theoretical and observed results. Finally, we summarize our work in Section 5. MHD Inertial Range Turbulence Theory The correlation tensors for slab (P_ij^sl(k)) and 2D (P_ij^2D(k)) turbulence are given by Zank (2014). The functions g^sl(k_∥) and g^2D(k_⊥) depend on the parallel and perpendicular wavevectors only. Using Equations (1) and (2), and making the assumption that the 2D and slab magnetic fluctuations follow power laws of the form k^−1 in the energy-containing range and k^−5/3 in the inertial range, Adhikari et al. (2017) derived the equation for the ratio of the variances between the 2D and slab magnetic fluctuations in the inertial range, where ⟨B_2D^2⟩^ir,er and ⟨B_sl^2⟩^ir,er denote the variances of the 2D and slab magnetic fluctuations in the inertial/energy-containing range, λ_b^2D and λ_b^sl denote the 2D and slab correlation lengths in the energy-containing range, and k_inj ∼ 1.07 × 10^−9 km^−1 (Adhikari et al. 2017) is the injection wavenumber. We assume k = 2π/λ, where λ is the correlation length. Following the methodology of Adhikari et al. (2017) and assuming that the 2D and slab outward and inward Elsässer energies exhibit power laws of k^−1 and k^−5/3 in the energy-containing and inertial ranges, the ratio between the 2D and slab variances of the outward/inward Elsässer energies in the inertial range can be expressed in a similar form, where ⟨z_2D^±2⟩^ir,er and ⟨z_sl^±2⟩^ir,er denote the 2D and slab outward/inward Elsässer energies in the inertial/energy-containing range, and λ_±^2D and λ_±^sl denote the corresponding 2D and slab correlation lengths in the energy-containing range. We note that Equations (3)-(5) are derived assuming the same power-law form of k^−5/3 for the 2D and slab fluctuating magnetic energy and Elsässer energies. It has been found that the observed magnetic fluctuations (Chen et al. 2020) and Elsässer energies (Zank et al. 2022) exhibit a k^−3/2 power law near the Sun. If the 2D and slab magnetic field fluctuations, and the outward and inward Elsässer energies, follow power laws of k^−5/3 and k^−3/2, respectively, the ratios between the 2D and slab components in the inertial range take a form in which f_1 and f_2, the frequencies bounding the inertial range, appear explicitly; here ω (= 2πf) is the angular frequency, and the solar wind speed U is used to convert a wavenumber into a frequency, provided the solar wind flow is super-Alfvénic. We use f_1 = 1.7 × 10^−3 Hz and f_2 = 1.7 × 10^−2 Hz, corresponding to a 10 minute long interval data set with a resolution of 1 minute, and excluding kinetic effects. We note that Equations (6)-(8) may not be applicable in the sub-Alfvénic solar wind flow because one may need to use a modified Taylor hypothesis to convert a wavenumber into a frequency (see Zank et al. 2022). 
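As a minimal sketch of the dimensional argument described above, written here for the magnetic fluctuations and assuming a two-segment spectrum (k^−1 from k_inj up to the break k_b = 2π/λ_b, and k^−5/3 beyond it) that is continuous at the break, one obtains the following; the prefactors and the exact published form of Equation (3) in Adhikari et al. (2017) may differ.
% Sketch only: two-segment spectrum P(k) = C k^{-1} for k_inj <= k <= k_b, P(k) = C k_b^{2/3} k^{-5/3} for k >= k_b.
\langle B^2\rangle^{er} = \int_{k_{inj}}^{k_b} C\,k^{-1}\,dk = C \ln\frac{k_b}{k_{inj}}
\;\Rightarrow\; C = \frac{\langle B^2\rangle^{er}}{\ln(k_b/k_{inj})},
\qquad
\langle B^2\rangle^{ir} = \int_{k_1}^{k_2} C\,k_b^{2/3} k^{-5/3}\,dk
 = \frac{3}{2}\,C\,k_b^{2/3}\left(k_1^{-2/3}-k_2^{-2/3}\right),
% so that, if both components are integrated over the same inertial-range band [k_1, k_2], the band factor cancels:
\frac{\langle B_{2D}^2\rangle^{ir}}{\langle B_{sl}^2\rangle^{ir}}
 = \frac{\langle B_{2D}^2\rangle^{er}}{\langle B_{sl}^2\rangle^{er}}
   \,\frac{\ln\!\left(k_b^{sl}/k_{inj}\right)}{\ln\!\left(k_b^{2D}/k_{inj}\right)}
   \left(\frac{\lambda_b^{sl}}{\lambda_b^{2D}}\right)^{2/3},
\qquad k_b = \frac{2\pi}{\lambda_b}.
When the slab component instead follows k^−3/2 in the inertial range, the slab band integral becomes 2 C k_b^{1/2}(k_1^−1/2 − k_2^−1/2) and the band factors no longer cancel, which is one way to see why the frequencies f_1 and f_2 (through k = 2πf/U under Taylor's hypothesis) enter Equations (6)-(8) explicitly.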
To derive the ratio between the 2D and slab variances of solar wind velocity fluctuations in the inertial range, we assume that the 2D and slab velocity fluctuations exhibit a power law of k −3/2 in the inertial range.Note that the assumption of a k −3/2 power law is not related to that of magnetic field fluctuations or Elsässer energies.This is based on observational studies (e.g., Zhao et al. 2020;Kasper et al. 2021) that often find that solar wind velocity fluctuations follow a power law of k −3/2 .The expression for the ratio between the 2D and slab velocity fluctuating energies in the inertial range can be expressed as where á ñ u 2D 2 ir er and á ñ u sl 2 ir er denote the 2D and slab velocity fluctuations in the inertial/energy-containing range, and l u 2D and l u sl denote the 2D and slab correlation lengths of the velocity fluctuations in the energy-containing range.Equations (3)-( 9) contain the energy-containing range 2D and NI/slab turbulence energies and correlation lengths, which are obtained by numerically solving the SW + NI MHD turbulence transport model equations (Adhikari et al. 2022).In this study, Equations (3)-( 9) provide the theoretical result of the ratio between the 2D and slab turbulence energies in the inertial range, which are then compared against PSP measurements from encounters 1-9. Turbulence Energy versus θ UB In Figure 1, we show two different results obtained from a day-long data set at the same heliocentric distance of ∼0.18 au during the PSP E9 encounter on 2021 August 6 (in the inbound direction).The result shown in blue is calculated using 10 minute long intervals, which represents the inertial range.The result shown in magenta is calculated using 4 hr long intervals, representing the energy-containing range.In Figure 1(A), we plot a histogram of θ UB .The blue histogram ranges from 90°-70°, and exhibits a negative skewness of −1.47.Similarly, the magenta histogram ranges from 155°-175°, and shows a negative skewness of −0.78. Using a day-long data set from 2021 August 6, we first compute the transverse turbulence energies using 10 minute and 4 hr long intervals, thereby eliminating the compressible longitudinal (parallel to the mean magnetic field B) components (Belcher & Davis 1971;Adhikari et al. 2022).We then smooth the observed transverse components by binning the results with a bin width of 10°(Figures 1(B)-(G)).It is evident from Figure 1 that the transverse fluctuating magnetic energy á ñ B2 , transverse fluctuating kinetic energy á ñ û2 , and transverse Elsässer energies á ñ  z 2 in the inertial range (represented by blue curves/stars) decrease as the angle θ UB increases from θ UB = 97°to θ UB = 165°.Similarly, the á ñ B2 , á ñ û2 , and á ñ  z 2 in the energy-containing range (denoted by magenta curves/ stars) show a slight decrease as θ UB ranges from 160°-169°.Based on the assumption that turbulence measured in a highly oblique flow θ UB → [65°-115°] or in a highly field-aligned flow θ UB → [0°-25°] or [155°-180°] can be regarded as 2D or slab turbulence, respectively (Bieber et al. 1996;Zank et al. 2020;Adhikari et al. 
2022), Figure 1 shows that the inertial range turbulence consists of both 2D and slab turbulence, whereas the energy-containing range turbulence consists of slab turbulence only.However, PSP may also observe the energy-containing range of 2D turbulence near the Sun, although it is not very large (see Table 1).Note that the result shown in Figure 1 is based on the geometry between the mean solar wind speed and the mean magnetic field over 10 minute and 4 hr long intervals.During a 4 hr long interval, the average of the background magnetic field and solar wind speed effectively eliminates 2D turbulence.Whereas during a 10 minute long interval, most background fields are arranged radially.However, there are also cases where the background fields are highly oblique (see Figure 1(A) and Table 1). Based on the observed inertial range results (blue stars) shown in Figures 1(B)-(G), we derive the ratio between the 2D and slab turbulence energy.Here, the inertial range 2D component is determined from the transverse component satisfying the criterion 65°< θ UB < 115°, and taking averaged values.Similarly, the inertial range slab component is determined from the transverse component satisfying the criterion θ UB → [0°-25°] or [155°-180°], and taking the averaged values.In so doing, we assume that the solar wind plasma properties are similar within the ∼1 day period.The ratio between the inertial range 2D and slab turbulence components is as follows: (i) turbulent magnetic energy shows a ratio of 2.9 (Figure 1 (Zank & Matthaeus 1992b, 1993;Zank et al. 2017). The inertial range normalized cross helicity (blue stars/ curve) s ĉ , which measures the energy difference between the outward and inward Elsässer energies, shows a value of ∼0.7 at θ UB = 165°and ∼0.2 at θ UB = 97°(Figure 1(F)), meaning that the s ĉ is larger in the field-aligned flow than in the orthogonal flow.Similarly, the inertial range normalized residual energy (blue stars/curve) s D, which measures the energy difference between the fluctuating kinetic energy and magnetic energy density, is about −0.5 at θ UB = 165°and about −0.9 at θ UB = 97°(Figure 1(G)), indicating that in the highly oblique flow, the solar wind fluctuations are more dominated by fluctuating magnetic energy compared to field-aligned flows. Radial Evolution of Anisotropic Turbulence In this section, we discuss the radial evolution of 2D and slab turbulence energies theoretically and observationally as a function of heliocentric distance in the inbound and outbound directions.We select the PSP SWEAP/SPAN, and FIELDS data sets from encounters 1-9 (Bale et al. 2016;Kasper et al. 2016), and divide the data into inbound and outbound directions.We discard data with wind speeds larger than 450 km s −1 .In both directions, we first compute the transverse turbulence energies (á ñ B2 , á ñ û2 , and á ñ  z 2 ), and the angle θ UB in Averaging the background fields using 4 hr long intervals can reduce nonradial flow and make it more radially aligned.To determine the radial evolution of turbulence energy in the inbound and outbound directions, we calculate the mean value of the energy-containing range transverse turbulence energies over a bin width of 10.75 R e .The radial profiles of the observed energy-containing range á ñ B2 , á ñ û2 , á ñ  z 2 , and s ĉ in the inbound and outbound directions are shown in Table 2. 
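As a minimal sketch of the interval classification and ratio construction described in Section 3 (not the authors' actual processing pipeline), the Python fragment below computes θ_UB from the interval-averaged magnetic field and velocity, labels each interval as slab (θ_UB in [0°, 25°] or [155°, 180°]) or 2D (θ_UB in [65°, 115°]), accumulates the transverse fluctuation energy in each class, and returns the 2D-to-slab ratio. The input arrays stand in for 10 minute segments of the PSP FIELDS and SWEAP/SPAN time series.
import numpy as np
def theta_ub(B, U):
    """Angle in degrees between the mean magnetic field and the mean solar wind velocity."""
    b, u = B.mean(axis=0), U.mean(axis=0)
    c = np.dot(b, u) / (np.linalg.norm(b) * np.linalg.norm(u))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
def classify(theta):
    if 65.0 <= theta <= 115.0:
        return "2D"
    if theta <= 25.0 or theta >= 155.0:
        return "slab"
    return "unused"  # intermediate geometries are excluded
def ratio_2d_to_slab(intervals):
    """intervals: iterable of (B, U) pairs, each an array of shape (n_samples, 3)."""
    energy = {"2D": [], "slab": []}
    for B, U in intervals:
        label = classify(theta_ub(B, U))
        if label == "unused":
            continue
        b_hat = B.mean(axis=0) / np.linalg.norm(B.mean(axis=0))
        dB = B - B.mean(axis=0)
        dB_perp = dB - np.outer(dB @ b_hat, b_hat)  # keep only the transverse fluctuations
        energy[label].append(np.sum(dB_perp.var(axis=0)))
    return np.mean(energy["2D"]) / np.mean(energy["slab"])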
Applying the above previously discussed criteria for θ UB over a bin width of 10.75 R e , we derive the 2D and slab turbulence energies from the observed inertial range transverse turbulence energies, and the ratio between them.Table 3 shows the radial profiles of the ratio between the observed inertial range 2D and slab fluctuating magnetic energy, fluctuating kinetic energy, outward Elsässer energy, and inward Elsässer energy in the inbound and outbound directions.We use these observed results to validate the model results obtained from the theory discussed in Section 2. First, we solve the SW + NI MHD turbulence transport model equations (Adhikari et al. 2022) with specific boundary conditions (BCs) for both inbound and outbound directions, as shown in Table 4.The BCs for slab turbulence and solar wind parameters are obtained from the PSP measurements.For 2D turbulence, the BCs are derived by scaling the BCs for slab turbulence.In the inbound direction, the BCs for 2D turbulence energy and 2D correlation length are assumed to be 3.5 and 0.5 times those for slab turbulence.By contrast, in the outbound direction, the BCs for 2D turbulence energy and 2D correlation length are assumed to be 2.5 and 0.5 times those for slab turbulence.We use different multiplicative factors in the inbound and outbound directions because the variance anisotropy becomes different in these two directions.Table 5 shows the parameter values used in the SW + NI MHD turbulence model for the inbound and outbound directions.We compare the theoretical results of the energy-containing range turbulence energies with the observed results measured by PSP.Then, we calculate the theoretical results of the ratio between the inertial range 2D and slab turbulence components as a function of distance, and compare them with the PSP measurements. Inbound Direction We first discuss the evolution of the energy-containing range 2D and slab turbulence energies with distance.As shown in Figure 2, we compare the theoretical and observed slab fluctuating magnetic energy (Figure 2 In the figure, the solid orange curve corresponds to the theoretical NI/ slab turbulence prediction, the solid yellow curve to the theoretical 2D turbulence, and the blue stars/dotted curve identifies the observed slab turbulence quantities.Notably, both the observed slab magnetic energy and the theoretical NI/slab magnetic energy are relatively close with increasing distance.Nevertheless, the observed slab turbulent energy experiences a more rapid decrease compared to the theoretical NI/slab result.The theoretical and observed slab magnetic energy stays below the theoretical 2D magnetic energy.Regarding the turbulent kinetic energy, the theoretical NI/slab kinetic energy aligns reasonably closely with the observed kinetic energy.Furthermore, the theoretical 2D 〈u 2 〉 decreases more rapidly compared to the theoretical NI/slab 〈u 2 〉. As distance increases, both the theoretical 2D and NI/slab 〈z +2 〉 decrease.Similarly, the observed 〈z +2 〉 also decreases, and is relatively close to the theoretical NI/slab 〈z +2 〉.The observed 〈z −2 〉 decreases with increasing distance, closely aligning with the theoretical NI/slab 〈z −2 〉.The theoretical 〈z ∞−2 〉 shows a decrease with distance. 
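Returning to the boundary conditions described at the start of this section, the snippet below simply applies the multiplicative factors quoted above (2D energy of 3.5 or 2.5 times the slab value, and 2D correlation length of 0.5 times the slab value, for the inbound and outbound runs respectively) to placeholder slab boundary values; the actual boundary values adopted at 22.37 R_e and 21.55 R_e are those listed in Table 4.
# Placeholder slab boundary conditions; the real values are taken from PSP data (Table 4).
slab_bc = {"energy": 1.0e4, "corr_length": 1.0e5}  # illustrative units only
factors = {
    "inbound":  {"energy": 3.5, "corr_length": 0.5},
    "outbound": {"energy": 2.5, "corr_length": 0.5},
}
two_d_bc = {direction: {key: slab_bc[key] * f for key, f in scale.items()}
            for direction, scale in factors.items()}
print(two_d_bc)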
The theoretical NI/slab normalized cross helicity and normalized residual energy show a decreasing radial profile with increasing distance, which are similar to the corresponding observed values.Furthermore, the theoretical 2D normalized cross helicity and 2D normalized residual energy decrease more rapidly than their NI/slab counterparts. Figure 3 same k −5/3 power law between 2D and slab turbulence.By contrast, the solid yellow curve is obtained by supposing that 2D and slab turbulence follow the power laws of k −5/3 and k −3/2 , respectively.Clearly, the variance anisotropy of magnetic field fluctuations in the former case exceeds that in the latter case, due to the more rapid decrease of slab turbulence in the former case compared to the latter.results (blue stars/dotted curve) are in good agreement, with both ratios exceeding 1 between 22.37 and 70 R e .This indicates that turbulent magnetic energy is predominantly 2D rather than slab, an interpretation more closely aligned with the NI MHD theory (Zank & Matthaeus 1992b, 1993;Zank et al. 2017). For the fluctuating kinetic energy, the theoretical á ñ á ñ u u 2D 2 ir sl 2 ir decreases more rapidly within ∼33 R e , followed by a slower decrease (Figure 3(B)).The theoretical ratio closely resembles the observed á ñ á ñ u u with increasing distance.As before, the solid orange curve shows the theoretical result assuming the same power law of k −5/3 for 2D and slab turbulence.Evidently, the solid orange curve exhibits a larger value in comparison to the solid yellow curve, which is calculated by assuming k −5/3 and k −3/2 power laws for 2D and slab turbulence, respectively.Both theoretical (solid curves) and observed (blue star symbols/dotted curves) ratios are found to be larger than 1, implying that in the inertial range á ñ 2 10MIN ratios.In this case, the solid orange curve also shows a larger value compared to the solid yellow curve.Both theoretical (solid curves) and observed (blue star symbols/dotted curves) ratios are also found to be larger than 1, indicating that the 2D component dominates.Again, these results can be interpreted in terms of the NI MHD theory (Zank & Matthaeus 1992b, 1993;Zank et al. 2017).assuming k −5/3 and k −3/2 power laws for 2D and slab turbulence, respectively.We find again that the inertial range 2D fluctuating magnetic energy dominates the inertial range slab fluctuating magnetic energy.In Figure 5 , but the observed ratio is less than 1 within 41.65 R e , and then larger than 1.Here, the solid orange curve (for a k −5/3 power law for both 2D and slab turbulence) also shows a larger value compared to the solid yellow curve (assuming a k −5/3 power law for 2D turbulence and a k −3/2 power law for slab turbulence) as a function of distance.Similarly, in Figure 5 , with the theoretical and observed ratios being larger than 1.The orange curve exhibits a larger value than the yellow curve.Again, the interpretation of these results is consistent with the beta small or O(1) plasma beta NI MHD theory (Zank & Matthaeus 1992a, 1992b, 1993;Figure 2. 
Comparison between the theoretical and observed energy-containing range 2D and slab fluctuating magnetic energy (A), fluctuating kinetic energy (B), outward Elsässer energy (C), inward Elsässer energy (D), normalized cross helicity (E), and normalized residual energy (F) as a function of distance in the inbound direction during encounters 1-9 of PSP.Blue stars/dotted curves denote the observed slab turbulence energy calculated using 4 hr long intervals.Solid yellow curves represent the theoretical NI/slab results, and the solid orange curves the theoretical 2D results.Zank et al. 2017), in which the 2D component is the dominant component and the slab component is a minority component. Discussion and Conclusions The radial evolution of anisotropic turbulence as expressed in terms of magnetic field fluctuations, velocity fluctuations, and outward/inward Elsässer energy in the inertial range (and in the energy-containing range) was investigated for the inbound and outbound directions during encounters 1-9 of PSP between ∼21 and 65 R e , from a region closer to the Sun than those of previous studies (Bandyopadhyay & McComas 2021;Adhikari et al. 2022;Zhao et al. 2022).For this, we derived an equation describing the variance anisotropy for magnetic field fluctuations, velocity fluctuations, and inward and outward Elsässer energies in the inertial range.We used the Adhikari et al. (2017) approach, i.e., a dimensional analysis between the power spectra in the energy-containing and inertial ranges.In the energy-containing range, we assumed a k −1 power law for the magnetic field fluctuations, velocity fluctuations, and inward and outward Elsässer energies.In the inertial range, for the magnetic field fluctuations, and the outward and inward Elsässer energies, we used two approaches: one in which both 2D and slab turbulence exhibit a Kolmogorov power law of k −5/3 , and the other in which the 2D and slab turbulence have different power laws of k −5/3 and k −3/2 , respectively.For the velocity fluctuations, we assumed that both 2D and slab components follow a k −3/2 power law.As the inertial range variance anisotropy equations contain the energy-containing range 2D and slab turbulence energies and correlation lengths, we obtained them by numerically solving the SW + NI MHD turbulence transport model equations (Zank et al. 2017;Adhikari et al. 2022).We compared the theoretical result of the ratio between 2D and slab fluctuating magnetic energy, fluctuating kinetic energy, outward/ inward Elsässer energy in the inertial range with the corresponding observed results derived from the PSP measurements.We summarize our findings as follows. 
1.In the inbound direction during the PSP encounters 1-9 between ∼21 and 65 R e , the total number of θ UB values computed over 4 hr long intervals is 274.Among these, 21 residual energy becomes more negative in the vicinity of orthogonal flows unlike when in the vicinity of radially aligned flows.6.The theoretical and observed ratios of the inertial range 2D and slab fluctuating magnetic energies show good agreement with increasing distance in the inbound and outbound directions.These ratios in both directions exceed a value of 1, consistent with the expectations of =1 or O(1) plasma beta NI MHD theory (Zank & Matthaeus 1992a, 1992b, 1993).However, it is noteworthy that in the inbound direction, this ratio exhibits a larger value compared to the outbound direction, which may indicate that magnetic field fluctuations in the inbound direction were more anisotropic than those in the outbound direction.7. The theoretical ratio of 2D and slab fluctuating kinetic energy reasonably agrees with the observed ratio in both inbound and outbound directions.In the inbound direction, the ratio exhibits a larger value compared to the outbound direction, which may also indicate that velocity fluctuations were more anisotropic in the inbound direction.8.The theoretical and observed ratios of the inertial range 2D and slab energy in the outward/inward Elsässer energy are in agreement in both directions.In both cases, these ratios exceed 1, consistent with the NI MHD & Matthaeus 1992a& Matthaeus , 1992b& Matthaeus , 1993)).The ratio in the inbound direction exhibited a larger value compared to the outbound direction.9.In the energy-containing range, the theoretical results of the NI/slab fluctuating magnetic energy, fluctuating kinetic energy, outward/inward Elsässer energy, and normalized cross helicity and residual energy are relatively close to those measured by PSP. 10.The theoretical variance anisotropy in the inertial range relating to the k −5/3 and k −3/2 power laws between 2D and slab turbulence exhibits a smaller value compared to assuming the same power law k −5/3 between 2D and slab turbulence. We find that solar wind fluctuations in the young solar wind, i.e., near the Sun, are predominantly 2D and not slab, consistent with previous findings from theoretical and observational studies (Zank & Matthaeus 1992b, 1993;Bieber et al. 1996;Zank et al. 2017;Adhikari et al. 2022).This conclusion is contrary to the prior results of Bandyopadhyay & McComas (2021) and Zhao et al. (2022), indicating that a closer analysis is possibly using the mode-decomposition analysis developed recently by Zank et al. (2023), which is distinct from both these analyses and that presented here.An intriguing prospect is to extend this analysis from the sub-Alfvénic region to the super-Alfvénic region, including distances up to 1 au.This will generate valuable insights into solar wind dynamics in a broader spatial range. (B)), (ii) turbulent kinetic energy shows a ratio of 1.34 (Figure 1(C)), (iii) outward Elsässer energy shows a ratio of 3.18 (Figure 1(D)), and (iv) inward Elsässer energy shows a ratio of 10.92 (Figure 1(E)).In contrast to the results of Bandyopadhyay & McComas (2021) and Zhao et al. (2022), the 2D turbulence energy exceeds the slab turbulence energy in accordance with the NI MHD turbulence theory in the low and O(1) plasma beta regimes Figure 1 . 
Figure 1.Panel (A): histogram of the angle between the mean solar wind speed and the mean magnetic field (θ UB ) corresponding to 10 minute and 4 hr long intervals measured by PSP at 0.18 au during E9 on 2021 August 6 (in the inbound direction).Panels (B)-(G) represent the transverse fluctuating magnetic energy, fluctuating kinetic energy, outward Elsässer energy, inward Elsässer energy, normalized cross helicity, and normalized residual energy as a function of θ UB , respectively.These quantities are calculated by binning the results with a 10°bin width.The results in blue and magenta are calculated in 10 minute and a 4 hr long intervals, respectively. (A) shows a comparison between the theoretical and observed results of the ratio between inertial range 2D and slab fluctuating magnetic respectively, as a function of distance.In the figure, the solid orange curve is obtained assuming the + z 2D 2 is the dominant component, and á ñ + z sl 2 is the minority component.Likewise, Figure 3(D) compares the theoretical á Figures 2 Figures 2 and 4 illustrate the radial evolution of the energycontaining range 2D and slab fluctuating magnetic energy (Figure 4(A)), fluctuating kinetic energy (Figure 4(B)), outward Elsässer energy (Figure 4(C)), inward Elsässer energy (Figure 4(D)), normalized cross helicity (Figure 4(E)), and normalized residual energy (Figure 4(F)), but now for the outbound direction.The solid orange and yellow curves denote the theoretical slab and 2D turbulence energies, respectively, and the blue stars/dotted curves indicate the observed values.Clearly, Figure 4 shows that the theoretical NI/slab fluctuating magnetic energy, fluctuating kinetic energy, outward/inward Elsässer energy, normalized cross helicity, and normalized residual energy are in agreement with the corresponding observed values as a function of distance.The theoretical 2D fluctuating magnetic energy, and 2D outward/inward Elsässer energy exhibit larger values compared to their theoretical NI/ slab counterparts.However, the theoretical 2D fluctuating kinetic energy remains below the theoretical NI/slab 〈u 2 〉.Similar to the inbound direction, the theoretical 2D σ c and σ D in the outbound direction decrease more rapidly than the theoretical NI/slab σ c and σ D , respectively.In Figure 5(A), we compare the theoretical á ñ á ñ B B 2D 2 ir sl 2 ir and the observed á ñ á ñ B B 2D 2 10MIN sl 2 10MIN with increasing distance, where both theoretical (solid curves) and observed (blue stars/ dotted curves) results reasonably agree with each other, and exhibit values larger than 1.Similar to above, the theoretical result of á ñ á ñ B B 2D 2 ir sl 2 ir (solid yellow curve) assumes the same k −5/3 power law for 2D and slab turbulence is larger than the theoretical á ñ á ñ B B 2D 2 ir sl 2 ir (solid yellow curve) obtained by (solid yellow curve) is consistent with the observed á Figure 3 . Figure3.Comparison of the theoretical (solid curves) and observed ratio (blue stars/dotted curves) of the inertial range 2D and slab turbulence components as a function of distance in the inbound direction of PSP during encounters 1-9 of PSP.Panels (A)-(D) correspond to the fluctuating magnetic energy, fluctuating kinetic energy, outward Elsässer energy, and inward Elsässer energy.The solid orange curve assumes that the 2D and slab turbulence possess the same power-law form of k −5/3 .The solid yellow curve assumes that the 2D and slab turbulence exhibit a power law of k −5/3 and k −3/2 , respectively. Figure 4 . Figure 4. 
Radial evolution of 2D and slab turbulence as a function of distance in the outbound direction of PSP during encounters 1-9 of PSP.The format of the figure is similar to Figure 2. Table 1 Values of θ UB Computed during 4 hr and 10 minute Long Intervals in Both the Inbound and Outbound Directions from Encounters 1-9 of PSP between ∼21 and 65 R e the 4 hr and 10 minute long intervals.In Table 1, we show the θ UB values computed during 4 hr and 10 minute long intervals for the inbound and outbound directions in the super-Alfvénic solar wind flow between ∼21 and 80 R e .In the inbound and outbound directions, the total number of θ UB values calculated over 4 hr long intervals that fall within the [65°-115°] range are 21 and 15, respectively.Meanwhile, the number of θ UB values that fall within the [0°-25°] or [155°-180°] range are 160 and 101, respectively.As there are only a limited number (21 and 15) of θ UB values in the [65°-115°] range, these data points are excluded to prevent statistical inadequacies.Similarly, for 10 minute long intervals, the total number of θ UB values within the [65°-115°] range are 396 and 453 for the inbound and outbound directions, respectively, and those within the [0°-25°] or [155°-180°] range are 1660 and 1627, respectively.Notably, the number of θ UB values derived from 4 hr long intervals is lower than those from 10 minute long intervals. Table 2 Radial Profiles of the Observed Transverse Fluctuating Magnetic Energy, Transverse Fluctuating Kinetic Energy, Transverse Outward/Inward Elsässer Energy, and the Transverse Normalized Cross Helicity in the Inbound and Outbound Directions during Encounters 1-9 of PSP between ∼21 and 65 R e Table 3 Radial Profiles of the Ratio between the Observed Inertial Range 2D and Slab Turbulence Energies in the Inbound and Outbound Directions during Encounters 1-9 of PSP for Distances between ∼21 and 65 R e Table 4 Boundary Values of Solar Wind Parameters and Turbulence Quantities at 22.37 R e for the Inbound Direction and at 21.55 R e for the Outbound Direction Table 5 Values of the Parameters Used for the SW + NI MHD Turbulence Model in the Inbound and Outbound Directions
7,922.6
2024-04-01T00:00:00.000
[ "Physics", "Environmental Science" ]